| instance_id (stringlengths 13-37) | text (stringlengths 3.08k-667k) | repo (stringclasses, 35 values) | base_commit (stringlengths 40) | problem_statement (stringlengths 10-256k) | hints_text (stringlengths 0-908k) | created_at (stringlengths 20) | patch (stringlengths 18-101M) | test_patch (stringclasses, 1 value) | version (stringclasses, 1 value) | FAIL_TO_PASS (stringclasses, 1 value) | PASS_TO_PASS (stringclasses, 1 value) | environment_setup_commit (stringclasses, 1 value) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qiskit__qiskit-7187
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CalibrationBuilder scales superlinearly
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected enhancement?
For a random circuit of equal width and depth, the CalibrationBuilder pass scales like n_qubits^3, whereas one might expect n_qubits^2 scaling if it were proportional to the number of gates. This slowdown seems to be in retrieving the qubits associated with a dag node.
This test was run without actual calibrations so only part of the pass code is actually profiled.


This is qiskit-terra version 0.19.0.dev0+637acc0.
</issue>
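The cubic-versus-quadratic claim above can be illustrated with a minimal pure-Python sketch (not Qiskit code): a width-n, depth-n random circuit has on the order of n^2 gates, and an O(n) `list.index` search per gate argument then gives roughly n^3 work, whereas a precomputed dict makes each lookup O(1).

```python
import timeit


class Qubit:
    """Stand-in for a Qiskit qubit object (illustration only)."""


n = 200
qubits = [Qubit() for _ in range(n)]
# Roughly n*n two-qubit "gates", as in a width-n, depth-n random circuit.
gates = [(qubits[i % n], qubits[(i + 1) % n]) for i in range(n * n)]


def with_list_index():
    # O(n) scan of ``qubits`` for every gate argument.
    return [[qubits.index(q) for q in gate] for gate in gates]


def with_dict_map():
    # One O(n) pass up front, then O(1) dict lookups per gate argument.
    qubit_map = {q: i for i, q in enumerate(qubits)}
    return [[qubit_map[q] for q in gate] for gate in gates]


print("list.index:", timeit.timeit(with_list_index, number=1))
print("dict map:  ", timeit.timeit(with_dict_map, number=1))
```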
<code>
[start of README.md]
1 # Qiskit Terra
2 [](https://opensource.org/licenses/Apache-2.0)<!--- long-description-skip-begin -->[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=main)<!--- long-description-skip-end -->
3
4 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
5
6 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
7
8 ## Installation
9
10 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
11
12 ```bash
13 pip install qiskit
14 ```
15
16 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
17
18 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
19
20 ## Creating Your First Quantum Program in Qiskit Terra
21
22 Now that Qiskit is installed, it's time to begin working with Terra.
23
24 We are ready to try out a quantum circuit example, which is simulated locally using
25 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
26
27 ```
28 $ python
29 ```
30
31 ```python
32 >>> from qiskit import QuantumCircuit, transpile
33 >>> from qiskit.providers.basicaer import QasmSimulatorPy
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = QasmSimulatorPy()
39 >>> transpiled_qc = transpile(qc, backend_sim)
40 >>> result = backend_sim.run(transpiled_qc).result()
41 >>> print(result.get_counts(qc))
42 ```
43
44 In this case, the output will be:
45
46 ```python
47 {'00': 513, '11': 511}
48 ```
49
50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
51 run the same program on a real quantum computer via IBMQ.
52
53 ### Executing your code on a real quantum chip
54
55 You can also use Qiskit to execute your code on a
56 **real quantum chip**.
57 In order to do so, you need to configure Qiskit for using the credentials in
58 your IBM Q account:
59
60 #### Configure your IBMQ credentials
61
62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
63
64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
65
66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
67
68 ```python
69 >>> from qiskit import IBMQ
70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
71 ```
72
73 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
74 Once they are stored, at any point in the future you can load and use them
75 in your program simply via:
76
77 ```python
78 >>> from qiskit import IBMQ
79 >>> IBMQ.load_account()
80 ```
81
82 Those who do not want to save their credentials to disk should use instead:
83
84 ```python
85 >>> from qiskit import IBMQ
86 >>> IBMQ.enable_account('MY_API_TOKEN')
87 ```
88
89 and the token will only be active for the session. For examples using Terra with real
90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
91 the levels.
92
93 ## Contribution Guidelines
94
95 If you'd like to contribute to Qiskit Terra, please take a look at our
96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
97
98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack)
100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
101 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
102
103 ## Next Steps
104
105 Now you're set up and ready to check out some of the other examples from our
106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
107
108 ## Authors and Citation
109
110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
112
113 ## Changelog and Release Notes
114
115 The changelog for a particular release is dynamically generated and gets
116 written to the release page on Github for each release. For example, you can
117 find the page for the `0.9.0` release here:
118
119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
120
121 The changelog for the current release can be found in the releases tab:
122 [](https://github.com/Qiskit/qiskit-terra/releases)
123 The changelog provides a quick overview of notable changes for a given
124 release.
125
126 Additionally, as part of each release detailed release notes are written to
127 document in detail what has changed as part of a release. This includes any
128 documentation on potential breaking changes on upgrade and new features.
129 For example, you can find the release notes for the `0.9.0` release in the
130 Qiskit documentation here:
131
132 https://qiskit.org/documentation/release_notes.html#terra-0-9
133
134 ## License
135
136 [Apache License 2.0](LICENSE.txt)
137
[end of README.md]
[start of qiskit/transpiler/__init__.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """
14 =====================================
15 Transpiler (:mod:`qiskit.transpiler`)
16 =====================================
17
18 .. currentmodule:: qiskit.transpiler
19
20 Overview
21 ========
22 Transpilation is the process of rewriting a given input circuit to match
23 the topology of a specific quantum device, and/or to optimize the circuit
24 for execution on present day noisy quantum systems.
25
26 Most circuits must undergo a series of transformations that make them compatible with
27 a given target device, and optimize them to reduce the effects of noise on the
28 resulting outcomes. Rewriting quantum circuits to match hardware constraints and
29 optimizing for performance can be far from trivial. The flow of logic in the rewriting
30 tool chain need not be linear, and can often have iterative sub-loops, conditional
31 branches, and other complex behaviors. That being said, the basic building blocks
32 follow the structure given below:
33
34 .. image:: /source_images/transpiling_core_steps.png
35
36 .. raw:: html
37
38 <br>
39
40 Qiskit has four pre-built transpilation pipelines available here:
41 :mod:`qiskit.transpiler.preset_passmanagers`. Unless the reader is familiar with
42 quantum circuit optimization methods and their usage, it is best to use one of
43 these ready-made routines.
44
45
46 Supplementary Information
47 =========================
48
49 .. dropdown:: Basis gates
50 :animate: fade-in-slide-down
51
52 When writing a quantum circuit you are free to use any quantum gate (unitary operator) that
53 you like, along with a collection of non-gate operations such as qubit measurements and
54 reset operations. However, when running a circuit on a real quantum device one no longer
55 has this flexibility. Due to limitations in, for example, the physical interactions
56 between qubits, difficulty in implementing multi-qubit gates, control electronics etc,
57 a quantum computing device can only natively support a handful of quantum gates and non-gate
58 operations. In the present case of IBM Q devices, the native gate set can be found by querying
59 the devices themselves, and looking for the corresponding attribute in their configuration:
60
61 .. jupyter-execute::
62 :hide-code:
63 :hide-output:
64
65 from qiskit.test.mock import FakeVigo
66 backend = FakeVigo()
67
68 .. jupyter-execute::
69
70 backend.configuration().basis_gates
71
72
73 Every quantum circuit run on an IBM Q device must be expressed using only these basis gates.
74 For example, suppose one wants to run a simple phase estimation circuit:
75
76 .. jupyter-execute::
77
78 import numpy as np
79 from qiskit import QuantumCircuit
80 qc = QuantumCircuit(2, 1)
81
82 qc.h(0)
83 qc.x(1)
84 qc.cp(np.pi/4, 0, 1)
85 qc.h(0)
86 qc.measure([0], [0])
87 qc.draw(output='mpl')
88
89 We have :math:`H`, :math:`X`, and controlled-:math:`P` gates, all of which are
90 not in our device's basis gate set, and must be expanded. This expansion is taken
91 care of for us in the :func:`qiskit.execute` function. However, we can
92 decompose the circuit to show what it would look like in the native gate set of
93 the IBM Quantum devices:
94
95 .. jupyter-execute::
96
97 qc_basis = qc.decompose()
98 qc_basis.draw(output='mpl')
99
100
101 A few things to highlight. First, the circuit has gotten longer with respect to the
102 initial one. This can be verified by checking the depth of the circuits:
103
104 .. jupyter-execute::
105
106 print('Original depth:', qc.depth(), 'Decomposed Depth:', qc_basis.depth())
107
108 Second, although we had a single controlled gate, the fact that it was not in the basis
109 set means that, when expanded, it requires more than a single `cx` gate to implement.
110 All said, unrolling to the basis set of gates leads to an increase in the depth of a
111 quantum circuit and the number of gates.
112
113 It is important to highlight two special cases:
114
115 1. A SWAP gate is not a native gate on the IBM Q devices, and must be decomposed into
116 three CNOT gates:
117
118 .. jupyter-execute::
119
120 swap_circ = QuantumCircuit(2)
121 swap_circ.swap(0, 1)
122 swap_circ.decompose().draw(output='mpl')
123
124 As a product of three CNOT gates, SWAP gates are expensive operations to perform on a
125 noisy quantum device. However, such operations are usually necessary for embedding a
126 circuit into the limited entangling gate connectivities of actual devices. Thus,
127 minimizing the number of SWAP gates in a circuit is a primary goal in the
128 transpilation process.
129
130
131 2. A Toffoli, or controlled-controlled-not gate (`ccx`), is a three-qubit gate. Given
132 that our basis gate set includes only single- and two-qubit gates, it is obvious that
133 this gate must be decomposed. This decomposition is quite costly:
134
135 .. jupyter-execute::
136
137 ccx_circ = QuantumCircuit(3)
138 ccx_circ.ccx(0, 1, 2)
139 ccx_circ.decompose().draw(output='mpl')
140
141 For every Toffoli gate in a quantum circuit, the IBM Quantum hardware may execute up to
142 six CNOT gates, and a handful of single-qubit gates. From this example, it should be
143 clear that any algorithm that makes use of multiple Toffoli gates will end up as a
144 circuit with large depth and will therefore be appreciably affected by noise and gate
145 errors.
146
147
148 .. raw:: html
149
150 <br>
151
152 .. dropdown:: Initial layout
153 :animate: fade-in-slide-down
154
155 Quantum circuits are abstract entities whose qubits are "virtual" representations of actual
156 qubits used in computations. We need to be able to map these virtual qubits in a one-to-one
157 manner to the "physical" qubits in an actual quantum device.
158
159 .. image:: /source_images/mapping.png
160
161 .. raw:: html
162
163 <br><br>
164
165 By default, qiskit will do this mapping for you. The choice of mapping depends on the
166 properties of the circuit, the particular device you are targeting, and the optimization
167 level that is chosen. The basic mapping strategies are the following:
168
169 - **Trivial layout**: Map virtual qubits to the same numbered physical qubit on the device,
170 i.e. `[0,1,2,3,4]` -> `[0,1,2,3,4]` (default in `optimization_level=0` and
171 `optimization_level=1`).
172
173 - **Dense layout**: Find the sub-graph of the device with same number of qubits as the circuit
174 with the greatest connectivity (default in `optimization_level=2` and `optimization_level=3`).
175
176
177 The choice of initial layout is extremely important when:
178
179 1. Computing the number of SWAP operations needed to map the input circuit onto the device
180 topology.
181
182 2. Taking into account the noise properties of the device.
183
184
185 The choice of `initial_layout` can mean the difference between getting a result,
186 and getting nothing but noise.
187
188 Lets see what layouts are automatically picked at various optimization levels. The modified
189 circuits returned by :func:`qiskit.compiler.transpile` have this initial layout information
190 in them, and we can view this layout selection graphically using
191 :func:`qiskit.visualization.plot_circuit_layout`:
192
193 .. jupyter-execute::
194
195 from qiskit import QuantumCircuit, transpile
196 from qiskit.visualization import plot_circuit_layout
197 from qiskit.test.mock import FakeVigo
198 backend = FakeVigo()
199
200 ghz = QuantumCircuit(3, 3)
201 ghz.h(0)
202 ghz.cx(0,range(1,3))
203 ghz.barrier()
204 ghz.measure(range(3), range(3))
205 ghz.draw(output='mpl')
206
207
208 - **Layout Using Optimization Level 0**
209
210 .. jupyter-execute::
211
212 new_circ_lv0 = transpile(ghz, backend=backend, optimization_level=0)
213 plot_circuit_layout(new_circ_lv0, backend)
214
215
216 - **Layout Using Optimization Level 3**
217
218 .. jupyter-execute::
219
220 new_circ_lv3 = transpile(ghz, backend=backend, optimization_level=3)
221 plot_circuit_layout(new_circ_lv3, backend)
222
223
224 It is completely possible to specify your own initial layout. To do so we can
225 pass a list of integers to :func:`qiskit.compiler.transpile` via the `initial_layout`
226 keyword argument, where the index labels the virtual qubit in the circuit and the
227 corresponding value is the label for the physical qubit to map onto:
228
229 .. jupyter-execute::
230
231 # Virtual -> physical
232 # 0 -> 3
233 # 1 -> 4
234 # 2 -> 2
235
236 my_ghz = transpile(ghz, backend, initial_layout=[3, 4, 2])
237 plot_circuit_layout(my_ghz, backend)
238
239 .. raw:: html
240
241 <br>
242
243
244 .. dropdown:: Mapping circuits to hardware topology
245 :animate: fade-in-slide-down
246
247 In order to implement a CNOT gate between qubits in a quantum circuit that are not directly
248 connected on a quantum device one or more SWAP gates must be inserted into the circuit to
249 move the qubit states around until they are adjacent on the device gate map. Each SWAP
250 gate is decomposed into three CNOT gates on the IBM Quantum devices, and represents an
251 expensive and noisy operation to perform. Thus, finding the minimum number of SWAP gates
252 needed to map a circuit onto a given device, is an important step (if not the most important)
253 in the whole execution process.
254
255 However, as with many important things in life, finding the optimal SWAP mapping is hard.
256 In fact it is in a class of problems called NP-Hard, and is thus prohibitively expensive
257 to compute for all but the smallest quantum devices and input circuits. To get around this,
258 by default Qiskit uses a stochastic heuristic algorithm called
259 :class:`Qiskit.transpiler.passes.StochasticSwap` to compute a good, but not necessarily minimal
260 SWAP count. The use of a stochastic method means the circuits generated by
261 :func:`Qiskit.compiler.transpile` (or :func:`Qiskit.execute` that calls `transpile` internally)
262 are not guaranteed to be the same over repeated runs. Indeed, running the same circuit
263 repeatedly will in general result in a distribution of circuit depths and gate counts at the
264 output.
265
266 In order to highlight this, we run a GHZ circuit 100 times, using a "bad" (disconnected)
267 `initial_layout`:
268
269 .. jupyter-execute::
270
271 import matplotlib.pyplot as plt
272 from qiskit import QuantumCircuit, transpile
273 from qiskit.test.mock import FakeBoeblingen
274 backend = FakeBoeblingen()
275
276 ghz = QuantumCircuit(5)
277 ghz.h(0)
278 ghz.cx(0,range(1,5))
279 ghz.draw(output='mpl')
280
281
282 .. jupyter-execute::
283
284 depths = []
285 for _ in range(100):
286 depths.append(transpile(ghz,
287 backend,
288 initial_layout=[7, 0, 4, 15, 19],
289 ).depth())
290
291 plt.figure(figsize=(8, 6))
292 plt.hist(depths, bins=list(range(14,36)), align='left', color='#AC557C')
293 plt.xlabel('Depth', fontsize=14)
294 plt.ylabel('Counts', fontsize=14);
295
296
297 This distribution is quite wide, signaling the difficulty the SWAP mapper is having
298 in computing the best mapping. Most circuits will have a distribution of depths,
299 perhaps not as wide as this one, due to the stochastic nature of the default SWAP
300 mapper. Of course, we want the best circuit we can get, especially in cases where
301 the depth is critical to success or failure. In cases like this, it is best to
302 :func:`transpile` a circuit several times, e.g. 10, and take the one with the
303 lowest depth. The :func:`transpile` function will automatically run in parallel
304 mode, making this procedure relatively speedy in most cases.
305
306 .. raw:: html
307
308 <br>
309
310
311 .. dropdown:: Gate optimization
312 :animate: fade-in-slide-down
313
314 Decomposing quantum circuits into the basis gate set of the IBM Quantum devices,
315 and the addition of SWAP gates needed to match hardware topology, conspire to
316 increase the depth and gate count of quantum circuits. Fortunately many routines
317 for optimizing circuits by combining or eliminating gates exist. In some cases
318 these methods are so effective the output circuits have lower depth than the inputs.
319 In other cases, not much can be done, and the computation may be difficult to
320 perform on noisy devices. Different gate optimizations are turned on with
321 different `optimization_level` values. Below we show the benefits gained from
322 setting the optimization level higher:
323
324 .. important::
325
326 The output from :func:`transpile` varies due to the stochastic swap mapper.
327 So the numbers below will likely change each time you run the code.
328
329
330 .. jupyter-execute::
331
332 import matplotlib.pyplot as plt
333 from qiskit import QuantumCircuit, transpile
334 from qiskit.test.mock import FakeBoeblingen
335 backend = FakeBoeblingen()
336
337 ghz = QuantumCircuit(5)
338 ghz.h(0)
339 ghz.cx(0,range(1,5))
340 ghz.draw(output='mpl')
341
342
343 .. jupyter-execute::
344
345 for kk in range(4):
346 circ = transpile(ghz, backend, optimization_level=kk)
347 print('Optimization Level {}'.format(kk))
348 print('Depth:', circ.depth())
349 print('Gate counts:', circ.count_ops())
350 print()
351
352
353 .. raw:: html
354
355 <br>
356
357
358 Transpiler API
359 ==============
360
361 Pass Manager Construction
362 -------------------------
363
364 .. autosummary::
365 :toctree: ../stubs/
366
367 PassManager
368 PassManagerConfig
369 PropertySet
370 FlowController
371
372 Layout and Topology
373 -------------------
374
375 .. autosummary::
376 :toctree: ../stubs/
377
378 Layout
379 CouplingMap
380
381 Scheduling
382 ----------
383
384 .. autosummary::
385 :toctree: ../stubs/
386
387 InstructionDurations
388
389 Fenced Objects
390 --------------
391
392 .. autosummary::
393 :toctree: ../stubs/
394
395 FencedDAGCircuit
396 FencedPropertySet
397
398 Abstract Passes
399 ---------------
400
401 .. autosummary::
402 :toctree: ../stubs/
403
404 TransformationPass
405 AnalysisPass
406
407 Exceptions
408 ----------
409
410 .. autosummary::
411 :toctree: ../stubs/
412
413 TranspilerError
414 TranspilerAccessError
415 """
416
417 from .runningpassmanager import FlowController
418 from .passmanager import PassManager
419 from .passmanager_config import PassManagerConfig
420 from .propertyset import PropertySet
421 from .exceptions import TranspilerError, TranspilerAccessError
422 from .fencedobjs import FencedDAGCircuit, FencedPropertySet
423 from .basepasses import AnalysisPass, TransformationPass
424 from .coupling import CouplingMap
425 from .layout import Layout
426 from .instruction_durations import InstructionDurations
427
[end of qiskit/transpiler/__init__.py]
[start of qiskit/visualization/circuit_visualization.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13
14 """
15 Module for the primary interface to the circuit drawers.
16
17 This module contains the end user facing API for drawing quantum circuits.
18 There are 3 available drawer backends available:
19
20 0. ASCII art
21 1. LaTeX
22 2. Matplotlib
23
24 This provides a single function entry point to drawing a circuit object with
25 any of the backends.
26 """
27
28 import logging
29 import os
30 import subprocess
31 import tempfile
32
33 try:
34 from PIL import Image
35
36 HAS_PIL = True
37 except ImportError:
38 HAS_PIL = False
39
40 from qiskit import user_config
41 from qiskit.exceptions import MissingOptionalLibraryError
42 from qiskit.visualization.exceptions import VisualizationError
43 from qiskit.visualization import latex as _latex
44 from qiskit.visualization import text as _text
45 from qiskit.visualization import utils
46 from qiskit.visualization import matplotlib as _matplotlib
47
48
49 logger = logging.getLogger(__name__)
50
51
52 class _HasPdfLatexWrapper:
53 """Wrapper to lazily detect presence of the ``pdflatex`` command."""
54
55 def __init__(self):
56 self.has_pdflatex = None
57
58 def __bool__(self):
59 if self.has_pdflatex is None:
60 try:
61 subprocess.run(
62 ["pdflatex", "-version"],
63 check=True,
64 stdout=subprocess.DEVNULL,
65 stderr=subprocess.DEVNULL,
66 )
67 self.has_pdflatex = True
68 except (OSError, subprocess.SubprocessError):
69 self.has_pdflatex = False
70 return self.has_pdflatex
71
72
73 class _HasPdfToCairoWrapper:
74 """Lazily detect the presence of the ``pdftocairo`` command."""
75
76 def __init__(self):
77 self.has_pdftocairo = None
78
79 def __bool__(self):
80 if self.has_pdftocairo is None:
81 try:
82 subprocess.run(
83 ["pdftocairo", "-v"],
84 check=True,
85 stdout=subprocess.DEVNULL,
86 stderr=subprocess.DEVNULL,
87 )
88 self.has_pdftocairo = True
89 except (OSError, subprocess.SubprocessError):
90 self.has_pdftocairo = False
91 return self.has_pdftocairo
92
93
94 HAS_PDFLATEX = _HasPdfLatexWrapper()
95 HAS_PDFTOCAIRO = _HasPdfToCairoWrapper()
96
97
98 def circuit_drawer(
99 circuit,
100 scale=None,
101 filename=None,
102 style=None,
103 output=None,
104 interactive=False,
105 plot_barriers=True,
106 reverse_bits=False,
107 justify=None,
108 vertical_compression="medium",
109 idle_wires=True,
110 with_layout=True,
111 fold=None,
112 ax=None,
113 initial_state=False,
114 cregbundle=True,
115 ):
116 """Draw the quantum circuit. Use the output parameter to choose the drawing format:
117
118 **text**: ASCII art TextDrawing that can be printed in the console.
119
120 **matplotlib**: images with color rendered purely in Python.
121
122 **latex**: high-quality images compiled via latex.
123
124 **latex_source**: raw uncompiled latex output.
125
126 Args:
127 circuit (QuantumCircuit): the quantum circuit to draw
128 scale (float): scale of image to draw (shrink if < 1.0). Only used by
129 the `mpl`, `latex` and `latex_source` outputs. Defaults to 1.0.
130 filename (str): file path to save image to. Defaults to None.
131 style (dict or str): dictionary of style or file name of style json file.
132 This option is only used by the `mpl` or `latex` output type.
133 If `style` is a str, it is used as the path to a json file
134 which contains a style dict. The file will be opened, parsed, and
135 then any style elements in the dict will replace the default values
136 in the input dict. A file to be loaded must end in ``.json``, but
137 the name entered here can omit ``.json``. For example,
138 ``style='iqx.json'`` or ``style='iqx'``.
139 If `style` is a dict and the ``'name'`` key is set, that name
140 will be used to load a json file, followed by loading the other
141 items in the style dict. For example, ``style={'name': 'iqx'}``.
142 If `style` is not a str and `name` is not a key in the style dict,
143 then the default value from the user config file (usually
144 ``~/.qiskit/settings.conf``) will be used, for example,
145 ``circuit_mpl_style = iqx``.
146 If none of these are set, the `default` style will be used.
147 The search path for style json files can be specified in the user
148 config, for example,
149 ``circuit_mpl_style_path = /home/user/styles:/home/user``.
150 See: :class:`~qiskit.visualization.qcstyle.DefaultStyle` for more
151 information on the contents.
152 output (str): select the output method to use for drawing the circuit.
153 Valid choices are ``text``, ``mpl``, ``latex``, ``latex_source``.
154 By default the `text` drawer is used unless the user config file
155 (usually ``~/.qiskit/settings.conf``) has an alternative backend set
156 as the default. For example, ``circuit_drawer = latex``. If the output
157 kwarg is set, that backend will always be used over the default in
158 the user config file.
159 interactive (bool): when set to true, show the circuit in a new window
160 (for `mpl` this depends on the matplotlib backend being used
161 supporting this). Note when used with either the `text` or the
162 `latex_source` output type this has no effect and will be silently
163 ignored. Defaults to False.
164 reverse_bits (bool): when set to True, reverse the bit order inside
165 registers for the output visualization. Defaults to False.
166 plot_barriers (bool): enable/disable drawing barriers in the output
167 circuit. Defaults to True.
168 justify (string): options are ``left``, ``right`` or ``none``. If
169 anything else is supplied, it defaults to left justified. It refers
170 to where gates should be placed in the output circuit if there is
171 an option. ``none`` results in each gate being placed in its own
172 column.
173 vertical_compression (string): ``high``, ``medium`` or ``low``. It
174 merges the lines generated by the `text` output so the drawing
175 will take less vertical room. Default is ``medium``. Only used by
176 the `text` output, will be silently ignored otherwise.
177 idle_wires (bool): include idle wires (wires with no circuit elements)
178 in output visualization. Default is True.
179 with_layout (bool): include layout information, with labels on the
180 physical layout. Default is True.
181 fold (int): sets pagination. It can be disabled using -1. In `text`,
182 sets the length of the lines. This is useful when the drawing does
183 not fit in the console. If None (default), it will try to guess the
184 console width using ``shutil.get_terminal_size()``. However, if
185 running in jupyter, the default line length is set to 80 characters.
186 In `mpl`, it is the number of (visual) layers before folding.
187 Default is 25.
188 ax (matplotlib.axes.Axes): Only used by the `mpl` backend. An optional
189 Axes object to be used for the visualization output. If none is
190 specified, a new matplotlib Figure will be created and used.
191 Additionally, if specified there will be no returned Figure since
192 it is redundant.
193 initial_state (bool): optional. Adds ``|0>`` in the beginning of the wire.
194 Default is False.
195 cregbundle (bool): optional. If set True, bundle classical registers.
196 Default is True.
197
198 Returns:
199 :class:`TextDrawing` or :class:`matplotlib.figure` or :class:`PIL.Image` or
200 :class:`str`:
201
202 * `TextDrawing` (output='text')
203 A drawing that can be printed as ascii art.
204 * `matplotlib.figure.Figure` (output='mpl')
205 A matplotlib figure object for the circuit diagram.
206 * `PIL.Image` (output='latex')
207 An in-memory representation of the image of the circuit diagram.
208 * `str` (output='latex_source')
209 The LaTeX source code for visualizing the circuit diagram.
210
211 Raises:
212 VisualizationError: when an invalid output method is selected
213 MissingOptionalLibraryError: when the output methods requires non-installed libraries.
214
215 Example:
216 .. jupyter-execute::
217
218 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
219 from qiskit.tools.visualization import circuit_drawer
220 q = QuantumRegister(1)
221 c = ClassicalRegister(1)
222 qc = QuantumCircuit(q, c)
223 qc.h(q)
224 qc.measure(q, c)
225 circuit_drawer(qc, output='mpl', style={'backgroundcolor': '#EEEEEE'})
226 """
227 image = None
228 config = user_config.get_config()
229 # Get default from config file else use text
230 default_output = "text"
231 if config:
232 default_output = config.get("circuit_drawer", "text")
233 if default_output == "auto":
234 if _matplotlib.HAS_MATPLOTLIB:
235 default_output = "mpl"
236 else:
237 default_output = "text"
238 if output is None:
239 output = default_output
240
241 if output == "text":
242 return _text_circuit_drawer(
243 circuit,
244 filename=filename,
245 reverse_bits=reverse_bits,
246 plot_barriers=plot_barriers,
247 justify=justify,
248 vertical_compression=vertical_compression,
249 idle_wires=idle_wires,
250 with_layout=with_layout,
251 fold=fold,
252 initial_state=initial_state,
253 cregbundle=cregbundle,
254 )
255 elif output == "latex":
256 image = _latex_circuit_drawer(
257 circuit,
258 filename=filename,
259 scale=scale,
260 style=style,
261 plot_barriers=plot_barriers,
262 reverse_bits=reverse_bits,
263 justify=justify,
264 idle_wires=idle_wires,
265 with_layout=with_layout,
266 initial_state=initial_state,
267 cregbundle=cregbundle,
268 )
269 elif output == "latex_source":
270 return _generate_latex_source(
271 circuit,
272 filename=filename,
273 scale=scale,
274 style=style,
275 plot_barriers=plot_barriers,
276 reverse_bits=reverse_bits,
277 justify=justify,
278 idle_wires=idle_wires,
279 with_layout=with_layout,
280 initial_state=initial_state,
281 cregbundle=cregbundle,
282 )
283 elif output == "mpl":
284 image = _matplotlib_circuit_drawer(
285 circuit,
286 scale=scale,
287 filename=filename,
288 style=style,
289 plot_barriers=plot_barriers,
290 reverse_bits=reverse_bits,
291 justify=justify,
292 idle_wires=idle_wires,
293 with_layout=with_layout,
294 fold=fold,
295 ax=ax,
296 initial_state=initial_state,
297 cregbundle=cregbundle,
298 )
299 else:
300 raise VisualizationError(
301 "Invalid output type %s selected. The only valid choices "
302 "are text, latex, latex_source, and mpl" % output
303 )
304 if image and interactive:
305 image.show()
306 return image
307
308
309 # -----------------------------------------------------------------------------
310 # _text_circuit_drawer
311 # -----------------------------------------------------------------------------
312
313
314 def _text_circuit_drawer(
315 circuit,
316 filename=None,
317 reverse_bits=False,
318 plot_barriers=True,
319 justify=None,
320 vertical_compression="high",
321 idle_wires=True,
322 with_layout=True,
323 fold=None,
324 initial_state=True,
325 cregbundle=False,
326 encoding=None,
327 ):
328 """Draws a circuit using ascii art.
329
330 Args:
331 circuit (QuantumCircuit): Input circuit
332 filename (str): Optional filename to write the result
333 reverse_bits (bool): Rearrange the bits in reverse order.
334 plot_barriers (bool): Draws the barriers when they are there.
335 justify (str) : `left`, `right` or `none`. Defaults to `left`. Says how
336 the circuit should be justified.
337 vertical_compression (string): `high`, `medium`, or `low`. It merges the
338 lines so the drawing will take less vertical room. Default is `high`.
339 idle_wires (bool): Include idle wires. Default is True.
340 with_layout (bool): Include layout information with labels on the physical
341 layout. Default: True
342 fold (int): Optional. Breaks the circuit drawing to this length. This
343 is useful when the drawing does not fit in the console. If
344 None (default), it will try to guess the console width using
345 `shutil.get_terminal_size()`. If you don't want pagination
346 at all, set `fold=-1`.
347 initial_state (bool): Optional. Adds |0> in the beginning of the line.
348 Default: `False`.
349 cregbundle (bool): Optional. If set True, bundle classical registers.
350 Default: ``True``.
351 encoding (str): Optional. Sets the encoding preference of the output.
352 Default: ``sys.stdout.encoding``.
353
354 Returns:
355 TextDrawing: An instance that, when printed, draws the circuit in ascii art.
356
357 Raises:
358 VisualizationError: When the filename extension is not .txt.
359 """
360 qubits, clbits, nodes = utils._get_layered_instructions(
361 circuit, reverse_bits=reverse_bits, justify=justify, idle_wires=idle_wires
362 )
363
364 if with_layout:
365 layout = circuit._layout
366 else:
367 layout = None
368 global_phase = circuit.global_phase if hasattr(circuit, "global_phase") else None
369 text_drawing = _text.TextDrawing(
370 qubits,
371 clbits,
372 nodes,
373 reverse_bits=reverse_bits,
374 layout=layout,
375 initial_state=initial_state,
376 cregbundle=cregbundle,
377 global_phase=global_phase,
378 encoding=encoding,
379 qregs=circuit.qregs,
380 cregs=circuit.cregs,
381 )
382 text_drawing.plotbarriers = plot_barriers
383 text_drawing.line_length = fold
384 text_drawing.vertical_compression = vertical_compression
385
386 if filename:
387 if not filename.endswith(".txt"):
388 raise VisualizationError("ERROR: filename parameter does not use .txt extension.")
389 text_drawing.dump(filename, encoding=encoding)
390 return text_drawing
391
392
393 # -----------------------------------------------------------------------------
394 # latex_circuit_drawer
395 # -----------------------------------------------------------------------------
396
397
398 def _latex_circuit_drawer(
399 circuit,
400 scale=0.7,
401 style=None,
402 filename=None,
403 plot_barriers=True,
404 reverse_bits=False,
405 justify=None,
406 idle_wires=True,
407 with_layout=True,
408 initial_state=False,
409 cregbundle=False,
410 ):
411 """Draw a quantum circuit based on latex (Qcircuit package)
412
413 Requires version >=2.6.0 of the qcircuit LaTeX package.
414
415 Args:
416 circuit (QuantumCircuit): a quantum circuit
417 scale (float): scaling factor
418 style (dict or str): dictionary of style or file name of style file
419 filename (str): file path to save image to
420 reverse_bits (bool): When set to True reverse the bit order inside
421 registers for the output visualization.
422 plot_barriers (bool): Enable/disable drawing barriers in the output
423 circuit. Defaults to True.
424 justify (str) : `left`, `right` or `none`. Defaults to `left`. Says how
425 the circuit should be justified.
426 idle_wires (bool): Include idle wires. Default is True.
427 with_layout (bool): Include layout information, with labels on the physical
428 layout. Default: True
429 initial_state (bool): Optional. Adds |0> in the beginning of the line.
430 Default: `False`.
431 cregbundle (bool): Optional. If set True, bundle classical registers.
432 Default: ``False``.
433
434 Returns:
435 PIL.Image: an in-memory representation of the circuit diagram
436
437 Raises:
438 MissingOptionalLibraryError: if pillow, pdflatex, or poppler are not installed
439 VisualizationError: if one of the conversion utilities failed for some internal or
440 file-access reason.
441 """
442 tmpfilename = "circuit"
443 with tempfile.TemporaryDirectory() as tmpdirname:
444 tmppath = os.path.join(tmpdirname, tmpfilename + ".tex")
445 _generate_latex_source(
446 circuit,
447 filename=tmppath,
448 scale=scale,
449 style=style,
450 plot_barriers=plot_barriers,
451 reverse_bits=reverse_bits,
452 justify=justify,
453 idle_wires=idle_wires,
454 with_layout=with_layout,
455 initial_state=initial_state,
456 cregbundle=cregbundle,
457 )
458 if not HAS_PDFLATEX:
459 raise MissingOptionalLibraryError(
460 libname="pdflatex",
461 name="LaTeX circuit drawing",
462 msg="You will likely need to install a full LaTeX distribution for your system",
463 )
464 if not HAS_PDFTOCAIRO:
465 raise MissingOptionalLibraryError(
466 libname="pdftocairo",
467 name="LaTeX circuit drawing",
468 msg="This is part of the 'poppler' set of PDF utilities",
469 )
470 if not HAS_PIL:
471 raise MissingOptionalLibraryError(
472 libname="pillow",
473 name="LaTeX circuit drawing",
474 pip_install="pip install pillow",
475 )
476 try:
477 subprocess.run(
478 [
479 "pdflatex",
480 "-halt-on-error",
481 f"-output-directory={tmpdirname}",
482 f"{tmpfilename + '.tex'}",
483 ],
484 stdout=subprocess.PIPE,
485 stderr=subprocess.DEVNULL,
486 check=True,
487 )
488 except OSError as exc:
489 # OSError should generally not occur, because it's usually only triggered if `pdflatex`
490 # doesn't exist as a command, but we've already checked that.
491 raise VisualizationError("`pdflatex` command could not be run.") from exc
492 except subprocess.CalledProcessError as exc:
493 with open("latex_error.log", "wb") as error_file:
494 error_file.write(exc.stdout)
495 logger.warning(
496 "Unable to compile LaTeX. Perhaps you are missing the `qcircuit` package."
497 " The output from the `pdflatex` command is in `latex_error.log`."
498 )
499 raise VisualizationError(
500 "`pdflatex` call did not succeed: see `latex_error.log`."
501 ) from exc
502 base = os.path.join(tmpdirname, tmpfilename)
503 try:
504 subprocess.run(
505 ["pdftocairo", "-singlefile", "-png", "-q", base + ".pdf", base],
506 check=True,
507 )
508 except (OSError, subprocess.CalledProcessError) as exc:
509 message = "`pdftocairo` failed to produce an image."
510 logger.warning(message)
511 raise VisualizationError(message) from exc
512 image = Image.open(base + ".png")
513 image = utils._trim(image)
514 if filename:
515 if filename.endswith(".pdf"):
516 os.rename(base + ".pdf", filename)
517 else:
518 try:
519 image.save(filename)
520 except (ValueError, OSError) as exc:
521 raise VisualizationError(
522 f"Pillow could not write the image file '{filename}'."
523 ) from exc
524 return image
525
526
527 def _generate_latex_source(
528 circuit,
529 filename=None,
530 scale=0.7,
531 style=None,
532 reverse_bits=False,
533 plot_barriers=True,
534 justify=None,
535 idle_wires=True,
536 with_layout=True,
537 initial_state=False,
538 cregbundle=False,
539 ):
540 """Convert QuantumCircuit to LaTeX string.
541
542 Args:
543 circuit (QuantumCircuit): a quantum circuit
544 scale (float): scaling factor
545 style (dict or str): dictionary of style or file name of style file
546 filename (str): optional filename to write latex
547 reverse_bits (bool): When set to True reverse the bit order inside
548 registers for the output visualization.
549 plot_barriers (bool): Enable/disable drawing barriers in the output
550 circuit. Defaults to True.
551 justify (str) : `left`, `right` or `none`. Defaults to `left`. Says how
552 the circuit should be justified.
553 idle_wires (bool): Include idle wires. Default is True.
554 with_layout (bool): Include layout information, with labels on the physical
555 layout. Default: True
556 initial_state (bool): Optional. Adds |0> in the beginning of the line.
557 Default: `False`.
558 cregbundle (bool): Optional. If set True, bundle classical registers.
559 Default: ``False``.
560
561 Returns:
562 str: Latex string appropriate for writing to file.
563 """
564 qubits, clbits, nodes = utils._get_layered_instructions(
565 circuit, reverse_bits=reverse_bits, justify=justify, idle_wires=idle_wires
566 )
567 if with_layout:
568 layout = circuit._layout
569 else:
570 layout = None
571
572 global_phase = circuit.global_phase if hasattr(circuit, "global_phase") else None
573 qcimg = _latex.QCircuitImage(
574 qubits,
575 clbits,
576 nodes,
577 scale,
578 style=style,
579 reverse_bits=reverse_bits,
580 plot_barriers=plot_barriers,
581 layout=layout,
582 initial_state=initial_state,
583 cregbundle=cregbundle,
584 global_phase=global_phase,
585 qregs=circuit.qregs,
586 cregs=circuit.cregs,
587 )
588 latex = qcimg.latex()
589 if filename:
590 with open(filename, "w") as latex_file:
591 latex_file.write(latex)
592
593 return latex
594
595
596 # -----------------------------------------------------------------------------
597 # matplotlib_circuit_drawer
598 # -----------------------------------------------------------------------------
599
600
601 def _matplotlib_circuit_drawer(
602 circuit,
603 scale=None,
604 filename=None,
605 style=None,
606 plot_barriers=True,
607 reverse_bits=False,
608 justify=None,
609 idle_wires=True,
610 with_layout=True,
611 fold=None,
612 ax=None,
613 initial_state=False,
614 cregbundle=True,
615 ):
616
617 """Draw a quantum circuit based on matplotlib.
618 If `%matplotlib inline` is invoked in a Jupyter notebook, it visualizes a circuit inline.
619 We recommend `%config InlineBackend.figure_format = 'svg'` for the inline visualization.
620
621 Args:
622 circuit (QuantumCircuit): a quantum circuit
623 scale (float): scaling factor
624 filename (str): file path to save image to
625 style (dict or str): dictionary of style or file name of style file
626 reverse_bits (bool): When set to True, reverse the bit order inside
627 registers for the output visualization.
628 plot_barriers (bool): Enable/disable drawing barriers in the output
629 circuit. Defaults to True.
630 justify (str): `left`, `right` or `none`. Defaults to `left`. Says how
631 the circuit should be justified.
632 idle_wires (bool): Include idle wires. Default is True.
633 with_layout (bool): Include layout information, with labels on the physical
634 layout. Default: True.
635 fold (int): Number of vertical layers allowed before folding. Default is 25.
636 ax (matplotlib.axes.Axes): An optional Axes object to be used for
637 the visualization output. If none is specified, a new matplotlib
638 Figure will be created and used. Additionally, if specified there
639 will be no returned Figure since it is redundant.
640 initial_state (bool): Optional. Adds |0> in the beginning of the line.
641 Default: `False`.
642 cregbundle (bool): Optional. If set True bundle classical registers.
643 Default: ``True``.
644
645 Returns:
646 matplotlib.figure: a matplotlib figure object for the circuit diagram
647 if the ``ax`` kwarg is not set.
648 """
649
650 qubits, clbits, nodes = utils._get_layered_instructions(
651 circuit, reverse_bits=reverse_bits, justify=justify, idle_wires=idle_wires
652 )
653 if with_layout:
654 layout = circuit._layout
655 else:
656 layout = None
657
658 if fold is None:
659 fold = 25
660
661 global_phase = circuit.global_phase if hasattr(circuit, "global_phase") else None
662 qcd = _matplotlib.MatplotlibDrawer(
663 qubits,
664 clbits,
665 nodes,
666 scale=scale,
667 style=style,
668 reverse_bits=reverse_bits,
669 plot_barriers=plot_barriers,
670 layout=layout,
671 fold=fold,
672 ax=ax,
673 initial_state=initial_state,
674 cregbundle=cregbundle,
675 global_phase=global_phase,
676 qregs=circuit.qregs,
677 cregs=circuit.cregs,
678 calibrations=circuit.calibrations,
679 )
680 return qcd.draw(filename)
681
[end of qiskit/visualization/circuit_visualization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
f27a1806ac82b10713dba60cc9c0ac8fea0af2f9
|
CalibrationBuilder scales superlinearly
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected enhancement?
For a random circuit of equal width and depth, the CalibrationBuilder pass scales like n_qubits^3, whereas one might expect n_qubits^2 scaling if it were proportional to the number of gates. This slowdown seems to be in retrieving the qubits associated with a dag node.
This test was run without actual calibrations so only part of the pass code is actually profiled.


This is qiskit-terra version 0.19.0.dev0+637acc0.
|
The obvious culprit I assume are these lines:
```python
for node in dag.gate_nodes():
qubits = list(dag.qubits.index(q) for q in node.qargs)
```
That looks pretty cubic to me - "for op in circuit: for qubit in op: for bit in dag: if qubit == bit: ...". I don't know off the top of my head if `DAGCircuit` gained the benefits of #6621 - I assume probably not, because keeping it updated would be a bit gross. The simplest solution is probably just to do a little bit of
```python
qubit_map = {q: i for i, q in enumerate(dag.qubits)}
for node in dag.gate_nodes():
qubits = [qubit_map[q] for q in node.qargs]
```
in the transpiler pass manually, which should drop it back down to quadratic complexity.
There's also an easy win in `Bit.__eq__`:
```python
def __eq__(self, other):
if (self._register, self._index) == (None, None):
return self is other
# ... else handle old-style bits ...
```
The `if` check is a bit inefficient here: `if self._register is None and self._index is None` should be faster, because it avoids creation of temporary tuples, and avoids a call to `Register.__eq__`.
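A self-contained sketch of that cheaper check, using a hypothetical stand-in class rather than the real `qiskit.circuit.Bit` (the old-style branch is simplified):

```python
class FakeBit:
    """Hypothetical stand-in for qiskit.circuit.Bit, for illustration only."""

    def __init__(self, register=None, index=None):
        self._register = register
        self._index = index

    def __eq__(self, other):
        # Comparing the attributes against None directly avoids building
        # temporary tuples and calling Register.__eq__ on every comparison.
        if self._register is None and self._index is None:
            return self is other
        # Old-style (register, index) bits; simplified relative to qiskit.
        return (
            isinstance(other, FakeBit)
            and self._register == other._register
            and self._index == other._index
        )

    def __hash__(self):
        return object.__hash__(self)


a, b = FakeBit(), FakeBit()
print(a == a, a == b)  # True False
```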
Yeah, it was a follow-up for #6621 to add an equivalent method for DAGCircuit (https://github.com/Qiskit/qiskit-terra/pull/6621#issuecomment-885934083); just adding the dict comprehension to get the indices is the best way to do this. That's what we've been doing in other transpiler passes while we wait for an equivalent to #6621 for DAGCircuit.
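For context, the circuit-side counterpart added by #6621 is `QuantumCircuit.find_bit`; a short sketch of its use, assuming (as in terra 0.19) that it returns a record with an `index` attribute:

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)

# find_bit caches bit -> index lookups on the circuit, so each call is O(1)
# rather than an O(n_qubits) scan of circuit.qubits.
for qubit in qc.qubits:
    print(qc.find_bit(qubit).index)
```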
|
2021-10-26T16:24:13Z
|
<patch>
diff --git a/qiskit/transpiler/passes/calibration/builders.py b/qiskit/transpiler/passes/calibration/builders.py
--- a/qiskit/transpiler/passes/calibration/builders.py
+++ b/qiskit/transpiler/passes/calibration/builders.py
@@ -74,8 +74,9 @@ def run(self, dag: DAGCircuit) -> DAGCircuit:
Returns:
A DAG with calibrations added to it.
"""
+ qubit_map = {qubit: i for i, qubit in enumerate(dag.qubits)}
for node in dag.gate_nodes():
- qubits = list(dag.qubits.index(q) for q in node.qargs)
+ qubits = [qubit_map[q] for q in node.qargs]
if self.supported(node.op, qubits) and not dag.has_calibration_for(node):
# calibration can be provided and no user-defined calibration is already provided
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-4015
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
df.groupby() does not allow to use name of index at level 0 when not using a MultiIndex
``` python
In [24]: df = pd.DataFrame({
....: 'exp' : ['A']*3 + ['B']*3,
....: 'var1' : range(6),
....: })
In [25]: df = df.set_index(['exp'])
In [26]: df.index.names
Out[27]: ['exp']
In [28]: df
Out[28]:
var1
exp
A 0
A 1
A 2
B 3
B 4
B 5
In [29]: try:
....: df.groupby(level='exp').size()
....: except ValueError as e:
....: print('index can not be accessed by name')
....: print(e)
....:
index can not be accessed by name
level > 0 only valid with MultiIndex
# at this point I would argue that the level 'exp' is level 0,
# and therefore this should work.
In [29]: df1 = df.groupby(level=0).size()
In [30]: df1
Out[30]:
exp
A 3
B 3
dtype: int64
```
</issue>
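Whatever the eventual fix, groupby needs to translate a level *name* into position 0 when the axis is a flat (non-Multi) `Index`; a standalone sketch of that translation using a hypothetical helper, not the actual pandas internals:

```python
import pandas as pd


def resolve_level(index, level):
    """Hypothetical helper: map a level name to its integer position."""
    if isinstance(level, str):
        if isinstance(index, pd.MultiIndex):
            return index.names.index(level)
        if index.name != level:
            raise ValueError(f"level name {level} is not the name of the index")
        return 0
    return level


df = pd.DataFrame({"exp": ["A"] * 3 + ["B"] * 3, "var1": range(6)}).set_index("exp")
print(df.groupby(level=resolve_level(df.index, "exp")).size())
```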
<code>
[start of README.rst]
1 =============================================
2 pandas: powerful Python data analysis toolkit
3 =============================================
4
5 .. image:: https://travis-ci.org/pydata/pandas.png
6 :target: https://travis-ci.org/pydata/pandas
7
8 What is it
9 ==========
10
11 **pandas** is a Python package providing fast, flexible, and expressive data
12 structures designed to make working with "relational" or "labeled" data both
13 easy and intuitive. It aims to be the fundamental high-level building block for
14 doing practical, **real world** data analysis in Python. Additionally, it has
15 the broader goal of becoming **the most powerful and flexible open source data
16 analysis / manipulation tool available in any language**. It is already well on
17 its way toward this goal.
18
19 Main Features
20 =============
21
22 Here are just a few of the things that pandas does well:
23
24 - Easy handling of **missing data** (represented as NaN) in floating point as
25 well as non-floating point data
26 - Size mutability: columns can be **inserted and deleted** from DataFrame and
27 higher dimensional objects
28 - Automatic and explicit **data alignment**: objects can be explicitly
29 aligned to a set of labels, or the user can simply ignore the labels and
30 let `Series`, `DataFrame`, etc. automatically align the data for you in
31 computations
32 - Powerful, flexible **group by** functionality to perform
33 split-apply-combine operations on data sets, for both aggregating and
34 transforming data
35 - Make it **easy to convert** ragged, differently-indexed data in other
36 Python and NumPy data structures into DataFrame objects
37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
38 of large data sets
39 - Intuitive **merging** and **joining** data sets
40 - Flexible **reshaping** and pivoting of data sets
41 - **Hierarchical** labeling of axes (possible to have multiple labels per
42 tick)
43 - Robust IO tools for loading data from **flat files** (CSV and delimited),
44 Excel files, databases, and saving / loading data from the ultrafast **HDF5
45 format**
46 - **Time series**-specific functionality: date range generation and frequency
47 conversion, moving window statistics, moving window linear regressions,
48 date shifting and lagging, etc.
49
50 Where to get it
51 ===============
52
53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
54
55 Binary installers for the latest released version are available at the Python
56 package index::
57
58 http://pypi.python.org/pypi/pandas/
59
60 And via ``easy_install`` or ``pip``::
61
62 easy_install pandas
63 pip install pandas
64
65 Dependencies
66 ============
67
68 - `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
69 - `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher
70 - `pytz <http://pytz.sourceforge.net/>`__
71 - Needed for time zone support with ``date_range``
72
73 Highly Recommended Dependencies
74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
75
76 - `numexpr <http://code.google.com/p/numexpr/>`__
77 - Needed to accelerate some expression evaluation operations
78 - Required by `PyTables`
79 - `bottleneck <http://berkeleyanalytics.com/bottleneck>`__
80 - Needed to accelerate certain numerical operations
81
82 Optional dependencies
83 ~~~~~~~~~~~~~~~~~~~~~
84
85 - `Cython <http://www.cython.org>`__: Only necessary to build development version. Version 0.17.1 or higher.
86 - `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
87 - `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage
88 - `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting
89 - `statsmodels <http://statsmodels.sourceforge.net/>`__
90 - Needed for parts of :mod:`pandas.stats`
91 - `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
92 - openpyxl version 1.6.1 or higher, for writing .xlsx files
93 - xlrd >= 0.9.0
94 - Needed for Excel I/O
95 - `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3
96 access.
97 - One of the following combinations of libraries is needed to use the
98 top-level :func:`~pandas.io.html.read_html` function:
99
100 - `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is
101 okay.)
102 - `BeautifulSoup4`_ and `lxml`_
103 - `BeautifulSoup4`_ and `html5lib`_ and `lxml`_
104 - Only `lxml`_, although see :ref:`HTML reading gotchas <html-gotchas>`
105 for reasons as to why you should probably **not** take this approach.
106
107 .. warning::
108
109 - if you install `BeautifulSoup4`_ you must install either
110 `lxml`_ or `html5lib`_ or both.
111 :func:`~pandas.io.html.read_html` will **not** work with *only*
112 `BeautifulSoup4`_ installed.
113 - You are highly encouraged to read :ref:`HTML reading gotchas
114 <html-gotchas>`. It explains issues surrounding the installation and
115 usage of the above three libraries
116 - You may need to install an older version of `BeautifulSoup4`_:
117 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
118 32-bit Ubuntu/Debian
119 - Additionally, if you're using `Anaconda`_ you should definitely
120 read :ref:`the gotchas about HTML parsing libraries <html-gotchas>`
121
122 .. note::
123
124 - if you're on a system with ``apt-get`` you can do
125
126 .. code-block:: sh
127
128 sudo apt-get build-dep python-lxml
129
130 to get the necessary dependencies for installation of `lxml`_. This
131 will prevent further headaches down the line.
132
133
134 .. _html5lib: https://github.com/html5lib/html5lib-python
135 .. _BeautifulSoup4: http://www.crummy.com/software/BeautifulSoup
136 .. _lxml: http://lxml.de
137 .. _Anaconda: https://store.continuum.io/cshop/anaconda
138
139
140 Installation from sources
141 =========================
142
143 To install pandas from source you need ``cython`` in addition to the normal dependencies above,
144 which can be installed from pypi::
145
146 pip install cython
147
148 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute::
149
150 python setup.py install
151
152 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__::
153
154 python setup.py develop
155
156 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically
157 (the optional ``-e`` option is for installing it in
158 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__)::
159
160 pip install -e .
161
162 On Windows, you will need to install MinGW and execute::
163
164 python setup.py build --compiler=mingw32
165 python setup.py install
166
167 See http://pandas.pydata.org/ for more information.
168
169 License
170 =======
171
172 BSD
173
174 Documentation
175 =============
176
177 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
178
179 The Sphinx documentation should provide a good starting point for learning how
180 to use the library. Expect the docs to continue to expand as time goes on.
181
182 Background
183 ==========
184
185 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
186 has been under active development since then.
187
188 Discussion and Development
189 ==========================
190
191 Since ``pandas`` development is related to a number of other scientific
192 Python projects, questions are welcome on the scipy-user mailing
193 list. Specialized discussions or design issues should take place on
194 the pystatsmodels mailing list / Google group, where
195 ``scikits.statsmodels`` and other libraries will also be discussed:
196
197 http://groups.google.com/group/pystatsmodels
198
199 .. _NumPy: http://numpy.scipy.org/
200
[end of README.rst]
[start of pandas/core/indexing.py]
1 # pylint: disable=W0223
2
3 from datetime import datetime
4 from pandas.core.common import _asarray_tuplesafe
5 from pandas.core.index import Index, MultiIndex, _ensure_index
6 import pandas.core.common as com
7 import pandas.lib as lib
8
9 import numpy as np
10
11 # the supported indexers
12 def get_indexers_list():
13
14 return [
15 ('ix' ,_NDFrameIndexer),
16 ('iloc',_iLocIndexer ),
17 ('loc' ,_LocIndexer ),
18 ('at' ,_AtIndexer ),
19 ('iat' ,_iAtIndexer ),
20 ]
21
22 # "null slice"
23 _NS = slice(None, None)
24
25
26 class IndexingError(Exception):
27 pass
28
29
30 class _NDFrameIndexer(object):
31
32 def __init__(self, obj, name):
33 self.obj = obj
34 self.ndim = obj.ndim
35 self.name = name
36
37 def __iter__(self):
38 raise NotImplementedError('ix is not iterable')
39
40 def __getitem__(self, key):
41 if type(key) is tuple:
42 try:
43 return self.obj.get_value(*key)
44 except Exception:
45 pass
46
47 return self._getitem_tuple(key)
48 else:
49 return self._getitem_axis(key, axis=0)
50
51 def _get_label(self, label, axis=0):
52 # ueber-hack
53 if (isinstance(label, tuple) and
54 isinstance(label[axis], slice)):
55
56 raise IndexingError('no slices here')
57
58 try:
59 return self.obj._xs(label, axis=axis, copy=False)
60 except Exception:
61 return self.obj._xs(label, axis=axis, copy=True)
62
63 def _get_loc(self, key, axis=0):
64 return self.obj._ixs(key, axis=axis)
65
66 def _slice(self, obj, axis=0, raise_on_error=False):
67 return self.obj._slice(obj, axis=axis, raise_on_error=raise_on_error)
68
69 def __setitem__(self, key, value):
70 # kludgetastic
71 ax = self.obj._get_axis(0)
72 if isinstance(ax, MultiIndex):
73 try:
74 indexer = ax.get_loc(key)
75 self._setitem_with_indexer(indexer, value)
76 return
77 except Exception:
78 pass
79
80 if isinstance(key, tuple):
81 if len(key) > self.ndim:
82 raise IndexingError('only tuples of length <= %d supported',
83 self.ndim)
84 indexer = self._convert_tuple(key)
85 else:
86 indexer = self._convert_to_indexer(key)
87
88 self._setitem_with_indexer(indexer, value)
89
90 def _has_valid_tuple(self, key):
91 pass
92
93 def _convert_tuple(self, key):
94 keyidx = []
95 for i, k in enumerate(key):
96 idx = self._convert_to_indexer(k, axis=i)
97 keyidx.append(idx)
98 return tuple(keyidx)
99
100 def _setitem_with_indexer(self, indexer, value):
101 from pandas.core.frame import DataFrame, Series
102
103 # also has the side effect of consolidating in-place
104
105 # mmm, spaghetti
106
107 if self.obj._is_mixed_type:
108 if not isinstance(indexer, tuple):
109 indexer = self._tuplify(indexer)
110
111 if isinstance(value, Series):
112 value = self._align_series(indexer, value)
113
114 het_axis = self.obj._het_axis
115 het_idx = indexer[het_axis]
116
117 if com.is_integer(het_idx):
118 het_idx = [het_idx]
119
120 plane_indexer = indexer[:het_axis] + indexer[het_axis + 1:]
121 item_labels = self.obj._get_axis(het_axis)
122
123 def setter(item, v):
124 data = self.obj[item]
125 values = data.values
126 if np.prod(values.shape):
127 result, changed = com._maybe_upcast_indexer(values,plane_indexer,v,dtype=getattr(data,'dtype',None))
128 self.obj[item] = result
129
130 labels = item_labels[het_idx]
131
132 if _is_list_like(value):
133
134 # we have an equal len Frame
135 if isinstance(value, DataFrame) and value.ndim > 1:
136
137 for item in labels:
138
139 # align to
140 if item in value:
141 v = value[item]
142 v = v.reindex(self.obj[item].index & v.index)
143 setter(item, v.values)
144 else:
145 setter(item, np.nan)
146
147 # we have an equal len ndarray to our labels
148 elif isinstance(value, np.ndarray) and value.ndim == 2:
149 if len(labels) != value.shape[1]:
150 raise ValueError('Must have equal len keys and value when'
151 ' setting with an ndarray')
152
153 for i, item in enumerate(labels):
154 setter(item, value[:,i])
155
156 # we have an equal len list/ndarray
157 elif len(labels) == 1 and (
158 len(self.obj[labels[0]]) == len(value) or len(plane_indexer[0]) == len(value)):
159 setter(labels[0], value)
160
161 # per label values
162 else:
163
164 for item, v in zip(labels, value):
165 setter(item, v)
166 else:
167
168 # scalar
169 for item in labels:
170 setter(item, value)
171
172 else:
173 if isinstance(indexer, tuple):
174 indexer = _maybe_convert_ix(*indexer)
175
176 if isinstance(value, Series):
177 value = self._align_series(indexer, value)
178
179 if isinstance(value, DataFrame):
180 value = self._align_frame(indexer, value)
181
182 # 2096
183 values = self.obj.values
184 if np.prod(values.shape):
185 values[indexer] = value
186
187 def _align_series(self, indexer, ser):
188 # indexer to assign Series can be tuple or scalar
189 if isinstance(indexer, tuple):
190 for i, idx in enumerate(indexer):
191 ax = self.obj.axes[i]
192 if com._is_sequence(idx) or isinstance(idx, slice):
193 new_ix = ax[idx]
194 if ser.index.equals(new_ix):
195 return ser.values.copy()
196 return ser.reindex(new_ix).values
197
198 elif np.isscalar(indexer):
199 ax = self.obj._get_axis(1)
200
201 if ser.index.equals(ax):
202 return ser.values.copy()
203
204 return ser.reindex(ax).values
205
206 raise ValueError('Incompatible indexer with Series')
207
208 def _align_frame(self, indexer, df):
209 from pandas import DataFrame
210 is_frame = isinstance(self.obj, DataFrame)
211 if not is_frame:
212 df = df.T
213 if isinstance(indexer, tuple):
214 idx, cols = None, None
215 for i, ix in enumerate(indexer):
216 ax = self.obj.axes[i]
217 if com._is_sequence(ix) or isinstance(ix, slice):
218 if idx is None:
219 idx = ax[ix].ravel()
220 elif cols is None:
221 cols = ax[ix].ravel()
222 else:
223 break
224
225 if idx is not None and cols is not None:
226 if df.index.equals(idx) and df.columns.equals(cols):
227 val = df.copy().values
228 else:
229 val = df.reindex(idx, columns=cols).values
230 return val
231
232 elif ((isinstance(indexer, slice) or com.is_list_like(indexer))
233 and is_frame):
234 ax = self.obj.index[indexer]
235 if df.index.equals(ax):
236 val = df.copy().values
237 else:
238 val = df.reindex(ax).values
239 return val
240
241 elif np.isscalar(indexer) and not is_frame:
242 idx = self.obj.axes[1]
243 cols = self.obj.axes[2]
244
245 if idx.equals(df.index) and cols.equals(df.columns):
246 return df.copy().values
247 return df.reindex(idx, columns=cols).values
248
249 raise ValueError('Incompatible indexer with DataFrame')
250
251 def _getitem_tuple(self, tup):
252 try:
253 return self._getitem_lowerdim(tup)
254 except IndexingError:
255 pass
256
257 # no multi-index, so validate all of the indexers
258 self._has_valid_tuple(tup)
259
260 # ugly hack for GH #836
261 if self._multi_take_opportunity(tup):
262 return self._multi_take(tup)
263
264 # no shortcut needed
265 retval = self.obj
266 for i, key in enumerate(tup):
267 if i >= self.obj.ndim:
268 raise IndexingError('Too many indexers')
269
270 if _is_null_slice(key):
271 continue
272
273 retval = getattr(retval,self.name)._getitem_axis(key, axis=i)
274
275 return retval
276
277 def _multi_take_opportunity(self, tup):
278 from pandas.core.generic import NDFrame
279
280 # ugly hack for GH #836
281 if not isinstance(self.obj, NDFrame):
282 return False
283
284 if not all(_is_list_like(x) for x in tup):
285 return False
286
287 # just too complicated
288 for ax in self.obj._data.axes:
289 if isinstance(ax, MultiIndex):
290 return False
291
292 return True
293
294 def _multi_take(self, tup):
295 from pandas.core.frame import DataFrame
296 from pandas.core.panel import Panel
297 from pandas.core.panel4d import Panel4D
298
299 if isinstance(self.obj, DataFrame):
300 index = self._convert_for_reindex(tup[0], axis=0)
301 columns = self._convert_for_reindex(tup[1], axis=1)
302 return self.obj.reindex(index=index, columns=columns)
303 elif isinstance(self.obj, Panel4D):
304 conv = [self._convert_for_reindex(x, axis=i)
305 for i, x in enumerate(tup)]
306 return self.obj.reindex(labels=tup[0], items=tup[1], major=tup[2], minor=tup[3])
307 elif isinstance(self.obj, Panel):
308 conv = [self._convert_for_reindex(x, axis=i)
309 for i, x in enumerate(tup)]
310 return self.obj.reindex(items=tup[0], major=tup[1], minor=tup[2])
311
312 def _convert_for_reindex(self, key, axis=0):
313 labels = self.obj._get_axis(axis)
314
315 if com._is_bool_indexer(key):
316 key = _check_bool_indexer(labels, key)
317 return labels[key]
318 else:
319 if isinstance(key, Index):
320 # want Index objects to pass through untouched
321 keyarr = key
322 else:
323 # asarray can be unsafe, NumPy strings are weird
324 keyarr = _asarray_tuplesafe(key)
325
326 if _is_integer_dtype(keyarr) and not _is_integer_index(labels):
327 keyarr = com._ensure_platform_int(keyarr)
328 return labels.take(keyarr)
329
330 return keyarr
331
332 def _getitem_lowerdim(self, tup):
333 from pandas.core.frame import DataFrame
334
335 ax0 = self.obj._get_axis(0)
336 # a bit kludgy
337 if isinstance(ax0, MultiIndex):
338 try:
339 return self._get_label(tup, axis=0)
340 except TypeError:
341 # slices are unhashable
342 pass
343 except Exception, e1:
344 if isinstance(tup[0], (slice, Index)):
345 raise IndexingError
346
347 # raise the error if we are not sorted
348 if not ax0.is_lexsorted_for_tuple(tup):
349 raise e1
350 try:
351 loc = ax0.get_loc(tup[0])
352 except KeyError:
353 raise e1
354
355 if len(tup) > self.obj.ndim:
356 raise IndexingError
357
358 # to avoid wasted computation
359 # df.ix[d1:d2, 0] -> columns first (True)
360     # df.ix[0, ['C', 'B', 'A']] -> rows first (False)
361 for i, key in enumerate(tup):
362 if _is_label_like(key) or isinstance(key, tuple):
363 section = self._getitem_axis(key, axis=i)
364
365 # we have yielded a scalar ?
366 if not _is_list_like(section):
367 return section
368
369 # might have been a MultiIndex
370 elif section.ndim == self.ndim:
371 new_key = tup[:i] + (_NS,) + tup[i + 1:]
372 # new_key = tup[:i] + tup[i+1:]
373 else:
374 new_key = tup[:i] + tup[i + 1:]
375
376 # unfortunately need an odious kludge here because of
377 # DataFrame transposing convention
378 if (isinstance(section, DataFrame) and i > 0
379 and len(new_key) == 2):
380 a, b = new_key
381 new_key = b, a
382
383 if len(new_key) == 1:
384 new_key, = new_key
385
386 return getattr(section,self.name)[new_key]
387
388 raise IndexingError('not applicable')
389
390 def _getitem_axis(self, key, axis=0):
391 labels = self.obj._get_axis(axis)
392 if isinstance(key, slice):
393 return self._get_slice_axis(key, axis=axis)
394 elif _is_list_like(key) and not (isinstance(key, tuple) and
395 isinstance(labels, MultiIndex)):
396
397 if hasattr(key, 'ndim') and key.ndim > 1:
398 raise ValueError('Cannot index with multidimensional key')
399
400 return self._getitem_iterable(key, axis=axis)
401 else:
402 if com.is_integer(key):
403 if axis == 0 and isinstance(labels, MultiIndex):
404 try:
405 return self._get_label(key, axis=axis)
406 except (KeyError, TypeError):
407 if _is_integer_index(self.obj.index.levels[0]):
408 raise
409
410 if not _is_integer_index(labels):
411 return self._get_loc(key, axis=axis)
412
413 return self._get_label(key, axis=axis)
414
415 def _getitem_iterable(self, key, axis=0):
416 labels = self.obj._get_axis(axis)
417
418 def _reindex(keys, level=None):
419 try:
420 return self.obj.reindex_axis(keys, axis=axis, level=level)
421 except AttributeError:
422 # Series
423 if axis != 0:
424 raise AssertionError('axis must be 0')
425 return self.obj.reindex(keys, level=level)
426
427 if com._is_bool_indexer(key):
428 key = _check_bool_indexer(labels, key)
429 inds, = key.nonzero()
430 return self.obj.take(inds, axis=axis, convert=False)
431 else:
432 if isinstance(key, Index):
433 # want Index objects to pass through untouched
434 keyarr = key
435 else:
436 # asarray can be unsafe, NumPy strings are weird
437 keyarr = _asarray_tuplesafe(key)
438
439 if _is_integer_dtype(keyarr):
440 if labels.inferred_type != 'integer':
441 keyarr = np.where(keyarr < 0,
442 len(labels) + keyarr, keyarr)
443
444 if labels.inferred_type == 'mixed-integer':
445 indexer = labels.get_indexer(keyarr)
446 if (indexer >= 0).all():
447           return self.obj.take(indexer, axis=axis, convert=True)
448 else:
449 return self.obj.take(keyarr, axis=axis)
450 elif not labels.inferred_type == 'integer':
451
452 return self.obj.take(keyarr, axis=axis)
453
454 # this is not the most robust, but...
455 if (isinstance(labels, MultiIndex) and
456 not isinstance(keyarr[0], tuple)):
457 level = 0
458 else:
459 level = None
460
461 if labels.is_unique and Index(keyarr).is_unique:
462 return _reindex(keyarr, level=level)
463 else:
464 indexer, missing = labels.get_indexer_non_unique(keyarr)
465 check = indexer != -1
466 result = self.obj.take(indexer[check], axis=axis, convert=False)
467
468 # need to merge the result labels and the missing labels
469 if len(missing):
470 l = np.arange(len(indexer))
471
472 missing = com._ensure_platform_int(missing)
473 missing_labels = keyarr.take(missing)
474 missing_indexer = com._ensure_int64(l[~check])
475 cur_labels = result._get_axis(axis).values
476 cur_indexer = com._ensure_int64(l[check])
477
478 new_labels = np.empty(tuple([len(indexer)]),dtype=object)
479 new_labels[cur_indexer] = cur_labels
480 new_labels[missing_indexer] = missing_labels
481
482 result = result.reindex_axis(new_labels,axis=axis)
483
484 return result
485
486 def _convert_to_indexer(self, obj, axis=0):
487 """
488 Convert indexing key into something we can use to do actual fancy
489 indexing on an ndarray
490
491 Examples
492 ix[:5] -> slice(0, 5)
493 ix[[1,2,3]] -> [1,2,3]
494 ix[['foo', 'bar', 'baz']] -> [i, j, k] (indices of foo, bar, baz)
495
496 Going by Zen of Python?
497 "In the face of ambiguity, refuse the temptation to guess."
498 raise AmbiguousIndexError with integer labels?
499 - No, prefer label-based indexing
500 """
501 labels = self.obj._get_axis(axis)
502 is_int_index = _is_integer_index(labels)
503
504 if com.is_integer(obj) and not is_int_index:
505 return obj
506
507 try:
508 return labels.get_loc(obj)
509 except (KeyError, TypeError):
510 pass
511
512 if isinstance(obj, slice):
513 ltype = labels.inferred_type
514
515 # in case of providing all floats, use label-based indexing
516 float_slice = (labels.inferred_type == 'floating'
517 and _is_float_slice(obj))
518
519 # floats that are within tolerance of int used as positions
520 int_slice = _is_index_slice(obj)
521
522 null_slice = obj.start is None and obj.stop is None
523
524 # could have integers in the first level of the MultiIndex,
525 # in which case we wouldn't want to do position-based slicing
526 position_slice = (int_slice
527 and not ltype == 'integer'
528 and not isinstance(labels, MultiIndex)
529 and not float_slice)
530
531 start, stop = obj.start, obj.stop
532
533 # last ditch effort: if we are mixed and have integers
534 try:
535 if position_slice and 'mixed' in ltype:
536 if start is not None:
537 i = labels.get_loc(start)
538 if stop is not None:
539 j = labels.get_loc(stop)
540 position_slice = False
541 except KeyError:
542 if ltype == 'mixed-integer-float':
543 raise
544
545 if null_slice or position_slice:
546 indexer = obj
547 else:
548 try:
549 indexer = labels.slice_indexer(start, stop, obj.step)
550 except Exception:
551 if _is_index_slice(obj):
552 if ltype == 'integer':
553 raise
554 indexer = obj
555 else:
556 raise
557
558 return indexer
559
560 elif _is_list_like(obj):
561 if com._is_bool_indexer(obj):
562 obj = _check_bool_indexer(labels, obj)
563 inds, = obj.nonzero()
564 return inds
565 else:
566 if isinstance(obj, Index):
567 objarr = obj.values
568 else:
569 objarr = _asarray_tuplesafe(obj)
570
571 # If have integer labels, defer to label-based indexing
572 if _is_integer_dtype(objarr) and not is_int_index:
573 if labels.inferred_type != 'integer':
574 objarr = np.where(objarr < 0,
575 len(labels) + objarr, objarr)
576 return objarr
577
578 # this is not the most robust, but...
579 if (isinstance(labels, MultiIndex) and
580 not isinstance(objarr[0], tuple)):
581 level = 0
582 _, indexer = labels.reindex(objarr, level=level)
583
584 check = labels.levels[0].get_indexer(objarr)
585 else:
586 level = None
587
588 # unique index
589 if labels.is_unique:
590 indexer = check = labels.get_indexer(objarr)
591
592 # non-unique (dups)
593 else:
594 indexer, missing = labels.get_indexer_non_unique(objarr)
595 check = indexer
596
597 mask = check == -1
598 if mask.any():
599 raise KeyError('%s not in index' % objarr[mask])
600
601 return indexer
602
603 else:
604 return labels.get_loc(obj)
605
606 def _tuplify(self, loc):
607 tup = [slice(None, None) for _ in range(self.ndim)]
608 tup[0] = loc
609 return tuple(tup)
610
611 def _get_slice_axis(self, slice_obj, axis=0):
612 obj = self.obj
613
614 if not _need_slice(slice_obj):
615 return obj
616
617 labels = obj._get_axis(axis)
618
619 ltype = labels.inferred_type
620
621 # in case of providing all floats, use label-based indexing
622 float_slice = (labels.inferred_type == 'floating'
623 and _is_float_slice(slice_obj))
624
625 # floats that are within tolerance of int used as positions
626 int_slice = _is_index_slice(slice_obj)
627
628 null_slice = slice_obj.start is None and slice_obj.stop is None
629
630 # could have integers in the first level of the MultiIndex,
631 # in which case we wouldn't want to do position-based slicing
632 position_slice = (int_slice
633 and not ltype == 'integer'
634 and not isinstance(labels, MultiIndex)
635 and not float_slice)
636
637 start, stop = slice_obj.start, slice_obj.stop
638
639 # last ditch effort: if we are mixed and have integers
640 try:
641 if position_slice and 'mixed' in ltype:
642 if start is not None:
643 i = labels.get_loc(start)
644 if stop is not None:
645 j = labels.get_loc(stop)
646 position_slice = False
647 except KeyError:
648 if ltype == 'mixed-integer-float':
649 raise
650
651 if null_slice or position_slice:
652 indexer = slice_obj
653 else:
654 try:
655 indexer = labels.slice_indexer(start, stop, slice_obj.step)
656 except Exception:
657 if _is_index_slice(slice_obj):
658 if ltype == 'integer':
659 raise
660 indexer = slice_obj
661 else:
662 raise
663
664 if isinstance(indexer, slice):
665 return self._slice(indexer, axis=axis)
666 else:
667 return self.obj.take(indexer, axis=axis)
668
669 class _LocationIndexer(_NDFrameIndexer):
670 _valid_types = None
671 _exception = Exception
672
673 def _has_valid_type(self, k, axis):
674 raise NotImplementedError()
675
676 def _has_valid_tuple(self, key):
677 """ check the key for valid keys across my indexer """
678 for i, k in enumerate(key):
679 if i >= self.obj.ndim:
680 raise ValueError('Too many indexers')
681 if not self._has_valid_type(k,i):
682 raise ValueError("Location based indexing can only have [%s] types" % self._valid_types)
683
684 def __getitem__(self, key):
685 if type(key) is tuple:
686 return self._getitem_tuple(key)
687 else:
688 return self._getitem_axis(key, axis=0)
689
690 def _getitem_axis(self, key, axis=0):
691 raise NotImplementedError()
692
693 def _getbool_axis(self, key, axis=0):
694 labels = self.obj._get_axis(axis)
695 key = _check_bool_indexer(labels, key)
696 inds, = key.nonzero()
697 try:
698 return self.obj.take(inds, axis=axis, convert=False)
699 except (Exception), detail:
700 raise self._exception(detail)
701 def _get_slice_axis(self, slice_obj, axis=0):
702 """ this is pretty simple as we just have to deal with labels """
703 obj = self.obj
704 if not _need_slice(slice_obj):
705 return obj
706
707 labels = obj._get_axis(axis)
708 indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, slice_obj.step)
709
710 if isinstance(indexer, slice):
711 return self._slice(indexer, axis=axis)
712 else:
713 return self.obj.take(indexer, axis=axis)
714
715 class _LocIndexer(_LocationIndexer):
716 """ purely label based location based indexing """
717 _valid_types = "labels (MUST BE IN THE INDEX), slices of labels (BOTH endpoints included! Can be slices of integers if the index is integers), listlike of labels, boolean"
718 _exception = KeyError
719
720 def _has_valid_type(self, key, axis):
721 ax = self.obj._get_axis(axis)
722
723 # valid for a label where all labels are in the index
724     # slice of labels (where start-end in labels)
725     # slice of integers (only if in the labels)
726 # boolean
727
728 if isinstance(key, slice):
729
730 if key.start is not None:
731 if key.start not in ax:
732           raise KeyError("start bound [%s] is not in the [%s]" % (key.start,self.obj._get_axis_name(axis)))
733 if key.stop is not None:
734 if key.stop not in ax:
735 raise KeyError("stop bound [%s] is not in the [%s]" % (key.stop,self.obj._get_axis_name(axis)))
736
737 elif com._is_bool_indexer(key):
738 return True
739
740 elif _is_list_like(key):
741
742 # require all elements in the index
743 idx = _ensure_index(key)
744 if not idx.isin(ax).all():
745         raise KeyError("[%s] are not ALL in the [%s]" % (key,self.obj._get_axis_name(axis)))
746
747 return True
748
749 else:
750
751 # if its empty we want a KeyError here
752 if not len(ax):
753 raise KeyError("The [%s] axis is empty" % self.obj._get_axis_name(axis))
754
755 if not key in ax:
756 raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
757
758 return True
759
760 def _getitem_axis(self, key, axis=0):
761 labels = self.obj._get_axis(axis)
762
763 if isinstance(key, slice):
764 self._has_valid_type(key,axis)
765 return self._get_slice_axis(key, axis=axis)
766 elif com._is_bool_indexer(key):
767 return self._getbool_axis(key, axis=axis)
768 elif _is_list_like(key) and not (isinstance(key, tuple) and
769 isinstance(labels, MultiIndex)):
770
771 if hasattr(key, 'ndim') and key.ndim > 1:
772 raise ValueError('Cannot index with multidimensional key')
773
774 return self._getitem_iterable(key, axis=axis)
775 else:
776 return self._get_label(key, axis=axis)
777
778 class _iLocIndexer(_LocationIndexer):
779 """ purely integer based location based indexing """
780 _valid_types = "integer, integer slice (START point is INCLUDED, END point is EXCLUDED), listlike of integers, boolean array"
781 _exception = IndexError
782
783 def _has_valid_type(self, key, axis):
784 if com._is_bool_indexer(key):
785 if hasattr(key,'index') and isinstance(key.index,Index):
786 if key.index.inferred_type == 'integer':
787 raise NotImplementedError("iLocation based boolean indexing on an integer type is not available")
788 raise ValueError("iLocation based boolean indexing cannot use an indexable as a mask")
789 return True
790
791 return isinstance(key, slice) or com.is_integer(key) or _is_list_like(key)
792
793 def _getitem_tuple(self, tup):
794
795 self._has_valid_tuple(tup)
796 try:
797 return self._getitem_lowerdim(tup)
798 except:
799 pass
800
801 retval = self.obj
802 for i, key in enumerate(tup):
803 if i >= self.obj.ndim:
804 raise IndexingError('Too many indexers')
805
806 if _is_null_slice(key):
807 continue
808
809 retval = getattr(retval,self.name)._getitem_axis(key, axis=i)
810
811 return retval
812
813 def _get_slice_axis(self, slice_obj, axis=0):
814 obj = self.obj
815
816 if not _need_slice(slice_obj):
817 return obj
818
819 if isinstance(slice_obj, slice):
820 return self._slice(slice_obj, axis=axis, raise_on_error=True)
821 else:
822 return self.obj.take(slice_obj, axis=axis)
823
824 def _getitem_axis(self, key, axis=0):
825
826 if isinstance(key, slice):
827 self._has_valid_type(key,axis)
828 return self._get_slice_axis(key, axis=axis)
829
830 elif com._is_bool_indexer(key):
831 self._has_valid_type(key,axis)
832 return self._getbool_axis(key, axis=axis)
833
834 # a single integer or a list of integers
835 else:
836
837 if not (com.is_integer(key) or _is_list_like(key)):
838 raise ValueError("Cannot index by location index with a non-integer key")
839
840 return self._get_loc(key,axis=axis)
841
842 def _convert_to_indexer(self, obj, axis=0):
843 """ much simpler as we only have to deal with our valid types """
844 if self._has_valid_type(obj,axis):
845 return obj
846
847 raise ValueError("Can only index by location with a [%s]" % self._valid_types)
848
849
850 class _ScalarAccessIndexer(_NDFrameIndexer):
851 """ access scalars quickly """
852
853 def _convert_key(self, key):
854 return list(key)
855
856 def __getitem__(self, key):
857 if not isinstance(key, tuple):
858
859 # we could have a convertible item here (e.g. Timestamp)
860 if not _is_list_like(key):
861 key = tuple([ key ])
862 else:
863 raise ValueError('Invalid call for scalar access (getting)!')
864
865 if len(key) != self.obj.ndim:
866 raise ValueError('Not enough indexers for scalar access (getting)!')
867 key = self._convert_key(key)
868 return self.obj.get_value(*key)
869
870 def __setitem__(self, key, value):
871 if not isinstance(key, tuple):
872 raise ValueError('Invalid call for scalar access (setting)!')
873 if len(key) != self.obj.ndim:
874 raise ValueError('Not enough indexers for scalar access (setting)!')
875 key = self._convert_key(key)
876 key.append(value)
877 self.obj.set_value(*key)
878
879 class _AtIndexer(_ScalarAccessIndexer):
880 """ label based scalar accessor """
881 pass
882
883 class _iAtIndexer(_ScalarAccessIndexer):
884 """ integer based scalar accessor """
885
886 def _convert_key(self, key):
887 """ require integer args (and convert to label arguments) """
888 ckey = []
889 for a, i in zip(self.obj.axes,key):
890 if not com.is_integer(i):
891 raise ValueError("iAt based indexing can only have integer indexers")
892 ckey.append(a[i])
893 return ckey
894
895 # 32-bit floating point machine epsilon
896 _eps = np.finfo('f4').eps
897
898
899 def _convert_to_index_sliceable(obj, key):
900 """ if we are index sliceable, then return my slicer, otherwise return None """
901 idx = obj.index
902 if isinstance(key, slice):
903 idx_type = idx.inferred_type
904 if idx_type == 'floating':
905 indexer = obj.ix._convert_to_indexer(key, axis=0)
906 elif idx_type == 'integer' or _is_index_slice(key):
907 indexer = key
908 else:
909 indexer = obj.ix._convert_to_indexer(key, axis=0)
910 return indexer
911
912 elif isinstance(key, basestring):
913
914 # we are an actual column
915 if key in obj._data.items:
916 return None
917
918 # we need a timelike key here
919 if idx.is_all_dates:
920 try:
921 return idx._get_string_slice(key)
922 except:
923 return None
924
925 return None
926
927 def _is_index_slice(obj):
928 def _is_valid_index(x):
929 return (com.is_integer(x) or com.is_float(x)
930 and np.allclose(x, int(x), rtol=_eps, atol=0))
931
932 def _crit(v):
933 return v is None or _is_valid_index(v)
934
935 both_none = obj.start is None and obj.stop is None
936
937 return not both_none and (_crit(obj.start) and _crit(obj.stop))
938
939
940 def _is_int_slice(obj):
941 def _is_valid_index(x):
942 return com.is_integer(x)
943
944 def _crit(v):
945 return v is None or _is_valid_index(v)
946
947 both_none = obj.start is None and obj.stop is None
948
949 return not both_none and (_crit(obj.start) and _crit(obj.stop))
950
951
952 def _is_float_slice(obj):
953 def _is_valid_index(x):
954 return com.is_float(x)
955
956 def _crit(v):
957 return v is None or _is_valid_index(v)
958
959 both_none = obj.start is None and obj.stop is None
960
961 return not both_none and (_crit(obj.start) and _crit(obj.stop))
962
963
964 class _SeriesIndexer(_NDFrameIndexer):
965 """
966 Class to support fancy indexing, potentially using labels
967
968 Notes
969 -----
970 Indexing based on labels is INCLUSIVE
971 Slicing uses PYTHON SEMANTICS (endpoint is excluded)
972
973 If Index contains int labels, these will be used rather than the locations,
974 so be very careful (ambiguous).
975
976 Examples
977 --------
978 >>> ts.ix[5:10] # equivalent to ts[5:10]
979 >>> ts.ix[[date1, date2, date3]]
980 >>> ts.ix[date1:date2] = 0
981 """
982
983 def _get_label(self, key, axis=0):
984 return self.obj[key]
985
986 def _get_loc(self, key, axis=0):
987 return self.obj.values[key]
988
989 def _slice(self, indexer, axis=0):
990 return self.obj._get_values(indexer)
991
992 def _setitem_with_indexer(self, indexer, value):
993 self.obj._set_values(indexer, value)
994
995 def _check_bool_indexer(ax, key):
996 # boolean indexing, need to check that the data are aligned, otherwise
997 # disallowed
998
999 # this function assumes that com._is_bool_indexer(key) == True
1000
1001 result = key
1002 if _is_series(key) and not key.index.equals(ax):
1003 result = result.reindex(ax)
1004 mask = com.isnull(result)
1005 if mask.any():
1006 raise IndexingError('Unalignable boolean Series key provided')
1007
1008 # com._is_bool_indexer has already checked for nulls in the case of an
1009 # object array key, so no check needed here
1010 result = np.asarray(result, dtype=bool)
1011 return result
1012
1013 def _is_series(obj):
1014 from pandas.core.series import Series
1015 return isinstance(obj, Series)
1016
1017
1018 def _maybe_convert_indices(indices, n):
1019     """ if we have negative indices, translate to positive here;
1020     if we have indices that are out-of-bounds, raise an IndexError """
1021 if isinstance(indices, list):
1022 indices = np.array(indices)
1023
1024 mask = indices<0
1025 if mask.any():
1026 indices[mask] += n
1027 mask = (indices>=n) | (indices<0)
1028 if mask.any():
1029 raise IndexError("indices are out-of-bounds")
1030 return indices
1031
1032 def _maybe_convert_ix(*args):
1033 """
1034 We likely want to take the cross-product
1035 """
1036 ixify = True
1037 for arg in args:
1038 if not isinstance(arg, (np.ndarray, list)):
1039 ixify = False
1040
1041 if ixify:
1042 return np.ix_(*args)
1043 else:
1044 return args
1045
1046
1047 def _is_null_slice(obj):
1048 return (isinstance(obj, slice) and obj.start is None and
1049 obj.stop is None and obj.step is None)
1050
1051
1052 def _is_integer_dtype(arr):
1053 return (issubclass(arr.dtype.type, np.integer) and
1054 not arr.dtype.type == np.datetime64)
1055
1056
1057 def _is_integer_index(index):
1058 return index.inferred_type == 'integer'
1059
1060
1061 def _is_label_like(key):
1062 # select a label or row
1063 return not isinstance(key, slice) and not _is_list_like(key)
1064
1065
1066 def _is_list_like(obj):
1067     # Consider namedtuples not to be list-like, as they are useful as indices
1068 return (np.iterable(obj)
1069 and not isinstance(obj, basestring)
1070 and not (isinstance(obj, tuple) and type(obj) is not tuple))
1071
1072
1073 def _need_slice(obj):
1074 return (obj.start is not None or
1075 obj.stop is not None or
1076 (obj.step is not None and obj.step != 1))
1077
1078
1079 def _check_slice_bounds(slobj, values):
1080 l = len(values)
1081 start = slobj.start
1082 if start is not None:
1083 if start < -l or start > l-1:
1084 raise IndexError("out-of-bounds on slice (start)")
1085 stop = slobj.stop
1086 if stop is not None:
1087 if stop < -l-1 or stop > l:
1088 raise IndexError("out-of-bounds on slice (end)")
1089
1090 def _maybe_droplevels(index, key):
1091 # drop levels
1092 if isinstance(key, tuple):
1093 for _ in key:
1094 index = index.droplevel(0)
1095 else:
1096 index = index.droplevel(0)
1097
1098 return index
1099
[end of pandas/core/indexing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
d7fe7454ce37375ac04db9c133af619f2aa3665f
|
df.groupby() does not allow using the name of the index at level 0 when not using a MultiIndex
``` python
In [24]: df = pd.DataFrame({
....: 'exp' : ['A']*3 + ['B']*3,
....: 'var1' : range(6),
....: })
In [25]: df = df.set_index(['exp'])
In [26]: df.index.names
Out[27]: ['exp']
In [28]: df
Out[28]:
var1
exp
A 0
A 1
A 2
B 3
B 4
B 5
In [29]: try:
....: df.groupby(level='exp').size()
....: except ValueError as e:
....: print('index can not be accessed by name')
....: print(e)
....:
index can not be accessed by name
level > 0 only valid with MultiIndex
# at this point I would argue that the level 'exp' is level 0,
# and therefore this should work.
In [29]: df1 = df.groupby(level=0).size()
In [30]: df1
Out[30]:
exp
A 3
B 3
dtype: int64
```
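In other words, the expected behaviour (a hypothetical sketch mirroring the `level=0` output above) would be:
``` python
# Desired: a level *name* on a single-level index should be accepted
# and treated the same as level=0.
df.groupby(level='exp').size()
# exp
# A    3
# B    3
# dtype: int64
```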
|
Good find. I've put a fix together for this (coming shortly).
Note: although in Python 2 `'exp' > 0` evaluates to True (mixed-type comparisons fall back to ordering by type name), in Python 3 the same comparison raises a TypeError.
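A small standalone illustration of that difference (not part of the patch):
``` python
level = 'exp'
try:
    # Python 2: mixed-type comparisons fall back to ordering by type name,
    # so 'exp' > 0 quietly evaluates to True.
    # Python 3: comparing str with int is undefined and raises TypeError.
    print(level > 0)
except TypeError as err:
    print('str/int comparison raises on Python 3:', err)
```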
|
2013-06-24T20:15:42Z
|
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -214,6 +214,7 @@ pandas 0.11.1
names (:issue:`3873`)
- Fixed bug in groupby with empty series referencing a variable before assignment. (:issue:`3510`)
+ - Allow index name to be used in groupby for non MultiIndex (:issue:`4014`)
- Fixed bug in mixed-frame assignment with aligned series (:issue:`3492`)
- Fixed bug in selecting month/quarter/year from a series would not select the time element
on the last day (:issue:`3546`)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1252,11 +1252,14 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True):
if level is not None:
if not isinstance(group_axis, MultiIndex):
- if level > 0:
+ if isinstance(level, basestring):
+ if obj.index.name != level:
+ raise ValueError('level name %s is not the name of the index' % level)
+ elif level > 0:
raise ValueError('level > 0 only valid with MultiIndex')
- else:
- level = None
- key = group_axis
+
+ level = None
+ key = group_axis
if isinstance(key, CustomGrouper):
gpr = key.get_grouper(obj)
</patch>
|
[]
|
[]
| |||
pantsbuild__pants-5030
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in publishing jars with resources after updates
When publishing a jar with resources, the jar does not correctly update when resources are updated. However, the sources jar _does_ update when resources are updated.
Running 1.3.0rc1
## Repro
Have all these in the same directory:
`pants.ini`:
```
[GLOBAL]
pants_version: 1.3.0rc1
[ivy]
ivy_settings: %(pants_supportdir)s/ivysettings.xml
[publish.jar]
dryrun=False
local=~/.m2/repository
prompt=False
changelog=False
named_snapshot=0.1-SNAPSHOT
```
`build-support/ivysettings.xml`:
```
<?xml version="1.0"?>
<ivysettings>
<settings defaultResolver="chain-repos"/>
<resolvers>
<chain name="chain-repos" returnFirst="true">
<ibiblio name="maven-central" m2compatible="true" descriptor="required"/>
<filesystem name="local" m2compatible="true"/>
</chain>
</resolvers>
</ivysettings>
```
`BUILD`:
```
scala_library(
name='MyLibrary',
sources=['MyClass.scala'],
dependencies=[':MyResources'],
provides=artifact(
org='foo',
name='my-library',
repo=repository(
push_db_basedir='build-support/pushdb'
),
),
)
resources(
name='MyResources',
sources=['hello.txt'],
)
```
`MyClass.scala`:
```
package bar
object MyClass {
val myVal: Option[Int] = None
}
```
`hello.txt`:
```
hi
```
Run `./pants publish.jar ::` and then:
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT.jar hello.txt
hi
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT-sources.jar hello.txt
hi
```
Now update `hello.txt`:
```
bye
```
Run `./pants publish.jar ::` and then:
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT.jar hello.txt
hi
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT-sources.jar hello.txt
bye
```
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a build system for software projects in a variety of languages.
4 It works particularly well for a source code repository that contains
5 many distinct projects.
6
7 Friendly documentation: http://www.pantsbuild.org/
8
9 We release to [PyPI](https://pypi.python.org/pypi)
10 [](https://pypi.python.org/pypi/pantsbuild.pants)
11 [](https://pypi.python.org/pypi/pantsbuild.pants)
12
13 We use [Travis CI](https://travis-ci.org) to verify the build
14 [](https://travis-ci.org/pantsbuild/pants/branches).
15
16 We use [Coveralls](https://coveralls.io) to monitor test coverage
17 [](https://coveralls.io/r/pantsbuild/pants).
18
19 # Requirements
20
21 At a minimum, pants requires the following to run properly:
22
23 * Linux or Mac OS X
24 * Python 2.7.x (the latest stable version of 2.7 is recommended)
25 * A C compiler, system headers, Python headers (to compile native Python modules) and the libffi
26 library and headers (to compile and link modules that use CFFI to access native code).
27 * Internet access (so that pants can fully bootstrap itself)
28
29 Additionally, if you use the jvm backend to work with java or scala code (installed by default):
30
31 * OpenJDK or Oracle JDK version 7 or greater
32
[end of README.md]
[start of src/python/pants/backend/jvm/tasks/jar_publish.py]
1 # coding=utf-8
2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
6 unicode_literals, with_statement)
7
8 import functools
9 import getpass
10 import hashlib
11 import os
12 import pkgutil
13 import shutil
14 import sys
15 from collections import OrderedDict, defaultdict, namedtuple
16 from copy import copy
17
18 from twitter.common.collections import OrderedSet
19
20 from pants.backend.jvm.ossrh_publication_metadata import OSSRHPublicationMetadata
21 from pants.backend.jvm.targets.jarable import Jarable
22 from pants.backend.jvm.targets.scala_library import ScalaLibrary
23 from pants.backend.jvm.tasks.jar_task import JarTask
24 from pants.backend.jvm.tasks.properties import Properties
25 from pants.base.build_environment import get_buildroot, get_scm
26 from pants.base.build_file import BuildFile
27 from pants.base.exceptions import TaskError
28 from pants.base.generator import Generator, TemplateData
29 from pants.build_graph.address import Address
30 from pants.build_graph.address_lookup_error import AddressLookupError
31 from pants.build_graph.build_file_parser import BuildFileParser
32 from pants.build_graph.build_graph import sort_targets
33 from pants.ivy.bootstrapper import Bootstrapper
34 from pants.ivy.ivy import Ivy
35 from pants.task.scm_publish_mixin import Namedver, ScmPublishMixin, Semver
36 from pants.util.dirutil import safe_mkdir, safe_open, safe_rmtree
37 from pants.util.strutil import ensure_text
38
39
40 _TEMPLATES_RELPATH = os.path.join('templates', 'jar_publish')
41
42
43 class PushDb(object):
44
45 @staticmethod
46 def load(path):
47 """Loads a pushdb maintained in a properties file at the given path."""
48 with open(path, 'r') as props:
49 properties = Properties.load(props)
50 return PushDb(properties)
51
52 class Entry(object):
53
54 def __init__(self, sem_ver, named_ver, named_is_latest, sha, fingerprint):
55 """Records the most recent push/release of an artifact.
56
57 :param Semver sem_ver: The last semantically versioned release (or Semver(0.0.0))
58 :param Namedver named_ver: The last named release of this entry (or None)
59 :param boolean named_is_latest: True if named_ver is the latest, false if sem_ver is
60 :param string sha: The last Git SHA (or None)
61 :param string fingerprint: A unique hash for the most recent version of the target.
62 """
63 self.sem_ver = sem_ver
64 self.named_ver = named_ver
65 self.named_is_latest = named_is_latest
66 self.sha = sha
67 self.fingerprint = fingerprint
68
69 def version(self):
70 if self.named_is_latest:
71 return self.named_ver
72 else:
73 return self.sem_ver
74
75 def with_sem_ver(self, sem_ver):
76 """Returns a clone of this entry with the given sem_ver marked as the latest."""
77 return PushDb.Entry(sem_ver, self.named_ver, False, self.sha, self.fingerprint)
78
79 def with_named_ver(self, named_ver):
80 """Returns a clone of this entry with the given name_ver marked as the latest."""
81 return PushDb.Entry(self.sem_ver, named_ver, True, self.sha, self.fingerprint)
82
83 def with_sha_and_fingerprint(self, sha, fingerprint):
84 """Returns a clone of this entry with the given sha and fingerprint."""
85 return PushDb.Entry(self.sem_ver, self.named_ver, self.named_is_latest, sha, fingerprint)
86
87 def __repr__(self):
88 return '<{}, {}, {}, {}, {}, {}>'.format(
89 self.__class__.__name__, self.sem_ver, self.named_ver, self.named_is_latest,
90 self.sha, self.fingerprint)
91
92 def __init__(self, props=None):
93 self._props = props or OrderedDict()
94
95 def get_entry(self, target):
96 """Given an internal target, return a PushDb.Entry, which might contain defaults."""
97 db_get, _ = self._accessors_for_target(target)
98
99 major = int(db_get('revision.major', '0'))
100 minor = int(db_get('revision.minor', '0'))
101 patch = int(db_get('revision.patch', '0'))
102 snapshot = str(db_get('revision.snapshot', 'false')).lower() == 'true'
103 named_version = db_get('revision.named_version', None)
104 named_is_latest = str(db_get('revision.named_is_latest', 'false')).lower() == 'true'
105 sha = db_get('revision.sha', None)
106 fingerprint = db_get('revision.fingerprint', None)
107 sem_ver = Semver(major, minor, patch, snapshot=snapshot)
108 named_ver = Namedver(named_version) if named_version else None
109 return self.Entry(sem_ver, named_ver, named_is_latest, sha, fingerprint)
110
111 def set_entry(self, target, pushdb_entry):
112 pe = pushdb_entry
113 _, db_set = self._accessors_for_target(target)
114 db_set('revision.major', pe.sem_ver.major)
115 db_set('revision.minor', pe.sem_ver.minor)
116 db_set('revision.patch', pe.sem_ver.patch)
117 db_set('revision.snapshot', str(pe.sem_ver.snapshot).lower())
118 if pe.named_ver:
119 db_set('revision.named_version', pe.named_ver.version())
120 db_set('revision.named_is_latest', str(pe.named_is_latest).lower())
121 db_set('revision.sha', pe.sha)
122 db_set('revision.fingerprint', pe.fingerprint)
123
124 def _accessors_for_target(self, target):
125 jar_dep, exported = target.get_artifact_info()
126 if not exported:
127 raise ValueError
128
129 def key(prefix):
130 return '{}.{}%{}'.format(prefix, jar_dep.org, jar_dep.name)
131
132 def getter(prefix, default=None):
133 return self._props.get(key(prefix), default)
134
135 def setter(prefix, value):
136 self._props[key(prefix)] = value
137
138 return getter, setter
139
140 def dump(self, path):
141 """Saves the pushdb as a properties file to the given path."""
142 with open(path, 'w') as props:
143 Properties.dump(self._props, props)
144
145
146 class PomWriter(object):
147 def __init__(self, get_db, tag):
148 self._get_db = get_db
149 self._tag = tag
150
151 def write(self, target, path):
152 dependencies = OrderedDict()
153 for internal_dep in target_internal_dependencies(target):
154 jar = self._as_versioned_jar(internal_dep)
155 key = (jar.org, jar.name)
156 dependencies[key] = self._internaldep(jar, internal_dep)
157
158 for jar in target.jar_dependencies:
159 jardep = self._jardep(jar)
160 if jardep:
161 key = (jar.org, jar.name, jar.classifier)
162 dependencies[key] = jardep
163
164 target_jar = self._internaldep(self._as_versioned_jar(target), target)
165 if target_jar:
166 target_jar = target_jar.extend(dependencies=dependencies.values())
167
168 template_relpath = os.path.join(_TEMPLATES_RELPATH, 'pom.xml.mustache')
169 template_text = pkgutil.get_data(__name__, template_relpath)
170 generator = Generator(template_text, project=target_jar)
171 with safe_open(path, 'w') as output:
172 generator.write(output)
173
174 def _as_versioned_jar(self, internal_target):
175 """Fetches the jar representation of the given target, and applies the latest pushdb version."""
176 jar, _ = internal_target.get_artifact_info()
177 pushdb_entry = self._get_db(internal_target).get_entry(internal_target)
178 return jar.copy(rev=pushdb_entry.version().version())
179
180 def _internaldep(self, jar_dependency, target):
181 template_data = self._jardep(jar_dependency)
182 if isinstance(target.provides.publication_metadata, OSSRHPublicationMetadata):
183 pom = target.provides.publication_metadata
184
185 # Forming the project name from the coordinates like this is acceptable as a fallback when
186 # the user supplies no project name.
187 # See: http://central.sonatype.org/pages/requirements.html#project-name-description-and-url
188 name = pom.name or '{}:{}'.format(jar_dependency.org, jar_dependency.name)
189
190 template_data = template_data.extend(name=name,
191 description=pom.description,
192 url=pom.url,
193 licenses=pom.licenses,
194 scm=pom.scm.tagged(self._tag),
195 developers=pom.developers)
196 return template_data
197
198 def _jardep(self, jar):
199 return TemplateData(
200 classifier=jar.classifier,
201 artifact_id=jar.name,
202 group_id=jar.org,
203 version=jar.rev,
204 scope='compile',
205 excludes=[TemplateData(org=exclude.org, name=exclude.name)
206 for exclude in jar.excludes if exclude.name])
207
208
209 def coordinate(org, name, rev=None):
210 return '{}#{};{}'.format(org, name, rev) if rev else '{}#{}'.format(org, name)
211
212
213 def jar_coordinate(jar, rev=None):
214 return coordinate(jar.org, jar.name, rev or jar.rev)
215
216
217 def pushdb_coordinate(jar, entry):
218 return jar_coordinate(jar, rev=entry.version().version())
219
220
221 def target_internal_dependencies(target):
222 """Returns internal Jarable dependencies that were "directly" declared.
223
224 Directly declared deps are those that are explicitly listed in the definition of a
225 target, rather than being depended on transitively. But in order to walk through
226 aggregator targets such as `target`, `dependencies`, or `jar_library`, this recursively
227 descends the dep graph and stops at Jarable instances."""
228 for dep in target.dependencies:
229 if isinstance(dep, Jarable):
230 yield dep
231 else:
232 for childdep in target_internal_dependencies(dep):
233 yield childdep
234
235
236 class JarPublish(ScmPublishMixin, JarTask):
237 """Publish jars to a maven repository.
238
239 At a high-level, pants uses `Apache Ivy <http://ant.apache.org/ivy/>`_ to
240 publish artifacts to Maven-style repositories. Pants performs prerequisite
241 tasks like compiling, creating jars, and generating ``pom.xml`` files then
242 invokes Ivy to actually publish the artifacts, so publishing is largely
243 configured in ``ivysettings.xml``. ``BUILD`` and ``pants.ini`` files
244 primarily provide linkage between publishable targets and the
245 Ivy ``resolvers`` used to publish them.
246
247 The following target types are publishable:
248 `java_library <build_dictionary.html#java_library>`_,
249 `scala_library <build_dictionary.html#scala_library>`_,
250 `java_thrift_library <build_dictionary.html#java_thrift_library>`_,
251 `annotation_processor <build_dictionary.html#annotation_processor>`_.
252 Targets to publish and their dependencies must be publishable target
253 types and specify the ``provides`` argument. One exception is
254 `jar <build_dictionary.html#jar>`_\s - pants will generate a pom file that
255 depends on the already-published jar.
256
257 Example usage: ::
258
259 # By default pants will perform a dry-run.
260 ./pants clean-all publish src/java/com/twitter/mybird
261
262 # Actually publish.
263 ./pants clean-all publish src/java/com/twitter/mybird --no-publish-dryrun
264
265 Please see ``./pants publish -h`` for a detailed description of all
266 publishing options.
267
268 Publishing can be configured with the following options:
269
270 * ``--repos`` - Required dictionary of settings for repos that may be pushed to.
271 * ``--jvm-options`` - Optional list of JVM command-line args when invoking Ivy.
272 * ``--restrict-push-branches`` - Optional list of branches to restrict publishing to.
273
274 Example repos dictionary: ::
275
276 repos = {
277 # repository target name is paired with this key
278 'myrepo': {
279 # ivysettings.xml resolver to use for publishing
280 'resolver': 'maven.example.com',
281 # address of a Credentials target to use when publishing
282 'auth': 'address/of/credentials:target',
283 # help message if unable to initialize the Credentials target.
284 'help': 'Please check your credentials and try again.',
285 },
286 }
287 """
288
289 class Publication(namedtuple('Publication', ['name', 'classifier', 'ext'])):
290 """Represents an artifact publication.
291
292 There will be at least 2 of these for any given published coordinate - a pom, and at least one
293 other artifact.
294 """
295
296 class DuplicateArtifactError(TaskError):
297 """An artifact was defined by two different targets."""
298
299 @classmethod
300 def register_options(cls, register):
301 super(JarPublish, cls).register_options(register)
302
303 # TODO(John Sirois): Support a preview mode that outputs a file with entries like:
304 # artifact id:
305 # revision:
306 # publish: (true|false)
307 # changelog:
308 #
309 # Allow re-running this goal with the file as input to support forcing an arbitrary set of
310 # revisions and supply of hand edited changelogs.
311
312 register('--dryrun', default=True, type=bool,
313 help='Run through a push without actually pushing artifacts, editing publish dbs or '
314 'otherwise writing data')
315 register('--commit', default=True, type=bool,
316 help='Commit the push db. Turn off for local testing.')
317 register('--local', metavar='<PATH>',
318 help='Publish jars to a maven repository on the local filesystem at this path.')
319 register('--local-snapshot', default=True, type=bool,
320 help='If --local is specified, publishes jars with -SNAPSHOT revision suffixes.')
321 register('--named-snapshot', default=None,
322 help='Publish all artifacts with the given snapshot name, replacing their version. '
323 'This is not Semantic Versioning compatible, but is easier to consume in cases '
324 'where many artifacts must align.')
325 register('--transitive', default=True, type=bool,
326 help='Publish the specified targets and all their internal dependencies transitively.')
327 register('--force', type=bool,
328 help='Force pushing jars even if there have been no changes since the last push.')
329 register('--override', type=list,
330 help='Specifies a published jar revision override in the form: '
331 '([org]#[name]|[target spec])=[new revision] '
332 'For example, to specify 2 overrides: '
333 '--override=com.foo.bar#baz=0.1.2 --override=src/java/com/foo/bar/qux=1.0.0')
334 register('--restart-at',
335 help='Restart a fail push at the given jar. Jars can be identified by '
336 'maven coordinate [org]#[name] or target. '
337 'For example: --restart-at=com.twitter.common#quantity '
338 'Or: --restart-at=src/java/com/twitter/common/base')
339 register('--ivy_settings', advanced=True, default=None,
340 help='Specify a custom ivysettings.xml file to be used when publishing.')
341 register('--repos', advanced=True, type=dict,
342 help='Settings for repositories that can be pushed to. See '
343 'https://pantsbuild.org/publish.html for details.')
344 register('--publish-extras', advanced=True, type=dict,
345 help='Extra products to publish. See '
346 'https://pantsbuild.org/dev_tasks_publish_extras.html for details.')
347 register('--individual-plugins', advanced=True, type=bool,
348 help='Extra products to publish as a individual artifact.')
349 register('--push-postscript', advanced=True, default=None,
350 help='A post-script to add to pushdb commit messages and push tag commit messages.')
351 register('--changelog', default=True, type=bool,
352 help='A changelog.txt file will be created and printed to the console for each '
353 'artifact published')
354 register('--prompt', default=True, type=bool,
355 help='Interactively prompt user before publishing each artifact.')
356
357 @classmethod
358 def prepare(cls, options, round_manager):
359 super(JarPublish, cls).prepare(options, round_manager)
360 round_manager.require('jars')
361 round_manager.require('javadoc')
362 round_manager.require('scaladoc')
363
364 def __init__(self, *args, **kwargs):
365 super(JarPublish, self).__init__(*args, **kwargs)
366 self.cachedir = os.path.join(self.workdir, 'cache')
367
368 self._jvm_options = self.get_options().jvm_options
369
370 self.log = self.context.log
371
372 if self.get_options().local:
373 local_repo = dict(
374 resolver='publish_local',
375 path=os.path.abspath(os.path.expanduser(self.get_options().local)),
376 confs=['default'],
377 auth=None
378 )
379 self.repos = defaultdict(lambda: local_repo)
380 self.commit = False
381 self.local_snapshot = self.get_options().local_snapshot
382 else:
383 self.repos = self.get_options().repos
384 if not self.repos:
385 raise TaskError(
386 "This repo is not configured to publish externally! Please configure per\n"
387 "http://pantsbuild.org/publish.html#authenticating-to-the-artifact-repository,\n"
388 "by setting --publish-jar-repos=<dict> or re-run with '--publish-jar-local=<dir>'.")
389 for repo, data in self.repos.items():
390 auth = data.get('auth')
391 if auth:
392 credentials = next(iter(self.context.resolve(auth)))
393 user = credentials.username(data['resolver'])
394 password = credentials.password(data['resolver'])
395 self.context.log.debug('Found auth for repo={} user={}'.format(repo, user))
396 self.repos[repo]['username'] = user
397 self.repos[repo]['password'] = password
398 self.commit = self.get_options().commit
399 self.push_postscript = self.get_options().push_postscript or ''
400 self.local_snapshot = False
401
402 self.scm = get_scm() if self.commit else None
403
404 self.named_snapshot = self.get_options().named_snapshot
405 if self.named_snapshot:
406 self.named_snapshot = Namedver.parse(self.named_snapshot)
407
408 self.dryrun = self.get_options().dryrun
409 self.transitive = self.get_options().transitive
410 self.force = self.get_options().force
411 self.publish_changelog = self.get_options().changelog and self.scm
412
413 def parse_jarcoordinate(coordinate):
414 components = coordinate.split('#', 1)
415 if len(components) == 2:
416 org, name = components
417 return org, name
418 else:
419 spec = components[0]
420 address = Address.parse(spec)
421 try:
422 self.context.build_graph.inject_address_closure(address)
423 target = self.context.build_graph.get_target(address)
424 if not target:
425 siblings = self.context.address_mapper.addresses_in_spec_path(address.spec_path)
426 prompt = 'did you mean' if len(siblings) == 1 else 'maybe you meant one of these'
427 raise TaskError('{} => {}?:\n {}'.format(address, prompt,
428 '\n '.join(str(a) for a in siblings)))
429 if not target.is_exported:
430 raise TaskError('{} is not an exported target'.format(coordinate))
431 return target.provides.org, target.provides.name
432 except (BuildFile.BuildFileError,
433 BuildFileParser.BuildFileParserError,
434 AddressLookupError) as e:
435 raise TaskError('{message}\n Problem identifying target at {spec}'
436 .format(message=e, spec=spec))
437
438 self.overrides = {}
439 if self.get_options().override:
440 if self.named_snapshot:
441 raise TaskError('Options --named-snapshot and --override are mutually exclusive!')
442
443 def parse_override(override):
444 try:
445 coordinate, rev = override.split('=', 1)
446 try:
447 # overrides imply semantic versioning
448 rev = Semver.parse(rev)
449 except ValueError as e:
450 raise TaskError('Invalid version {}: {}'.format(rev, e))
451 return parse_jarcoordinate(coordinate), rev
452 except ValueError:
453 raise TaskError('Invalid override: {}'.format(override))
454
455 self.overrides.update(parse_override(o) for o in self.get_options().override)
456
457 self.restart_at = None
458 if self.get_options().restart_at:
459 self.restart_at = parse_jarcoordinate(self.get_options().restart_at)
460
461 def confirm_push(self, coord, version):
462 """Ask the user if a push should be done for a particular version of a
463 particular coordinate. Return True if the push should be done"""
464 if not self.get_options().prompt:
465 return True
466 try:
467 isatty = os.isatty(sys.stdin.fileno())
468 except ValueError:
469 # In tests, sys.stdin might not have a fileno
470 isatty = False
471 if not isatty:
472 return True
473 push = raw_input('\nPublish {} with revision {} ? [y|N] '.format(
474 coord, version
475 ))
476 print('\n')
477 return push.strip().lower() == 'y'
478
479 def _copy_artifact(self, tgt, jar, version, typename, suffix='', extension='jar',
480 artifact_ext='', override_name=None):
481 """Copy the products for a target into the artifact path for the jar/version"""
482 genmap = self.context.products.get(typename)
483 product_mapping = genmap.get(tgt)
484 if product_mapping is None:
485 raise ValueError("No product mapping in {} for {}. "
486 "You may need to run some other task first".format(typename, tgt))
487 for basedir, jars in product_mapping.items():
488 for artifact in jars:
489 path = self.artifact_path(jar, version, name=override_name, suffix=suffix,
490 extension=extension, artifact_ext=artifact_ext)
491 safe_mkdir(os.path.dirname(path))
492 shutil.copy(os.path.join(basedir, artifact), path)
493
494 def _ivy_jvm_options(self, repo):
495 """Get the JVM options for ivy authentication, if needed."""
496 # Get authentication for the publish repo if needed.
497 if not repo.get('auth'):
498 # No need to copy here, as this list isn't modified by the caller.
499 return self._jvm_options
500
501 # Create a copy of the options, so that the modification is appropriately transient.
502 jvm_options = copy(self._jvm_options)
503 user = repo.get('username')
504 password = repo.get('password')
505 if user and password:
506 jvm_options.append('-Dlogin={}'.format(user))
507 jvm_options.append('-Dpassword={}'.format(password))
508 else:
509 raise TaskError('Unable to publish to {}. {}'
510 .format(repo.get('resolver'), repo.get('help', '')))
511 return jvm_options
512
513 def publish(self, publications, jar, entry, repo, published):
514 """Run ivy to publish a jar. publications describes the artifacts to publish; published
515 is a list of jars published so far (including this one); entry is a pushdb entry."""
516
517 try:
518 ivy = Bootstrapper.default_ivy()
519 except Bootstrapper.Error as e:
520 raise TaskError('Failed to push {0}! {1}'.format(pushdb_coordinate(jar, entry), e))
521
522 path = repo.get('path')
523 ivysettings = self.generate_ivysettings(ivy, published, publish_local=path)
524
525 version = entry.version().version()
526 ivyxml = self.generate_ivy(jar, version, publications)
527
528 resolver = repo['resolver']
529 args = [
530 '-settings', ivysettings,
531 '-ivy', ivyxml,
532
533 # Without this setting, the ivy.xml is delivered to the CWD, littering the workspace. We
534 # don't need the ivy.xml, so just give it a path under the workdir we won't use.
535 '-deliverto', ivyxml + '.unused',
536
537 '-publish', resolver,
538 '-publishpattern', '{}/[organisation]/[module]/'
539 '[artifact]-[revision](-[classifier]).[ext]'.format(self.workdir),
540 '-revision', version,
541 '-m2compatible',
542 ]
543
544 # TODO(John Sirois): global logging options should be hidden behind some sort of log manager
545 # that we can:
546 # a.) obtain a handle to (dependency injection or manual plumbing)
547 # b.) query for log detail, ie: `if log_manager.is_verbose:`
548 if self.get_options().level == 'debug':
549 args.append('-verbose')
550
551 if self.local_snapshot:
552 args.append('-overwrite')
553
554 try:
555 jvm_options = self._ivy_jvm_options(repo)
556 ivy.execute(jvm_options=jvm_options, args=args,
557 workunit_factory=self.context.new_workunit, workunit_name='ivy-publish')
558 except Ivy.Error as e:
559 raise TaskError('Failed to push {0}! {1}'.format(pushdb_coordinate(jar, entry), e))
560
561 def execute(self):
562 self.check_clean_master(commit=(not self.dryrun and self.commit))
563
564 exported_targets = self.exported_targets()
565 self.check_targets(exported_targets)
566
567 pushdbs = {}
568
569 def get_db(tgt):
570 # TODO(tdesai) Handle resource type in get_db.
571 if tgt.provides is None:
572 raise TaskError('trying to publish target {!r} which does not provide an artifact'.format(tgt))
573 dbfile = tgt.provides.repo.push_db(tgt)
574 result = pushdbs.get(dbfile)
575 if not result:
576 # Create an empty pushdb if no dbfile exists.
577 if (os.path.exists(dbfile)):
578 db = PushDb.load(dbfile)
579 else:
580 safe_mkdir(os.path.dirname(dbfile))
581 db = PushDb()
582 try:
583 repo = self.repos[tgt.provides.repo.name]
584 except KeyError:
585 raise TaskError('Repository {0} has no entry in the --repos option.'.format(
586 tgt.provides.repo.name))
587 result = (db, dbfile, repo)
588 pushdbs[dbfile] = result
589 return result
590
591 def get_pushdb(tgt):
592 return get_db(tgt)[0]
593
594 def fingerprint_internal(tgt):
595 pushdb = get_pushdb(tgt)
596 entry = pushdb.get_entry(tgt)
597 return entry.fingerprint or '0.0.0'
598
599 def stage_artifacts(tgt, jar, version, tag, changelog):
600 publications = OrderedSet()
601
602 # TODO Remove this once we fix https://github.com/pantsbuild/pants/issues/1229
603 if (not self.context.products.get('jars').has(tgt) and
604 not self.get_options().individual_plugins):
605 raise TaskError('Expected to find a primary artifact for {} but there was no jar for it.'
606 .format(tgt.address.reference()))
607
608 # TODO Remove this guard once we fix https://github.com/pantsbuild/pants/issues/1229, there
609 # should always be a primary artifact.
610 if self.context.products.get('jars').has(tgt):
611 self._copy_artifact(tgt, jar, version, typename='jars')
612 publications.add(self.Publication(name=jar.name, classifier=None, ext='jar'))
613
614 self.create_source_jar(tgt, jar, version)
615 publications.add(self.Publication(name=jar.name, classifier='sources', ext='jar'))
616
617 # don't request docs unless they are available for all transitive targets
618 # TODO: doc products should be checked by an independent jar'ing task, and
619 # conditionally enabled; see https://github.com/pantsbuild/pants/issues/568
620 doc_jar = self.create_doc_jar(tgt, jar, version)
621 if doc_jar:
622 publications.add(self.Publication(name=jar.name, classifier='javadoc', ext='jar'))
623
624 if self.publish_changelog:
625 changelog_path = self.artifact_path(jar, version, suffix='-CHANGELOG', extension='txt')
626 with safe_open(changelog_path, 'wb') as changelog_file:
627 changelog_file.write(changelog.encode('utf-8'))
628 publications.add(self.Publication(name=jar.name, classifier='CHANGELOG', ext='txt'))
629
630 # Process any extra jars that might have been previously generated for this target, or a
631 # target that it was derived from.
632 for extra_product, extra_config in (self.get_options().publish_extras or {}).items():
633 override_name = jar.name
634 if 'override_name' in extra_config:
635 # If the supplied string has a '{target_provides_name}' in it, replace it with the
636 # current jar name. If not, the string will be taken verbatim.
637 override_name = extra_config['override_name'].format(target_provides_name=jar.name)
638
639 classifier = None
640 suffix = ''
641 if 'classifier' in extra_config:
642 classifier = extra_config['classifier']
643 suffix = "-{0}".format(classifier)
644
645 extension = extra_config.get('extension', 'jar')
646
647 extra_pub = self.Publication(name=override_name, classifier=classifier, ext=extension)
648
649 # A lot of flexibility is allowed in parameterizing the extra artifact; ensure those
650 # parameters lead to a unique publication.
651 # TODO(John Sirois): Check this much earlier.
652 if extra_pub in publications:
653 raise TaskError("publish_extra for '{0}' must override one of name, classifier or "
654 "extension with a non-default value.".format(extra_product))
655
656 # Build a list of targets to check. This list will consist of the current target, plus the
657 # entire derived_from chain.
658 target_list = [tgt]
659 target = tgt
660 while target.derived_from != target:
661 target_list.append(target.derived_from)
662 target = target.derived_from
663 for cur_tgt in target_list:
664 if self.context.products.get(extra_product).has(cur_tgt):
665 self._copy_artifact(cur_tgt, jar, version, typename=extra_product, suffix=suffix,
666 extension=extension, override_name=override_name)
667 publications.add(extra_pub)
668
669 pom_path = self.artifact_path(jar, version, extension='pom')
670 PomWriter(get_pushdb, tag).write(tgt, path=pom_path)
671 return publications
672
673 if self.overrides:
674 print('\nPublishing with revision overrides:')
675 for (org, name), rev in self.overrides.items():
676 print('{0}={1}'.format(coordinate(org, name), rev))
677
678 head_sha = self.scm.commit_id if self.scm else None
679
680 safe_rmtree(self.workdir)
681 published = []
682 skip = (self.restart_at is not None)
683 for target in exported_targets:
684 pushdb, dbfile, repo = get_db(target)
685 oldentry = pushdb.get_entry(target)
686
687 # the jar version is ignored here, since it is overridden below with the new entry
688 jar, _ = target.get_artifact_info()
689 published.append(jar)
690
691 if skip and (jar.org, jar.name) == self.restart_at:
692 skip = False
693 # select the next version: either a named version, or semver via the pushdb/overrides
694 if self.named_snapshot:
695 newentry = oldentry.with_named_ver(self.named_snapshot)
696 else:
697 override = self.overrides.get((jar.org, jar.name))
698 sem_ver = override if override else oldentry.sem_ver.bump()
699 if self.local_snapshot:
700 sem_ver = sem_ver.make_snapshot()
701
702 if sem_ver <= oldentry.sem_ver:
703 raise TaskError('Requested version {} must be greater than the current version {}'.format(
704 sem_ver, oldentry.sem_ver
705 ))
706 newentry = oldentry.with_sem_ver(sem_ver)
707
708 newfingerprint = self.entry_fingerprint(target, fingerprint_internal)
709 newentry = newentry.with_sha_and_fingerprint(head_sha, newfingerprint)
710 no_changes = newentry.fingerprint == oldentry.fingerprint
711
712 changelog = ''
713 if self.publish_changelog:
714 if no_changes:
715 changelog = 'No changes for {0} - forced push.\n'.format(pushdb_coordinate(jar, oldentry))
716 else:
717 changelog = self.changelog(target, oldentry.sha) or 'Direct dependencies changed.\n'
718
719 org = jar.org
720 name = jar.name
721 rev = newentry.version().version()
722 tag_name = '{org}-{name}-{rev}'.format(org=org, name=name, rev=rev) if self.commit else None
723
724 if no_changes and not self.force:
725 print('No changes for {0}'.format(pushdb_coordinate(jar, oldentry)))
726 stage_artifacts(target, jar, oldentry.version().version(), tag_name, changelog)
727 elif skip:
728 print('Skipping {} to resume at {}'.format(
729 jar_coordinate(jar, (newentry.version() if self.force else oldentry.version()).version()),
730 coordinate(self.restart_at[0], self.restart_at[1])
731 ))
732 stage_artifacts(target, jar, oldentry.version().version(), tag_name, changelog)
733 else:
734 if not self.dryrun:
735 # Confirm push looks good
736 if self.publish_changelog:
737 if no_changes:
738 print(changelog)
739 else:
740 # The changelog may contain non-ascii text, but the print function can, under certain
741 # circumstances, incorrectly detect the output encoding to be ascii and thus blow up
742 # on non-ascii changelog characters. Here we explicitly control the encoding to avoid
743 # the print function's mis-interpretation.
744 # TODO(John Sirois): Consider introducing a pants/util `print_safe` helper for this.
745 message = '\nChanges for {} since {} @ {}:\n\n{}\n'.format(
746 coordinate(jar.org, jar.name), oldentry.version(), oldentry.sha, changelog)
747 # The stdout encoding can be detected as None when running without a tty (common in
748 # tests), in which case we want to force encoding with a unicode-supporting codec.
749 encoding = sys.stdout.encoding or 'utf-8'
750 sys.stdout.write(message.encode(encoding))
751 if not self.confirm_push(coordinate(jar.org, jar.name), newentry.version()):
752 raise TaskError('User aborted push')
753
754 pushdb.set_entry(target, newentry)
755 publications = stage_artifacts(target, jar, rev, tag_name, changelog)
756
757 if self.dryrun:
758 print('Skipping publish of {0} in test mode.'.format(pushdb_coordinate(jar, newentry)))
759 else:
760 self.publish(publications, jar=jar, entry=newentry, repo=repo, published=published)
761
762 if self.commit:
763 coord = coordinate(org, name, rev)
764
765 pushdb.dump(dbfile)
766
767 self.publish_pushdb_changes_to_remote_scm(
768 pushdb_file=dbfile,
769 coordinate=coord,
770 tag_name=tag_name,
771 tag_message='Publish of {coordinate} initiated by {user} {cause}'.format(
772 coordinate=coord,
773 user=getpass.getuser(),
774 cause='with forced revision' if (org, name) in self.overrides else '(autoinc)',
775 ),
776 postscript=self.push_postscript
777 )
778
779 def artifact_path(self, jar, version, name=None, suffix='', extension='jar', artifact_ext=''):
780 return os.path.join(self.workdir, jar.org, jar.name + artifact_ext,
781 '{}{}-{}{}.{}'.format((name or jar.name),
782 artifact_ext if name != 'ivy' else '',
783 version,
784 suffix,
785 extension))
786
787 def check_for_duplicate_artifacts(self, targets):
788 targets_by_artifact = defaultdict(list)
789 duplicates = set()
790 for target in targets:
791 artifact = target.provides
792 if artifact in targets_by_artifact:
793 duplicates.add(artifact)
794 targets_by_artifact[artifact].append(target)
795
796 def duplication_message(artifact):
797 specs = sorted('\n {}'.format(t.address.spec) for t in targets_by_artifact[artifact])
798 return '\n {artifact} is defined by:{specs}'.format(artifact=artifact, specs=''.join(specs))
799
800 if duplicates:
801 raise self.DuplicateArtifactError('Multiple targets define the same artifacts!\n{}'.format(
802 '\n'.join(duplication_message(artifact) for artifact in duplicates)))
803
804 def check_targets(self, targets):
805 self.check_for_duplicate_artifacts(targets)
806 invalid = defaultdict(lambda: defaultdict(set))
807 derived_by_target = defaultdict(set)
808
809 def collect_invalid(publish_target, walked_target):
810 for derived_target in walked_target.derived_from_chain:
811 derived_by_target[derived_target].add(walked_target)
812 if not walked_target.has_sources() or not walked_target.sources_relative_to_buildroot():
813 invalid[publish_target][walked_target].add('No sources.')
814 if not walked_target.is_exported:
815 invalid[publish_target][walked_target].add('Does not provide a binary artifact.')
816
817 for target in targets:
818 target.walk(functools.partial(collect_invalid, target),
819 predicate=lambda t: isinstance(t, Jarable))
820
821 # When walking the graph of a publishable target, we may encounter families of sibling targets
822 # that form a derivation chain. As long as one of these siblings is publishable, we can
823 # proceed and publish a valid graph.
824 for publish_target, invalid_targets in list(invalid.items()):
825 for invalid_target, reasons in list(invalid_targets.items()):
826 derived_from_set = derived_by_target[invalid_target]
827 if derived_from_set - set(invalid_targets.keys()):
828 invalid_targets.pop(invalid_target)
829 if not invalid_targets:
830 invalid.pop(publish_target)
831
832 if invalid:
833 msg = list()
834
835 def first_address(pair):
836 first, _ = pair
837 return str(first.address)
838
839 for publish_target, invalid_targets in sorted(invalid.items(), key=first_address):
840 msg.append('\n Cannot publish {} due to:'.format(publish_target.address))
841 for invalid_target, reasons in sorted(invalid_targets.items(), key=first_address):
842 for reason in sorted(reasons):
843 msg.append('\n {} - {}'.format(invalid_target.address, reason))
844
845 raise TaskError('The following errors must be resolved to publish.{}'.format(''.join(msg)))
846
847 def exported_targets(self):
848 candidates = set()
849 if self.transitive:
850 candidates.update(self.context.targets())
851 else:
852 candidates.update(self.context.target_roots)
853
854 def get_synthetic(lang, target):
855 mappings = self.context.products.get(lang).get(target)
856 if mappings:
857 for key, generated in mappings.items():
858 for synthetic in generated:
859 yield synthetic
860
861 # Handle the case where a code gen target is in the listed roots and thus the publishable
862 # target is a synthetic twin generated by a code gen task upstream.
863 for candidate in self.context.target_roots:
864 candidates.update(get_synthetic('java', candidate))
865 candidates.update(get_synthetic('scala', candidate))
866
867 def exportable(tgt):
868 return tgt in candidates and tgt.is_exported
869
870 return OrderedSet(filter(exportable,
871 reversed(sort_targets(filter(exportable, candidates)))))
872
873 def entry_fingerprint(self, target, fingerprint_internal):
874 sha = hashlib.sha1()
875 sha.update(target.invalidation_hash())
876
877 # TODO(Tejal Desai): pantsbuild/pants/65: Remove java_sources attribute for ScalaLibrary
878 if isinstance(target, ScalaLibrary):
879 for java_source in sorted(target.java_sources):
880 sha.update(java_source.invalidation_hash())
881
882 # TODO(John Sirois): handle resources
883
884 for jarsig in sorted([jar_coordinate(j) for j in target.jar_dependencies if j.rev]):
885 sha.update(jarsig)
886
887 # TODO(tdesai) Handle resource type in get_db.
888 internal_dependencies = sorted(target_internal_dependencies(target), key=lambda t: t.id)
889 for internal_target in internal_dependencies:
890 fingerprint = fingerprint_internal(internal_target)
891 sha.update(fingerprint)
892
893 return sha.hexdigest()
894
895 def changelog(self, target, sha):
896 # Filter synthetic files.
897 files = filter(lambda filename: not filename.startswith(os.pardir), target.sources_relative_to_buildroot())
898 return ensure_text(self.scm.changelog(from_commit=sha, files=files))
899
900 def fetch_ivysettings(self, ivy):
901 if self.get_options().ivy_settings:
902 return self.get_options().ivy_settings
903 elif ivy.ivy_settings is None:
904 raise TaskError('An ivysettings.xml with writeable resolvers is required for publishing, '
905 'but none was configured.')
906 else:
907 return ivy.ivy_settings
908
909 def generate_ivysettings(self, ivy, publishedjars, publish_local=None):
910 template_relpath = os.path.join(_TEMPLATES_RELPATH, 'ivysettings.xml.mustache')
911 template_text = pkgutil.get_data(__name__, template_relpath)
912
913 published = [TemplateData(org=jar.org, name=jar.name) for jar in publishedjars]
914
915 generator = Generator(template_text,
916 ivysettings=self.fetch_ivysettings(ivy),
917 dir=self.workdir,
918 cachedir=self.cachedir,
919 published=published,
920 publish_local=publish_local)
921
922 with safe_open(os.path.join(self.workdir, 'ivysettings.xml'), 'w') as wrapper:
923 generator.write(wrapper)
924 return wrapper.name
925
926 def generate_ivy(self, jar, version, publications):
927 template_relpath = os.path.join(_TEMPLATES_RELPATH, 'ivy.xml.mustache')
928 template_text = pkgutil.get_data(__name__, template_relpath)
929
930 pubs = [TemplateData(name=None if p.name == jar.name else p.name,
931 classifier=p.classifier,
932 ext=None if p.ext == 'jar' else p.ext) for p in publications]
933
934 generator = Generator(template_text,
935 org=jar.org,
936 name=jar.name,
937 rev=version,
938 publications=pubs)
939
940 with safe_open(os.path.join(self.workdir, 'ivy.xml'), 'w') as ivyxml:
941 generator.write(ivyxml)
942 return ivyxml.name
943
944 def create_source_jar(self, target, open_jar, version):
945 # TODO(Tejal Desai) pantsbuild/pants/65: Avoid creating 2 jars with java sources for a
946 # scala_library with java_sources. Currently publish fails fast if scala_library owning
947 # java sources pointed to by a java_library target also provides an artifact. However, jar_create
948 # ends up creating 2 jars, one scala and one java, both including the java_sources.
949
950 def abs_and_relative_sources(target):
951 abs_source_root = os.path.join(get_buildroot(), target.target_base)
952 for source in target.sources_relative_to_source_root():
953 yield os.path.join(abs_source_root, source), source
954
955 jar_path = self.artifact_path(open_jar, version, suffix='-sources')
956 with self.open_jar(jar_path, overwrite=True, compressed=True) as open_jar:
957 for abs_source, rel_source in abs_and_relative_sources(target):
958 open_jar.write(abs_source, rel_source)
959
960 # TODO(Tejal Desai): pantsbuild/pants/65 Remove java_sources attribute for ScalaLibrary
961 if isinstance(target, ScalaLibrary):
962 for java_source_target in target.java_sources:
963 for abs_source, rel_source in abs_and_relative_sources(java_source_target):
964 open_jar.write(abs_source, rel_source)
965
966 if target.has_resources:
967 for resource_target in target.resources:
968 for abs_source, rel_source in abs_and_relative_sources(resource_target):
969 open_jar.write(abs_source, rel_source)
970
971 return jar_path
972
973 def _java_doc(self, target):
974 return self.context.products.get('javadoc').get(target)
975
976 def _scala_doc(self, target):
977 return self.context.products.get('scaladoc').get(target)
978
979 def create_doc_jar(self, target, open_jar, version):
980 """Returns a doc jar if either scala or java docs are available for the given target."""
981 javadoc = self._java_doc(target)
982 scaladoc = self._scala_doc(target)
983 if javadoc or scaladoc:
984 jar_path = self.artifact_path(open_jar, version, suffix='-javadoc')
985 with self.open_jar(jar_path, overwrite=True, compressed=True) as open_jar:
986 def add_docs(docs):
987 if docs:
988 for basedir, doc_files in docs.items():
989 for doc_file in doc_files:
990 open_jar.write(os.path.join(basedir, doc_file), doc_file)
991
992 add_docs(javadoc)
993 add_docs(scaladoc)
994 return jar_path
995 else:
996 return None
997
[end of src/python/pants/backend/jvm/tasks/jar_publish.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>

Repository: pantsbuild/pants
Base commit: 057bbbb43215f9067ca4e2b3203bc6f949107928

Problem statement:
Error in publishing jars with resources after updates
When publishing a jar with resources, the jar is not correctly updated when the resources change, but the sources jar _is_ updated.
Running 1.3.0rc1
## Repro
Have all these in the same directory:
`pants.ini`:
```
[GLOBAL]
pants_version: 1.3.0rc1
[ivy]
ivy_settings: %(pants_supportdir)s/ivysettings.xml
[publish.jar]
dryrun=False
local=~/.m2/repository
prompt=False
changelog=False
named_snapshot=0.1-SNAPSHOT
```
`build-support/ivysettings.xml`:
```
<?xml version="1.0"?>
<ivysettings>
<settings defaultResolver="chain-repos"/>
<resolvers>
<chain name="chain-repos" returnFirst="true">
<ibiblio name="maven-central" m2compatible="true" descriptor="required"/>
<filesystem name="local" m2compatible="true"/>
</chain>
</resolvers>
</ivysettings>
```
`BUILD`:
```
scala_library(
name='MyLibrary',
sources=['MyClass.scala'],
dependencies=[':MyResources'],
provides=artifact(
org='foo',
name='my-library',
repo=repository(
push_db_basedir='build-support/pushdb'
),
),
)
resources(
name='MyResources',
sources=['hello.txt'],
)
```
`MyClass.scala`:
```
package bar
object MyClass {
val myVal: Option[Int] = None
}
```
`hello.txt`:
```
hi
```
Run `./pants publish.jar ::` and then:
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT.jar hello.txt
hi
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT-sources.jar hello.txt
hi
```
Now update `hello.txt`:
```
bye
```
Run `./pants publish.jar ::` and then:
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT.jar hello.txt
hi
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT-sources.jar hello.txt
bye
```

Discussion:
A similar repro:
1. Make sure to clear all caches
2. Start as above, publishing the first time
3. Edit `MyResources` to have `sources=[],`
4. Run publish again
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT.jar hello.txt
hi
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT-sources.jar hello.txt
caution: filename not matched: hello.txt
```
Or another flavor:
1. Make sure to clear all caches
2. Start as above, except with `MyResources` not in the dependencies for `MyLibrary`
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT.jar hello.txt
caution: filename not matched: hello.txt
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT-sources.jar hello.txt
caution: filename not matched: hello.txt
```
3. Add `MyResources` as a dependency for `MyLibrary`
4. Publish again
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT.jar hello.txt
caution: filename not matched: hello.txt
$ unzip -q -c ~/.m2/repository/foo/my-library/0.1-SNAPSHOT/my-library-0.1-SNAPSHOT-sources.jar hello.txt
hi
```
Thanks for the report @bhmiller !
One thing to try and rule out quickly: do you see the same result when you use different `--named-snapshot` versions for each publish? Colliding/reusing versions are generally problematic (since there is so much caching going on).
Probably unrelated, but note that `--named-snapshot` is not intended to be used to create semver versions (it eagerly fails if you try to give it one).
Changing `pants.ini` to `named_snapshot=0.2-SNAPSHOT` still exhibits the bad behavior:
```
$ unzip -q -c ~/.m2/repository/foo/my-library/0.2-SNAPSHOT/my-library-0.2-SNAPSHOT.jar hello.txt
caution: filename not matched: hello.txt
$ unzip -q -c ~/.m2/repository/foo/my-library/0.2-SNAPSHOT/my-library-0.2-SNAPSHOT-sources.jar hello.txt
hi
```
(starting point for this being my last comment's 2nd case)
Regarding the previous comment: that was from the first time publishing with the updated `named_snapshot` setting in `pants.ini`.
Will look at this one tonight.
I repro... nice find. Looks like an invalidation bug in the `publish.jar` task. As a workaround, you should be able to run `./pants clean-all` before publishing (see the example below).
But it also occurs in the `1.2.1` release, so I'm going to remove this from the `1.3.x` release.
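As a minimal illustration of that workaround (the `::` target spec is reused from the repro commands above; adjust it to the targets you actually publish):
```
./pants clean-all
./pants publish.jar ::
```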
thanks for the update
We should be able to target to `1.3.1` though, as soon as we get `1.3.0` out the door.
thoughts on tagging this to the 1.3.1 milestone now that 1.3.0 is done? thanks.
Thanks, done.

Created at: 2017-10-28T18:55:42Z

Patch:
<patch>
diff --git a/src/python/pants/backend/jvm/tasks/jar_create.py b/src/python/pants/backend/jvm/tasks/jar_create.py
--- a/src/python/pants/backend/jvm/tasks/jar_create.py
+++ b/src/python/pants/backend/jvm/tasks/jar_create.py
@@ -62,7 +62,14 @@ def cache_target_dirs(self):
return True
def execute(self):
- with self.invalidated(self.context.targets(is_jvm_library)) as invalidation_check:
+ # NB: Invalidating dependents transitively is more than is strictly necessary, but
+ # we know that JarBuilderTask touches (at least) the direct dependencies of targets (in
+ # the case of resources). One of these tasks could implement an FingerprintStrategy that
+ # would attempt to hash the relevant dependencies, but that is really error prone, and
+ # this task is more than fast enough to re-run (JarTool "copies" pre-zipped data from input
+ # zip/jar files).
+ with self.invalidated(self.context.targets(is_jvm_library),
+ invalidate_dependents=True) as invalidation_check:
with self.context.new_workunit(name='jar-create', labels=[WorkUnitLabel.MULTITOOL]):
jar_mapping = self.context.products.get('jars')
</patch>
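The comment in this patch carries the key reasoning: a task whose output folds in a target's direct dependencies (here, resources baked into the jar) must be re-run when those dependencies change, even though the target's own fingerprint is unchanged, which is what `invalidate_dependents=True` requests. Below is a minimal, self-contained sketch of that idea; it is illustrative only, not pants internals, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set
import hashlib

@dataclass
class Target:
    name: str
    sources: Dict[str, str]                        # path -> file content
    deps: List[str] = field(default_factory=list)  # names of direct dependencies

def own_fingerprint(t: Target) -> str:
    """Hash only the target's own sources (ignores its dependencies)."""
    h = hashlib.sha1()
    for path in sorted(t.sources):
        h.update(path.encode())
        h.update(t.sources[path].encode())
    return h.hexdigest()

def invalid_targets(graph: Dict[str, Target], old: Dict[str, str],
                    invalidate_dependents: bool) -> Set[str]:
    """Targets whose cached products can no longer be trusted."""
    out = {n for n, t in graph.items() if own_fingerprint(t) != old.get(n)}
    if invalidate_dependents:
        # Also invalidate anything that (transitively) depends on a changed target.
        grew = True
        while grew:
            grew = False
            for n, t in graph.items():
                if n not in out and any(d in out for d in t.deps):
                    out.add(n)
                    grew = True
    return out

graph = {
    "MyResources": Target("MyResources", {"hello.txt": "hi"}),
    "MyLibrary": Target("MyLibrary", {"MyClass.scala": "object MyClass"}, deps=["MyResources"]),
}
old = {n: own_fingerprint(t) for n, t in graph.items()}
graph["MyResources"].sources["hello.txt"] = "bye"   # only the resource changes

print(sorted(invalid_targets(graph, old, invalidate_dependents=False)))
# ['MyResources']  -> MyLibrary is not re-jarred, so its published jar keeps the old hello.txt
print(sorted(invalid_targets(graph, old, invalidate_dependents=True)))
# ['MyLibrary', 'MyResources']  -> the jar is rebuilt with the new resource content
```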

Instance: pandas-dev__pandas-4113

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: double figure when passing `by` to Series.hist
</issue>
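The report above is only a title. As a hedged illustration of the kind of call it describes, the sketch below draws a Series histogram grouped with `by=` and counts the open figures; the expectation of exactly one figure is an assumption made for illustration, not taken from the report (matplotlib must be installed):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

s = pd.Series(np.random.randn(30))
grouper = pd.Series(np.repeat(["a", "b", "c"], 10))

plt.close("all")
s.hist(by=grouper)               # grouped histograms, one subplot per group
print(len(plt.get_fignums()))    # one open figure expected; the title suggests an extra figure appears
```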
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 ## What is it
6 **pandas** is a Python package providing fast, flexible, and expressive data
7 structures designed to make working with "relational" or "labeled" data both
8 easy and intuitive. It aims to be the fundamental high-level building block for
9 doing practical, **real world** data analysis in Python. Additionally, it has
10 the broader goal of becoming **the most powerful and flexible open source data
11 analysis / manipulation tool available in any language**. It is already well on
12 its way toward this goal.
13
14 ## Main Features
15 Here are just a few of the things that pandas does well:
16
17 - Easy handling of [**missing data**][missing-data] (represented as
18 `NaN`) in floating point as well as non-floating point data
19 - Size mutability: columns can be [**inserted and
20 deleted**][insertion-deletion] from DataFrame and higher dimensional
21 objects
22 - Automatic and explicit [**data alignment**][alignment]: objects can
23 be explicitly aligned to a set of labels, or the user can simply
24 ignore the labels and let `Series`, `DataFrame`, etc. automatically
25 align the data for you in computations
26 - Powerful, flexible [**group by**][groupby] functionality to perform
27 split-apply-combine operations on data sets, for both aggregating
28 and transforming data
29 - Make it [**easy to convert**][conversion] ragged,
30 differently-indexed data in other Python and NumPy data structures
31 into DataFrame objects
32 - Intelligent label-based [**slicing**][slicing], [**fancy
33 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
34 large data sets
35 - Intuitive [**merging**][merging] and [**joining**][joining] data
36 sets
37 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
38 data sets
39 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
40 labels per tick)
41 - Robust IO tools for loading data from [**flat files**][flat-files]
42 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
43 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
44 - [**Time series**][timeseries]-specific functionality: date range
45 generation and frequency conversion, moving window statistics,
46 moving window linear regressions, date shifting and lagging, etc.
47
48
49 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
50 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
51 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
52 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
53 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
54 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
55 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
56 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
57 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
58 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
59 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
60 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
61 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
62 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
63 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
64 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
65 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
66 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
67
68 ## Where to get it
69 The source code is currently hosted on GitHub at:
70 http://github.com/pydata/pandas
71
72 Binary installers for the latest released version are available at the Python
73 package index
74
75 http://pypi.python.org/pypi/pandas/
76
77 And via `easy_install`:
78
79 ```sh
80 easy_install pandas
81 ```
82
83 or `pip`:
84
85 ```sh
86 pip install pandas
87 ```
88
89 ## Dependencies
90 - [NumPy](http://www.numpy.org): 1.6.1 or higher
91 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
92 - [pytz](http://pytz.sourceforge.net)
93 - Needed for time zone support with ``pandas.date_range``
94
95 ### Highly Recommended Dependencies
96 - [numexpr](http://code.google.com/p/numexpr/)
97 - Needed to accelerate some expression evaluation operations
98 - Required by PyTables
99 - numexpr version <= 2.1 is recommended as the current version (2.2.1) has a backward
100 incompatible change with PyTables < 3.0
101 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
102 - Needed to accelerate certain numerical operations
103
104 ### Optional dependencies
105 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
106 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
107 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
108 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
109 - [statsmodels](http://statsmodels.sourceforge.net/)
110 - Needed for parts of `pandas.stats`
111 - [openpyxl](http://packages.python.org/openpyxl/), [xlrd/xlwt](http://www.python-excel.org/)
112 - openpyxl version 1.6.1 or higher, for writing .xlsx files
113 - xlrd >= 0.9.0
114 - Needed for Excel I/O
115 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
116 - One of the following combinations of libraries is needed to use the
117 top-level [`pandas.read_html`][read-html-docs] function:
118 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
119 recent version of [html5lib][html5lib] is okay.)
120 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
121 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
122 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
123 for reasons as to why you should probably **not** take this approach.
124
125 #### Notes about HTML parsing libraries
126 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
127 either [lxml][lxml] or [html5lib][html5lib] or both.
128 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
129 installed.
130 - You are strongly encouraged to read [HTML reading
131 gotchas][html-gotchas]. It explains issues surrounding the
132 installation and usage of the above three libraries.
133 - You may need to install an older version of
134 [BeautifulSoup4][BeautifulSoup4]:
135 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
136 32-bit Ubuntu/Debian
137 - Additionally, if you're using [Anaconda][Anaconda] you should
138 definitely read [the gotchas about HTML parsing][html-gotchas]
139 libraries
140 - If you're on a system with `apt-get` you can do
141
142 ```sh
143 sudo apt-get build-dep python-lxml
144 ```
145
146 to get the necessary dependencies for installation of [lxml][lxml].
147 This will prevent further headaches down the line.
148
149 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
150 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
151 [lxml]: http://lxml.de
152 [Anaconda]: https://store.continuum.io/cshop/anaconda
153 [NumPy]: http://numpy.scipy.org/
154 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
155 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
156
157 ## Installation from sources
158 To install pandas from source you need Cython in addition to the normal
159 dependencies above. Cython can be installed from pypi:
160
161 ```sh
162 pip install cython
163 ```
164
165 In the `pandas` directory (same one where you found this file after
166 cloning the git repo), execute:
167
168 ```sh
169 python setup.py install
170 ```
171
172 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
173
174 ```sh
175 python setup.py develop
176 ```
177
178 Alternatively, you can use `pip` if you want all the dependencies pulled
179 in automatically (the `-e` option is for installing it in [development
180 mode](http://www.pip-installer.org/en/latest/usage.html)):
181
182 ```sh
183 pip install -e .
184 ```
185
186 On Windows, you will need to install MinGW and execute:
187
188 ```sh
189 python setup.py build --compiler=mingw32
190 python setup.py install
191 ```
192
193 See http://pandas.pydata.org/ for more information.
194
195 ## License
196 BSD
197
198 ## Documentation
199 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
200
201 The Sphinx documentation should provide a good starting point for learning how
202 to use the library. Expect the docs to continue to expand as time goes on.
203
204 ## Background
205 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
206 has been under active development since then.
207
208 ## Discussion and Development
209 Since pandas development is related to a number of other scientific
210 Python projects, questions are welcome on the scipy-user mailing
211 list. Specialized discussions or design issues should take place on
212 the pystatsmodels mailing list / Google group, where
213 ``scikits.statsmodels`` and other libraries will also be discussed:
214
215 http://groups.google.com/group/pystatsmodels
216
[end of README.md]
[start of pandas/io/stata.py]
1 """
2 Module contains tools for processing Stata files into DataFrames
3
4 The StataReader below was originally written by Joe Presbrey as part of PyDTA.
5 It has been extended and improved by Skipper Seabold from the Statsmodels project
6 who also developed the StataWriter and was finally added to pandas in an once again
7 improved version.
8
9 You can find more information on http://presbrey.mit.edu/PyDTA and
10 http://statsmodels.sourceforge.net/devel/
11 """
12 # TODO: Fix this module so it can use cross-compatible zip, map, and range
13 import numpy as np
14
15 import sys
16 import struct
17 from pandas.core.base import StringMixin
18 from pandas.core.frame import DataFrame
19 from pandas.core.series import Series
20 from pandas.core.categorical import Categorical
21 import datetime
22 from pandas import compat
23 from pandas import compat
24 from pandas.compat import StringIO, long, lrange, lmap, lzip
25 from pandas import isnull
26 from pandas.io.parsers import _parser_params, Appender
27 from pandas.io.common import get_filepath_or_buffer, maybe_read_encoded_stream
28
29
30 _read_stata_doc = """
31 Read Stata file into DataFrame
32
33 %s
34 """ % (_parser_params)
35
36
37 @Appender(_read_stata_doc)
38 def read_stata(filepath_or_buffer, convert_dates=True, convert_categoricals=True, encoding=None, index=None):
39 reader = StataReader(filepath_or_buffer, encoding)
40
41 return reader.data(convert_dates, convert_categoricals, index)
42
43 _date_formats = ["%tc", "%tC", "%td", "%tw", "%tm", "%tq", "%th", "%ty"]
44
45 def _stata_elapsed_date_to_datetime(date, fmt):
46 """
47 Convert from SIF to datetime. http://www.stata.com/help.cgi?datetime
48
49 Parameters
50 ----------
51 date : int
52 The Stata Internal Format date to convert to datetime according to fmt
53 fmt : str
54 The format to convert to. Can be, tc, td, tw, tm, tq, th, ty
55
56 Examples
57 --------
58 >>> _stata_elapsed_date_to_datetime(52, "%tw")
datetime.datetime(1961, 1, 1, 0, 0)
59
60 Notes
61 -----
62 datetime/c - tc
63 milliseconds since 01jan1960 00:00:00.000, assuming 86,400 s/day
64 datetime/C - tC - NOT IMPLEMENTED
65 milliseconds since 01jan1960 00:00:00.000, adjusted for leap seconds
66 date - td
67 days since 01jan1960 (01jan1960 = 0)
68 weekly date - tw
69 weeks since 1960w1
70 This assumes 52 weeks in a year, then adds 7 * remainder of the weeks.
71 The datetime value is the start of the week in terms of days in the
72 year, not ISO calendar weeks.
73 monthly date - tm
74 months since 1960m1
75 quarterly date - tq
76 quarters since 1960q1
77 half-yearly date - th
78 half-years since 1960h1
79 yearly date - ty
80 years since 0000
81
82 If you don't have pandas with datetime support, then you can't do
83 milliseconds accurately.
84 """
85 #NOTE: we could run into overflow / loss of precision situations here
86 # casting to int, but I'm not sure what to do. datetime won't deal with
87 # numpy types and numpy datetime isn't mature enough / we can't rely on
88 # pandas version > 0.7.1
89 #TODO: IIRC relative delta doesn't play well with np.datetime?
90 if np.isnan(date):
91 return np.datetime64('nat')
92
93 date = int(date)
94 stata_epoch = datetime.datetime(1960, 1, 1)
95 if fmt in ["%tc", "tc"]:
96 from dateutil.relativedelta import relativedelta
97 return stata_epoch + relativedelta(microseconds=date * 1000)
98 elif fmt in ["%tC", "tC"]:
99 from warnings import warn
100 warn("Encountered %tC format. Leaving in Stata Internal Format.")
101 return date
102 elif fmt in ["%td", "td"]:
103 return stata_epoch + datetime.timedelta(int(date))
104 elif fmt in ["%tw", "tw"]: # does not count leap days - 7 days is a week
105 year = datetime.datetime(stata_epoch.year + date // 52, 1, 1)
106 day_delta = (date % 52) * 7
107 return year + datetime.timedelta(int(day_delta))
108 elif fmt in ["%tm", "tm"]:
109 year = stata_epoch.year + date // 12
110 month_delta = (date % 12) + 1
111 return datetime.datetime(year, month_delta, 1)
112 elif fmt in ["%tq", "tq"]:
113 year = stata_epoch.year + date // 4
114 month_delta = (date % 4) * 3 + 1
115 return datetime.datetime(year, month_delta, 1)
116 elif fmt in ["%th", "th"]:
117 year = stata_epoch.year + date // 2
118 month_delta = (date % 2) * 6 + 1
119 return datetime.datetime(year, month_delta, 1)
120 elif fmt in ["%ty", "ty"]:
121 if date > 0:
122 return datetime.datetime(date, 1, 1)
123 else: # don't do negative years bc can't mix dtypes in column
124 raise ValueError("Year 0 and before not implemented")
125 else:
126 raise ValueError("Date fmt %s not understood" % fmt)
127
128
129 def _datetime_to_stata_elapsed(date, fmt):
130 """
131 Convert from datetime to SIF. http://www.stata.com/help.cgi?datetime
132
133 Parameters
134 ----------
135 date : datetime.datetime
136 The date to convert to the Stata Internal Format given by fmt
137 fmt : str
138 The format to convert to. Can be, tc, td, tw, tm, tq, th, ty
139 """
140 if not isinstance(date, datetime.datetime):
141 raise ValueError("date should be datetime.datetime format")
142 stata_epoch = datetime.datetime(1960, 1, 1)
143 if fmt in ["%tc", "tc"]:
144 delta = date - stata_epoch
145 return (delta.days * 86400000 + delta.seconds*1000 +
146 delta.microseconds/1000)
147 elif fmt in ["%tC", "tC"]:
148 from warnings import warn
149 warn("Stata Internal Format tC not supported.")
150 return date
151 elif fmt in ["%td", "td"]:
152 return (date - stata_epoch).days
153 elif fmt in ["%tw", "tw"]:
154 return (52*(date.year-stata_epoch.year) +
155 (date - datetime.datetime(date.year, 1, 1)).days / 7)
156 elif fmt in ["%tm", "tm"]:
157 return (12 * (date.year - stata_epoch.year) + date.month - 1)
158 elif fmt in ["%tq", "tq"]:
159 return 4*(date.year-stata_epoch.year) + int((date.month - 1)/3)
160 elif fmt in ["%th", "th"]:
161 return 2 * (date.year - stata_epoch.year) + int(date.month > 6)
162 elif fmt in ["%ty", "ty"]:
163 return date.year
164 else:
165 raise ValueError("fmt %s not understood" % fmt)
166
167
168 class StataMissingValue(StringMixin):
169 """
170 An observation's missing value.
171
172 Parameters
173 -----------
174 offset
175 value
176
177 Attributes
178 ----------
179 string
180 value
181
182 Notes
183 -----
184 More information: <http://www.stata.com/help.cgi?missing>
185 """
186
187 def __init__(self, offset, value):
188 self._value = value
189 if type(value) is int or type(value) is long:
190 self._str = value - offset is 1 and \
191 '.' or ('.' + chr(value - offset + 96))
192 else:
193 self._str = '.'
194 string = property(lambda self: self._str, doc="The Stata representation of the missing value: '.', '.a'..'.z'")
195 value = property(lambda self: self._value, doc='The binary representation of the missing value.')
196
197 def __unicode__(self):
198 return self.string
199
200 def __repr__(self):
201 # not perfect :-/
202 return "%s(%s)" % (self.__class__, self)
203
204
205 class StataParser(object):
206 _default_encoding = 'cp1252'
207
208 def __init__(self, encoding=None):
209 self._encoding = encoding
210
211 #type code.
212 #--------------------
213 #str1 1 = 0x01
214 #str2 2 = 0x02
215 #...
216 #str244 244 = 0xf4
217 #byte 251 = 0xfb (sic)
218 #int 252 = 0xfc
219 #long 253 = 0xfd
220 #float 254 = 0xfe
221 #double 255 = 0xff
222 #--------------------
223 #NOTE: the byte type seems to be reserved for categorical variables
224 # with a label, but the underlying variable is -127 to 100
225 # we're going to drop the label and cast to int
226 self.DTYPE_MAP = \
227 dict(
228 lzip(range(1, 245), ['a' + str(i) for i in range(1, 245)]) +
229 [
230 (251, np.int16),
231 (252, np.int32),
232 (253, np.int64),
233 (254, np.float32),
234 (255, np.float64)
235 ]
236 )
237 self.TYPE_MAP = lrange(251) + list('bhlfd')
238 #NOTE: technically, some of these are wrong. there are more numbers
239 # that can be represented. it's the 27 ABOVE and BELOW the max listed
240 # numeric data type in [U] 12.2.2 of the 11.2 manual
241 self.MISSING_VALUES = \
242 {
243 'b': (-127, 100),
244 'h': (-32767, 32740),
245 'l': (-2147483647, 2147483620),
246 'f': (-1.701e+38, +1.701e+38),
247 'd': (-1.798e+308, +8.988e+307)
248 }
249
250 self.OLD_TYPE_MAPPING = \
251 {
252 'i': 252,
253 'f': 254,
254 'b': 251
255 }
256
257 def _decode_bytes(self, str, errors=None):
258 if compat.PY3 or self._encoding is not None:
259 return str.decode(self._encoding, errors)
260 else:
261 return str
262
263
264 class StataReader(StataParser):
265 """
266 Class for working with a Stata dataset. There are two possibilities for usage:
267
268 * The from_dta() method on the DataFrame class.
269 This will return a DataFrame with the Stata dataset. Note that when using the
270 from_dta() method, you will not have access to meta-information like variable
271 labels or the data label.
272
273 * Work with this object directly. Upon instantiation, the header of the Stata data
274 file is read, giving you access to attributes like variable_labels(), data_label(),
275 nobs(), ... A DataFrame with the data is returned by the read() method; this will
276 also fill up the value_labels. Note that calling the value_labels() method will
277 result in an error if the read() method has not been called yet. This is because
278 the value labels are stored at the end of a Stata dataset, after the data.
279
280 Parameters
281 ----------
282 path_or_buf : string or file-like object
283 Path to .dta file or object implementing a binary read() function
284 encoding : string, None or encoding
285 Encoding used to parse the files. Note that Stata doesn't
286 support unicode. None defaults to cp1252.
287 """
288
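    # A minimal usage sketch of the "work with this object directly" pattern described
    # in the docstring above (illustrative comments only; "example.dta" is a hypothetical
    # file name, and value_labels() is only valid after data() has been called):
    #
    #     reader = StataReader("example.dta")
    #     df = reader.data(convert_dates=True, convert_categoricals=True)
    #     labels = reader.value_labels()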
289 def __init__(self, path_or_buf, encoding='cp1252'):
290 super(StataReader, self).__init__(encoding)
291 self.col_sizes = ()
292 self._has_string_data = False
293 self._missing_values = False
294 self._data_read = False
295 self._value_labels_read = False
296 if isinstance(path_or_buf, str):
297 path_or_buf, encoding = get_filepath_or_buffer(path_or_buf, encoding='cp1252')
298
299 if isinstance(path_or_buf, (str, compat.text_type, bytes)):
300 self.path_or_buf = open(path_or_buf, 'rb')
301 else:
302 self.path_or_buf = path_or_buf
303
304 self._read_header()
305
306 def _read_header(self):
307 # header
308 self.format_version = struct.unpack('b', self.path_or_buf.read(1))[0]
309 if self.format_version not in [104, 105, 108, 113, 114, 115]:
310 raise ValueError("Version of given Stata file is not 104, 105, 108, 113 (Stata 8/9), 114 (Stata 10/11) or 115 (Stata 12)")
311 self.byteorder = self.path_or_buf.read(1) == 0x1 and '>' or '<'
312 self.filetype = struct.unpack('b', self.path_or_buf.read(1))[0]
313 self.path_or_buf.read(1) # unused
314
315 self.nvar = struct.unpack(self.byteorder + 'H', self.path_or_buf.read(2))[0]
316 self.nobs = struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0]
317 if self.format_version > 105:
318 self.data_label = self.path_or_buf.read(81)
319 else:
320 self.data_label = self.path_or_buf.read(32)
321 if self.format_version > 104:
322 self.time_stamp = self.path_or_buf.read(18)
323
324 # descriptors
325 if self.format_version > 108:
326 typlist = [ord(self.path_or_buf.read(1)) for i in range(self.nvar)]
327 else:
328 typlist = [self.OLD_TYPE_MAPPING[self._decode_bytes(self.path_or_buf.read(1))] for i in range(self.nvar)]
329
330 try:
331 self.typlist = [self.TYPE_MAP[typ] for typ in typlist]
332 except:
333 raise ValueError("cannot convert stata types [{0}]".format(','.join(typlist)))
334 try:
335 self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
336 except:
337 raise ValueError("cannot convert stata dtypes [{0}]".format(','.join(typlist)))
338
339 if self.format_version > 108:
340 self.varlist = [self._null_terminate(self.path_or_buf.read(33)) for i in range(self.nvar)]
341 else:
342 self.varlist = [self._null_terminate(self.path_or_buf.read(9)) for i in range(self.nvar)]
343 self.srtlist = struct.unpack(self.byteorder + ('h' * (self.nvar + 1)), self.path_or_buf.read(2 * (self.nvar + 1)))[:-1]
344 if self.format_version > 113:
345 self.fmtlist = [self._null_terminate(self.path_or_buf.read(49)) for i in range(self.nvar)]
346 elif self.format_version > 104:
347 self.fmtlist = [self._null_terminate(self.path_or_buf.read(12)) for i in range(self.nvar)]
348 else:
349 self.fmtlist = [self._null_terminate(self.path_or_buf.read(7)) for i in range(self.nvar)]
350 if self.format_version > 108:
351 self.lbllist = [self._null_terminate(self.path_or_buf.read(33)) for i in range(self.nvar)]
352 else:
353 self.lbllist = [self._null_terminate(self.path_or_buf.read(9)) for i in range(self.nvar)]
354 if self.format_version > 105:
355 self.vlblist = [self._null_terminate(self.path_or_buf.read(81)) for i in range(self.nvar)]
356 else:
357 self.vlblist = [self._null_terminate(self.path_or_buf.read(32)) for i in range(self.nvar)]
358
359 # ignore expansion fields (Format 105 and later)
360 # When reading, read five bytes; the last four bytes now tell you the
361 # size of the next read, which you discard. You then continue like
362 # this until you read 5 bytes of zeros.
363
364 if self.format_version > 104:
365 while True:
366 data_type = struct.unpack(self.byteorder + 'b', self.path_or_buf.read(1))[0]
367 if self.format_version > 108:
368 data_len = struct.unpack(self.byteorder + 'i', self.path_or_buf.read(4))[0]
369 else:
370 data_len = struct.unpack(self.byteorder + 'h', self.path_or_buf.read(2))[0]
371 if data_type == 0:
372 break
373 self.path_or_buf.read(data_len)
374
375 # necessary data to continue parsing
376 self.data_location = self.path_or_buf.tell()
377 self.has_string_data = len([x for x in self.typlist if type(x) is int]) > 0
378 self._col_size()
379
380 def _calcsize(self, fmt):
381 return type(fmt) is int and fmt or struct.calcsize(self.byteorder + fmt)
382
383 def _col_size(self, k=None):
384 """Calculate size of a data record."""
385 if len(self.col_sizes) == 0:
386 self.col_sizes = lmap(lambda x: self._calcsize(x), self.typlist)
387 if k is None:
388 return self.col_sizes
389 else:
390 return self.col_sizes[k]
391
392 def _unpack(self, fmt, byt):
393 d = struct.unpack(self.byteorder + fmt, byt)[0]
394 if fmt[-1] in self.MISSING_VALUES:
395 nmin, nmax = self.MISSING_VALUES[fmt[-1]]
396 if d < nmin or d > nmax:
397 if self._missing_values:
398 return StataMissingValue(nmax, d)
399 else:
400 return None
401 return d
402
403 def _null_terminate(self, s):
404 if compat.PY3 or self._encoding is not None: # have bytes not strings, so must decode
405 null_byte = b"\0"
406 try:
407 s = s[:s.index(null_byte)]
408 except:
409 pass
410 return s.decode(self._encoding or self._default_encoding)
411 else:
412 null_byte = "\0"
413 try:
414 return s.lstrip(null_byte)[:s.index(null_byte)]
415 except:
416 return s
417
418 def _next(self):
419 typlist = self.typlist
420 if self.has_string_data:
421 data = [None] * self.nvar
422 for i in range(len(data)):
423 if type(typlist[i]) is int:
424 data[i] = self._null_terminate(self.path_or_buf.read(typlist[i]))
425 else:
426 data[i] = self._unpack(typlist[i], self.path_or_buf.read(self._col_size(i)))
427 return data
428 else:
429 return list(map(lambda i: self._unpack(typlist[i],
430 self.path_or_buf.read(self._col_size(i))),
431 range(self.nvar)))
432
433 def _dataset(self):
434 """
435 Returns a Python generator object for iterating over the dataset.
436
437
438 Parameters
439 ----------
440
441 Returns
442 -------
443 Generator object for iterating over the dataset. Yields each row of
444 observations as a list by default.
445
446 Notes
447 -----
448 If missing_values is True during instantiation of StataReader then
449 observations with _StataMissingValue(s) are not filtered and should
450 be handled by your application.
451 """
452
453 try:
454 self._file.seek(self._data_location)
455 except Exception:
456 pass
457
458 for i in range(self.nobs):
459 yield self._next()
460
461 def _read_value_labels(self):
462 if not self._data_read:
463 raise Exception("Data has not been read. Because of the layout of Stata files, this is necessary before reading value labels.")
464 if self._value_labels_read:
465 raise Exception("Value labels have already been read.")
466
467 self.value_label_dict = dict()
468
469 if self.format_version <= 108:
470 return # Value labels are not supported in version 108 and earlier.
471
472 while True:
473 slength = self.path_or_buf.read(4)
474 if not slength:
475 break # end of variable label table
476 labname = self._null_terminate(self.path_or_buf.read(33))
477 self.path_or_buf.read(3) # padding
478
479 n = struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0]
480 txtlen = struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0]
481 off = []
482 for i in range(n):
483 off.append(struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0])
484 val = []
485 for i in range(n):
486 val.append(struct.unpack(self.byteorder + 'I', self.path_or_buf.read(4))[0])
487 txt = self.path_or_buf.read(txtlen)
488 self.value_label_dict[labname] = dict()
489 for i in range(n):
490 self.value_label_dict[labname][val[i]] = self._null_terminate(txt[off[i]:])
491 self._value_labels_read = True
492
493 def data(self, convert_dates=True, convert_categoricals=True, index=None):
494 """
495 Reads observations from Stata file, converting them into a dataframe
496
497 Parameters
498 ----------
499 convert_dates : boolean, defaults to True
500 Convert date variables to DataFrame time values
501 convert_categoricals : boolean, defaults to True
502 Read value labels and convert columns to Categorical/Factor variables
503 index : identifier of index column
504 identifier of column that should be used as index of the DataFrame
505
506 Returns
507 -------
508 y : DataFrame instance
509 """
510 if self._data_read:
511 raise Exception("Data has already been read.")
512 self._data_read = True
513
514 stata_dta = self._dataset()
515
516 data = []
517 for rownum, line in enumerate(stata_dta):
518 # doesn't handle missing value objects, just casts
519 # None will only work without missing value object.
520 for i, val in enumerate(line):
521 #NOTE: This will only be scalar types because missing strings
522 # are empty not None in Stata
523 if val is None:
524 line[i] = np.nan
525 data.append(tuple(line))
526
527 if convert_categoricals:
528 self._read_value_labels()
529
530 data = DataFrame(data, columns=self.varlist, index=index)
531
532 cols_ = np.where(self.dtyplist)[0]
533 for i in cols_:
534 if self.dtyplist[i] is not None:
535 col = data.columns[i]
536 if data[col].dtype is not np.dtype(object):
537 data[col] = Series(data[col], data[col].index, self.dtyplist[i])
538
539 if convert_dates:
540 cols = np.where(lmap(lambda x: x in _date_formats, self.fmtlist))[0]
541 for i in cols:
542 col = data.columns[i]
543 data[col] = data[col].apply(_stata_elapsed_date_to_datetime, args=(self.fmtlist[i],))
544
545 if convert_categoricals:
546 cols = np.where(lmap(lambda x: x in compat.iterkeys(self.value_label_dict), self.lbllist))[0]
547 for i in cols:
548 col = data.columns[i]
549 labeled_data = np.copy(data[col])
550 labeled_data = labeled_data.astype(object)
551 for k, v in compat.iteritems(self.value_label_dict[self.lbllist[i]]):
552 labeled_data[(data[col] == k).values] = v
553 data[col] = Categorical.from_array(labeled_data)
554
555 return data
556
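    # Annotation for this excerpt (not part of the original module): a minimal
    # usage sketch, assuming StataReader is constructed from a path-like
    # argument and that a readable file exists at the hypothetical path
    # 'example.dta':
    #     reader = StataReader('example.dta')
    #     df = reader.data(convert_dates=True, convert_categoricals=True)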
557 def data_label(self):
558 """Returns data label of Stata file"""
559 return self.data_label
560
561 def variable_labels(self):
562         """Returns variable labels as a dict, associating each variable name with its corresponding label"""
563 return dict(zip(self.varlist, self.vlblist))
564
565 def value_labels(self):
566         """Returns a dict that associates each variable name with a dict mapping each value to its corresponding label"""
567 if not self._value_labels_read:
568 self._read_value_labels()
569
570 return self.value_label_dict
571
572
573 def _open_file_binary_write(fname, encoding):
574 if hasattr(fname, 'write'):
575 #if 'b' not in fname.mode:
576 return fname
577 return open(fname, "wb")
578
579
580 def _set_endianness(endianness):
581 if endianness.lower() in ["<", "little"]:
582 return "<"
583 elif endianness.lower() in [">", "big"]:
584 return ">"
585 else: # pragma : no cover
586 raise ValueError("Endianness %s not understood" % endianness)
587
588
589 def _pad_bytes(name, length):
590 """
591     Takes a char string and pads it with null bytes until it is `length` chars long
592 """
593 return name + "\x00" * (length - len(name))
594
595
596 def _default_names(nvar):
597 """
598 Returns default Stata names v1, v2, ... vnvar
599 """
600 return ["v%d" % i for i in range(1, nvar+1)]
601
602
603 def _convert_datetime_to_stata_type(fmt):
604 """
605 Converts from one of the stata date formats to a type in TYPE_MAP
606 """
607 if fmt in ["tc", "%tc", "td", "%td", "tw", "%tw", "tm", "%tm", "tq",
608 "%tq", "th", "%th", "ty", "%ty"]:
609 return np.float64 # Stata expects doubles for SIFs
610 else:
611 raise ValueError("fmt %s not understood" % fmt)
612
613
614 def _maybe_convert_to_int_keys(convert_dates, varlist):
615 new_dict = {}
616 for key in convert_dates:
617 if not convert_dates[key].startswith("%"): # make sure proper fmts
618 convert_dates[key] = "%" + convert_dates[key]
619 if key in varlist:
620 new_dict.update({varlist.index(key): convert_dates[key]})
621 else:
622 if not isinstance(key, int):
623                 raise ValueError("convert_dates key is not in varlist and is not an int")
624 new_dict.update({key: convert_dates[key]})
625 return new_dict
626
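# Annotation for this excerpt (not part of the original module): a worked
# example of the key conversion above, using hypothetical column names:
#     _maybe_convert_to_int_keys({'when': 'td'}, ['id', 'when'])
#     -> {1: '%td'}   # '%' is prepended and the name is replaced by its index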
627
628 def _dtype_to_stata_type(dtype):
629 """
630 Converts dtype types to stata types. Returns the byte of the given ordinal.
631 See TYPE_MAP and comments for an explanation. This is also explained in
632 the dta spec.
633 1 - 244 are strings of this length
634 251 - chr(251) - for int8 and int16, byte
635 252 - chr(252) - for int32, int
636 253 - chr(253) - for int64, long
637 254 - chr(254) - for float32, float
638 255 - chr(255) - double, double
639
640 If there are dates to convert, then dtype will already have the correct
641 type inserted.
642 """
643 #TODO: expand to handle datetime to integer conversion
644 if dtype.type == np.string_:
645 return chr(dtype.itemsize)
646 elif dtype.type == np.object_: # try to coerce it to the biggest string
647 # not memory efficient, what else could we do?
648 return chr(244)
649 elif dtype == np.float64:
650 return chr(255)
651 elif dtype == np.float32:
652 return chr(254)
653 elif dtype == np.int64:
654 return chr(253)
655 elif dtype == np.int32:
656 return chr(252)
657 elif dtype == np.int8 or dtype == np.int16:
658 return chr(251)
659 else: # pragma : no cover
660 raise ValueError("Data type %s not currently understood. "
661 "Please report an error to the developers." % dtype)
662
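# Annotation for this excerpt (not part of the original module): the mapping
# above applied to two dtypes, assuming numpy is imported as np:
#     _dtype_to_stata_type(np.dtype(np.int32))  -> chr(252)
#     _dtype_to_stata_type(np.dtype('S10'))     -> chr(10)   # 10-char string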
663
664 def _dtype_to_default_stata_fmt(dtype):
665 """
666 Maps numpy dtype to stata's default format for this type. Not terribly
667 important since users can change this in Stata. Semantics are
668
669 string -> "%DDs" where DD is the length of the string
670 float64 -> "%10.0g"
671 float32 -> "%9.0g"
672 int64 -> "%9.0g"
673 int32 -> "%12.0g"
674 int16 -> "%8.0g"
675 int8 -> "%8.0g"
676 """
677 #TODO: expand this to handle a default datetime format?
678 if dtype.type == np.string_:
679 return "%" + str(dtype.itemsize) + "s"
680 elif dtype.type == np.object_:
681 return "%244s"
682 elif dtype == np.float64:
683 return "%10.0g"
684 elif dtype == np.float32:
685 return "%9.0g"
686 elif dtype == np.int64:
687 return "%9.0g"
688 elif dtype == np.int32:
689 return "%12.0g"
690 elif dtype == np.int8 or dtype == np.int16:
691 return "%8.0g"
692 else: # pragma : no cover
693 raise ValueError("Data type %s not currently understood. "
694 "Please report an error to the developers." % dtype)
695
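# Annotation for this excerpt (not part of the original module): the same two
# dtypes run through the default-format mapping above:
#     _dtype_to_default_stata_fmt(np.dtype(np.int32))  -> "%12.0g"
#     _dtype_to_default_stata_fmt(np.dtype('S10'))     -> "%10s"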
696
697 class StataWriter(StataParser):
698 """
699 A class for writing Stata binary dta files from array-like objects
700
701 Parameters
702 ----------
703 fname : file path or buffer
704 Where to save the dta file.
705 data : array-like
706 Array-like input to save. Pandas objects are also accepted.
707 convert_dates : dict
708 Dictionary mapping column of datetime types to the stata internal
709 format that you want to use for the dates. Options are
710 'tc', 'td', 'tm', 'tw', 'th', 'tq', 'ty'. Column can be either a
711 number or a name.
712 encoding : str
713 Default is latin-1. Note that Stata does not support unicode.
714 byteorder : str
715 Can be ">", "<", "little", or "big". The default is None which uses
716 `sys.byteorder`
717
718 Returns
719 -------
720 writer : StataWriter instance
721 The StataWriter instance has a write_file method, which will
722 write the file to the given `fname`.
723
724 Examples
725 --------
726 >>> writer = StataWriter('./data_file.dta', data)
727 >>> writer.write_file()
728
729 Or with dates
730
731 >>> writer = StataWriter('./date_data_file.dta', date, {2 : 'tw'})
732 >>> writer.write_file()
733 """
734 def __init__(self, fname, data, convert_dates=None, write_index=True, encoding="latin-1",
735 byteorder=None):
736 super(StataWriter, self).__init__(encoding)
737 self._convert_dates = convert_dates
738 self._write_index = write_index
739 # attach nobs, nvars, data, varlist, typlist
740 self._prepare_pandas(data)
741
742 if byteorder is None:
743 byteorder = sys.byteorder
744 self._byteorder = _set_endianness(byteorder)
745 self._file = _open_file_binary_write(fname, self._encoding or self._default_encoding)
746 self.type_converters = {253: np.long, 252: int}
747
748 def _write(self, to_write):
749 """
750 Helper to call encode before writing to file for Python 3 compat.
751 """
752 if compat.PY3:
753 self._file.write(to_write.encode(self._encoding or self._default_encoding))
754 else:
755 self._file.write(to_write)
756
757 def _prepare_pandas(self, data):
758 #NOTE: we might need a different API / class for pandas objects so
759 # we can set different semantics - handle this with a PR to pandas.io
760 class DataFrameRowIter(object):
761 def __init__(self, data):
762 self.data = data
763
764 def __iter__(self):
765 for i, row in data.iterrows():
766 yield row
767
768 if self._write_index:
769 data = data.reset_index()
770 self.datarows = DataFrameRowIter(data)
771 self.nobs, self.nvar = data.shape
772 self.data = data
773 self.varlist = data.columns.tolist()
774 dtypes = data.dtypes
775 if self._convert_dates is not None:
776 self._convert_dates = _maybe_convert_to_int_keys(self._convert_dates, self.varlist)
777 for key in self._convert_dates:
778 new_type = _convert_datetime_to_stata_type(self._convert_dates[key])
779 dtypes[key] = np.dtype(new_type)
780 self.typlist = [_dtype_to_stata_type(dt) for dt in dtypes]
781 self.fmtlist = [_dtype_to_default_stata_fmt(dt) for dt in dtypes]
782 # set the given format for the datetime cols
783 if self._convert_dates is not None:
784 for key in self._convert_dates:
785 self.fmtlist[key] = self._convert_dates[key]
786
787 def write_file(self):
788 self._write_header()
789 self._write_descriptors()
790 self._write_variable_labels()
791 # write 5 zeros for expansion fields
792 self._write(_pad_bytes("", 5))
793 if self._convert_dates is None:
794 self._write_data_nodates()
795 else:
796 self._write_data_dates()
797 #self._write_value_labels()
798 self._file.close()
799
800 def _write_header(self, data_label=None, time_stamp=None):
801 byteorder = self._byteorder
802 # ds_format - just use 114
803 self._file.write(struct.pack("b", 114))
804 # byteorder
805 self._write(byteorder == ">" and "\x01" or "\x02")
806 # filetype
807 self._write("\x01")
808 # unused
809 self._write("\x00")
810 # number of vars, 2 bytes
811 self._file.write(struct.pack(byteorder+"h", self.nvar)[:2])
812 # number of obs, 4 bytes
813 self._file.write(struct.pack(byteorder+"i", self.nobs)[:4])
814 # data label 81 bytes, char, null terminated
815 if data_label is None:
816 self._file.write(self._null_terminate(_pad_bytes("", 80)))
817 else:
818 self._file.write(self._null_terminate(_pad_bytes(data_label[:80], 80)))
819 # time stamp, 18 bytes, char, null terminated
820 # format dd Mon yyyy hh:mm
821 if time_stamp is None:
822 time_stamp = datetime.datetime.now()
823 elif not isinstance(time_stamp, datetime):
824 raise ValueError("time_stamp should be datetime type")
825 self._file.write(self._null_terminate(time_stamp.strftime("%d %b %Y %H:%M")))
826
827 def _write_descriptors(self, typlist=None, varlist=None, srtlist=None,
828 fmtlist=None, lbllist=None):
829 nvar = self.nvar
830 # typlist, length nvar, format byte array
831 for typ in self.typlist:
832 self._write(typ)
833
834 # varlist, length 33*nvar, char array, null terminated
835 for name in self.varlist:
836 name = self._null_terminate(name, True)
837 name = _pad_bytes(name[:32], 33)
838 self._write(name)
839
840 # srtlist, 2*(nvar+1), int array, encoded by byteorder
841 srtlist = _pad_bytes("", (2*(nvar+1)))
842 self._write(srtlist)
843
844 # fmtlist, 49*nvar, char array
845 for fmt in self.fmtlist:
846 self._write(_pad_bytes(fmt, 49))
847
848 # lbllist, 33*nvar, char array
849 #NOTE: this is where you could get fancy with pandas categorical type
850 for i in range(nvar):
851 self._write(_pad_bytes("", 33))
852
853 def _write_variable_labels(self, labels=None):
854 nvar = self.nvar
855 if labels is None:
856 for i in range(nvar):
857 self._write(_pad_bytes("", 81))
858
859 def _write_data_nodates(self):
860 data = self.datarows
861 byteorder = self._byteorder
862 TYPE_MAP = self.TYPE_MAP
863 typlist = self.typlist
864 for row in data:
865 #row = row.squeeze().tolist() # needed for structured arrays
866 for i, var in enumerate(row):
867 typ = ord(typlist[i])
868 if typ <= 244: # we've got a string
869 if len(var) < typ:
870 var = _pad_bytes(var, typ)
871 self._write(var)
872 else:
873 try:
874 self._file.write(struct.pack(byteorder + TYPE_MAP[typ], var))
875 except struct.error:
876 # have to be strict about type pack won't do any
877 # kind of casting
878 self._file.write(struct.pack(byteorder+TYPE_MAP[typ],
879 self.type_converters[typ](var)))
880
881 def _write_data_dates(self):
882 convert_dates = self._convert_dates
883 data = self.datarows
884 byteorder = self._byteorder
885 TYPE_MAP = self.TYPE_MAP
886 MISSING_VALUES = self.MISSING_VALUES
887 typlist = self.typlist
888 for row in data:
889 #row = row.squeeze().tolist() # needed for structured arrays
890 for i, var in enumerate(row):
891 typ = ord(typlist[i])
892 #NOTE: If anyone finds this terribly slow, there is
893 # a vectorized way to convert dates, see genfromdta for going
894 # from int to datetime and reverse it. will copy data though
895 if i in convert_dates:
896 var = _datetime_to_stata_elapsed(var, self.fmtlist[i])
897 if typ <= 244: # we've got a string
898 if len(var) < typ:
899 var = _pad_bytes(var, typ)
900 self._write(var)
901 else:
902 if isnull(var): # this only matters for floats
903 var = MISSING_VALUES[typ]
904 self._file.write(struct.pack(byteorder+TYPE_MAP[typ], var))
905
906 def _null_terminate(self, s, as_string=False):
907 null_byte = '\x00'
908 if compat.PY3 and not as_string:
909 s += null_byte
910 return s.encode(self._encoding)
911 else:
912 s += null_byte
913 return s
914
[end of pandas/io/stata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
Repository: pandas-dev/pandas
Base commit: ea63f36fa3745160e37638db7b1eb81cf9139c9f
Problem statement: BUG: double figure when passing by to Series.hist
Hints: fixed, but a bit of a hack. could use some unification in the plotting api argument passing
Created at: 2013-07-03T13:56:31Z
Patch:
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -435,6 +435,8 @@ Bug Fixes
- Bug in getitem with a duplicate index when using where (:issue:`4879`)
- Fix Type inference code coerces float column into datetime (:issue:`4601`)
- Fixed ``_ensure_numeric`` does not check for complex numbers (:issue:`4902`)
+ - Fixed a bug in ``Series.hist`` where two figures were being created when
+ the ``by`` argument was passed (:issue:`4112`, :issue:`4113`).
pandas 0.12.0
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -2042,15 +2042,16 @@ def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None,
"""
import matplotlib.pyplot as plt
- fig = kwds.get('figure', _gcf()
- if plt.get_fignums() else plt.figure(figsize=figsize))
- if figsize is not None and tuple(figsize) != tuple(fig.get_size_inches()):
- fig.set_size_inches(*figsize, forward=True)
-
if by is None:
- if kwds.get('layout', None):
+ if kwds.get('layout', None) is not None:
raise ValueError("The 'layout' keyword is not supported when "
"'by' is None")
+ # hack until the plotting interface is a bit more unified
+ fig = kwds.pop('figure', plt.gcf() if plt.get_fignums() else
+ plt.figure(figsize=figsize))
+ if (figsize is not None and tuple(figsize) !=
+ tuple(fig.get_size_inches())):
+ fig.set_size_inches(*figsize, forward=True)
if ax is None:
ax = fig.gca()
elif ax.get_figure() != fig:
</patch>
Instance: pandas-dev__pandas-29173

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash during groupby quantile
#### Code Sample, a copy-pastable example if possible
```python
dtf = dtf.groupby(cut(dtf[X], rng)).quantile(.5)
```
#### Problem description
Process: Python [24642]
Path: /Library/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python
Version: 3.7.4 (3.7.4)
Code Type: X86-64 (Native)
Parent Process: Python [24593]
Responsible: iTerm2 [1703]
User ID: 501
Date/Time: 2019-10-09 17:11:04.949 -0500
OS Version: Mac OS X 10.15 (19A583)
Report Version: 12
Bridge OS Version: 3.0 (14Y904)
Anonymous UUID: F986CCB3-5DD1-9587-8492-6D8B8A43979D
Sleep/Wake UUID: 42F77302-9822-4979-89CB-7C39F3C0556A
Time Awake Since Boot: 67000 seconds
Time Since Wake: 1900 seconds
System Integrity Protection: enabled
Crashed Thread: 7
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x000000013a6dcff8
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [24642]
VM Regions Near 0x13a6dcff8:
MALLOC_LARGE 000000013a5ee000-000000013a633000 [ 276K] rw-/rwx SM=PRV
-->
MALLOC_LARGE 000000013a6dd000-000000013a74d000 [ 448K] rw-/rwx SM=PRV
0 groupby.cpython-37m-darwin.so 0x000000011b1596cf **__pyx_fuse_9__pyx_pw_6pandas_5_libs_7groupby_125group_quantile** + 6719
1 algos.cpython-37m-darwin.so 0x0000000119e8937c __pyx_FusedFunction_call + 812
<details>
Thread 0:: Dispatch queue: com.apple.main-thread
0 org.python.python 0x000000010d924bbf _PyEval_EvalFrameDefault + 1423
1 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
2 org.python.python 0x000000010d92d8c2 call_function + 738
3 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
4 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
5 org.python.python 0x000000010d92d8c2 call_function + 738
6 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
7 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
8 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
9 org.python.python 0x000000010d92d8c2 call_function + 738
10 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
15 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
16 org.python.python 0x000000010d92d8c2 call_function + 738
17 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
18 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
19 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
20 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
21 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
22 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
23 org.python.python 0x000000010d92d8c2 call_function + 738
24 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
25 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
26 org.python.python 0x000000010d92d8c2 call_function + 738
27 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
28 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
29 org.python.python 0x000000010d924554 PyEval_EvalCode + 100
30 org.python.python 0x000000010d961c31 PyRun_FileExFlags + 209
31 org.python.python 0x000000010d9614aa PyRun_SimpleFileExFlags + 890
32 org.python.python 0x000000010d980903 pymain_main + 6915
33 org.python.python 0x000000010d980e6a _Py_UnixMain + 58
34 libdyld.dylib 0x00007fff731e2405 start + 1
Thread 1:
0 libsystem_pthread.dylib 0x00007fff733eb5b4 start_wqthread + 0
Thread 2:
0 libsystem_kernel.dylib 0x00007fff7333359e poll + 10
1 select.cpython-37m-darwin.so 0x000000010e0c1982 poll_poll + 466
2 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
3 org.python.python 0x000000010d874d42 _PyMethodDescr_FastCallKeywords + 82
4 org.python.python 0x000000010d92d8ec call_function + 780
5 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
6 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
7 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
8 org.python.python 0x000000010d92d8c2 call_function + 738
9 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
10 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
11 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
18 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
19 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
20 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
21 org.python.python 0x000000010d92d8c2 call_function + 738
22 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
23 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
24 org.python.python 0x000000010d92d8c2 call_function + 738
25 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
26 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
27 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
28 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
29 org.python.python 0x000000010d86e707 PyObject_Call + 135
30 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
31 org.python.python 0x000000010d96c939 pythread_wrapper + 25
32 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
33 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 3:
0 libsystem_kernel.dylib 0x00007fff7332e8f6 __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff733ef082 _pthread_cond_wait + 701
2 org.python.python 0x000000010d96ce01 PyThread_acquire_lock_timed + 673
3 org.python.python 0x000000010d9b620f acquire_timed + 111
4 org.python.python 0x000000010d9b6320 lock_PyThread_acquire_lock + 48
5 org.python.python 0x000000010d86f1dd _PyMethodDef_RawFastCallKeywords + 685
6 org.python.python 0x000000010d874d42 _PyMethodDescr_FastCallKeywords + 82
7 org.python.python 0x000000010d92d8ec call_function + 780
8 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
9 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
10 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
11 org.python.python 0x000000010d92d8c2 call_function + 738
12 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
13 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
14 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d92d8c2 call_function + 738
19 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
20 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
21 org.python.python 0x000000010d92d8c2 call_function + 738
22 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
23 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
24 org.python.python 0x000000010d92d8c2 call_function + 738
25 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
26 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
27 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
28 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
29 org.python.python 0x000000010d86e707 PyObject_Call + 135
30 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
31 org.python.python 0x000000010d96c939 pythread_wrapper + 25
32 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
33 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 4:
0 libsystem_kernel.dylib 0x00007fff7332b146 mach_msg_trap + 10
1 libsystem_kernel.dylib 0x00007fff7332b6ac mach_msg + 60
2 com.apple.CoreFoundation 0x00007fff3bee419b __CFRunLoopServiceMachPort + 322
3 com.apple.CoreFoundation 0x00007fff3bee3737 __CFRunLoopRun + 1695
4 com.apple.CoreFoundation 0x00007fff3bee2e13 CFRunLoopRunSpecific + 499
5 com.apple.CoreFoundation 0x00007fff3bee2bea CFRunLoopRun + 40
6 _watchdog_fsevents.cpython-37m-darwin.so 0x00000001323a5915 watchdog_read_events + 149
7 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
19 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
20 org.python.python 0x000000010d86e707 PyObject_Call + 135
21 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
22 org.python.python 0x000000010d96c939 pythread_wrapper + 25
23 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
24 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 5:
0 libsystem_kernel.dylib 0x00007fff7332b146 mach_msg_trap + 10
1 libsystem_kernel.dylib 0x00007fff7332b6ac mach_msg + 60
2 com.apple.CoreFoundation 0x00007fff3bee419b __CFRunLoopServiceMachPort + 322
3 com.apple.CoreFoundation 0x00007fff3bee3737 __CFRunLoopRun + 1695
4 com.apple.CoreFoundation 0x00007fff3bee2e13 CFRunLoopRunSpecific + 499
5 com.apple.CoreFoundation 0x00007fff3bee2bea CFRunLoopRun + 40
6 _watchdog_fsevents.cpython-37m-darwin.so 0x00000001323a5915 watchdog_read_events + 149
7 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
19 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
20 org.python.python 0x000000010d86e707 PyObject_Call + 135
21 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
22 org.python.python 0x000000010d96c939 pythread_wrapper + 25
23 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
24 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 6:
0 libsystem_kernel.dylib 0x00007fff7332b146 mach_msg_trap + 10
1 libsystem_kernel.dylib 0x00007fff7332b6ac mach_msg + 60
2 com.apple.CoreFoundation 0x00007fff3bee419b __CFRunLoopServiceMachPort + 322
3 com.apple.CoreFoundation 0x00007fff3bee3737 __CFRunLoopRun + 1695
4 com.apple.CoreFoundation 0x00007fff3bee2e13 CFRunLoopRunSpecific + 499
5 com.apple.CoreFoundation 0x00007fff3bee2bea CFRunLoopRun + 40
6 _watchdog_fsevents.cpython-37m-darwin.so 0x00000001323a5915 watchdog_read_events + 149
7 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
19 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
20 org.python.python 0x000000010d86e707 PyObject_Call + 135
21 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
22 org.python.python 0x000000010d96c939 pythread_wrapper + 25
23 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
24 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 7 Crashed:
0 groupby.cpython-37m-darwin.so 0x000000011b1596cf __pyx_fuse_9__pyx_pw_6pandas_5_libs_7groupby_125group_quantile + 6719
1 algos.cpython-37m-darwin.so 0x0000000119e8937c __pyx_FusedFunction_call + 812
2 org.python.python 0x000000010d86e707 PyObject_Call + 135
3 org.python.python 0x000000010d9a45c0 partial_call + 256
4 org.python.python 0x000000010d86e707 PyObject_Call + 135
5 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
6 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
7 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
8 org.python.python 0x000000010d92d8c2 call_function + 738
9 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
10 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
11 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
14 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
15 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
16 org.python.python 0x000000010d92d8c2 call_function + 738
17 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
18 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
19 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
20 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
21 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
22 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
23 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
24 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
25 org.python.python 0x000000010d92d8c2 call_function + 738
26 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
27 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
28 org.python.python 0x000000010d92d8c2 call_function + 738
29 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
30 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
31 org.python.python 0x000000010d92d8c2 call_function + 738
32 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
33 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
34 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
35 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
36 org.python.python 0x000000010d8bc926 slot_tp_call + 150
37 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
38 org.python.python 0x000000010d92d784 call_function + 420
39 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
40 org.python.python 0x000000010d87ca5e gen_send_ex + 206
41 org.python.python 0x000000010d92a06f _PyEval_EvalFrameDefault + 23103
42 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
43 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
44 org.python.python 0x000000010d92d8c2 call_function + 738
45 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
46 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
47 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
48 org.python.python 0x000000010d92d8c2 call_function + 738
49 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
50 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
51 org.python.python 0x000000010d92d8c2 call_function + 738
52 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
53 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
54 org.python.python 0x000000010d92d8c2 call_function + 738
55 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
56 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
57 org.python.python 0x000000010d92d8c2 call_function + 738
58 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
59 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
60 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
61 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
62 org.python.python 0x000000010d8bdd61 slot_tp_init + 145
63 org.python.python 0x000000010d8b9749 type_call + 297
64 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
65 org.python.python 0x000000010d92d784 call_function + 420
66 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
67 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
68 org.python.python 0x000000010d92d8c2 call_function + 738
69 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
70 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
71 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
72 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
73 org.python.python 0x000000010d86e707 PyObject_Call + 135
74 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
75 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
76 org.python.python 0x000000010d92d8c2 call_function + 738
77 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
78 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
79 org.python.python 0x000000010d92d8c2 call_function + 738
80 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
81 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
82 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
83 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
84 org.python.python 0x000000010d86e707 PyObject_Call + 135
85 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
86 org.python.python 0x000000010d96c939 pythread_wrapper + 25
87 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
88 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 8:
0 _multiarray_umath.cpython-37m-darwin.so 0x000000010ea13664 BOOL_logical_not + 612
1 _multiarray_umath.cpython-37m-darwin.so 0x000000010eab05a1 trivial_two_operand_loop + 273
2 _multiarray_umath.cpython-37m-darwin.so 0x000000010eaa9140 PyUFunc_GenericFunction + 15792
3 _multiarray_umath.cpython-37m-darwin.so 0x000000010eaabc68 ufunc_generic_call + 136
4 org.python.python 0x000000010d86def9 _PyObject_FastCallDict + 297
5 org.python.python 0x000000010d87014c object_vacall + 316
6 org.python.python 0x000000010d870334 PyObject_CallFunctionObjArgs + 148
7 org.python.python 0x000000010d86f225 _PyMethodDef_RawFastCallKeywords + 757
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
13 org.python.python 0x000000010d86f3df _PyObject_FastCall_Prepend + 127
14 org.python.python 0x000000010d8c0316 slot_nb_invert + 134
15 org.python.python 0x000000010d925152 _PyEval_EvalFrameDefault + 2850
16 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
17 org.python.python 0x000000010d92d8c2 call_function + 738
18 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
19 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
20 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
21 org.python.python 0x000000010d92d8c2 call_function + 738
22 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
23 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
24 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
25 org.python.python 0x000000010d92d8c2 call_function + 738
26 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
27 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
28 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
29 org.python.python 0x000000010d92d8c2 call_function + 738
30 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
31 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
32 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
33 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
34 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
35 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
36 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
37 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
38 org.python.python 0x000000010d92d8c2 call_function + 738
39 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
40 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
41 org.python.python 0x000000010d92d8c2 call_function + 738
42 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
43 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
44 org.python.python 0x000000010d92d8c2 call_function + 738
45 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
46 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
47 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
48 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
49 org.python.python 0x000000010d8bc926 slot_tp_call + 150
50 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
51 org.python.python 0x000000010d92d784 call_function + 420
52 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
53 org.python.python 0x000000010d87ca5e gen_send_ex + 206
54 org.python.python 0x000000010d92a06f _PyEval_EvalFrameDefault + 23103
55 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
56 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
57 org.python.python 0x000000010d92d8c2 call_function + 738
58 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
59 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
60 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
61 org.python.python 0x000000010d92d8c2 call_function + 738
62 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
63 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
64 org.python.python 0x000000010d92d8c2 call_function + 738
65 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
66 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
67 org.python.python 0x000000010d92d8c2 call_function + 738
68 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
69 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
70 org.python.python 0x000000010d92d8c2 call_function + 738
71 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
72 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
73 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
74 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
75 org.python.python 0x000000010d8bdd61 slot_tp_init + 145
76 org.python.python 0x000000010d8b9749 type_call + 297
77 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
78 org.python.python 0x000000010d92d784 call_function + 420
79 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
80 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
81 org.python.python 0x000000010d92d8c2 call_function + 738
82 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
83 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
84 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
85 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
86 org.python.python 0x000000010d86e707 PyObject_Call + 135
87 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
88 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
89 org.python.python 0x000000010d92d8c2 call_function + 738
90 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
91 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
92 org.python.python 0x000000010d92d8c2 call_function + 738
93 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
94 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
95 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
96 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
97 org.python.python 0x000000010d86e707 PyObject_Call + 135
98 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
99 org.python.python 0x000000010d96c939 pythread_wrapper + 25
100 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
101 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 7 crashed with X86 Thread State (64-bit):
rax: 0x00007fc23e50d830 rbx: 0xffffffffffffffff rcx: 0x0000000148926000 rdx: 0x0000000131c66000
rdi: 0x000000000013dc10 rsi: 0xfffffffffffffff8 rbp: 0x0000700006b28490 rsp: 0x0000700006b27df0
r8: 0x0000000000000008 r9: 0x0000000000000008 r10: 0x0000000000000008 r11: 0x0000000000000001
r12: 0x000000013a6dd000 r13: 0x0000700006b282f8 r14: 0x00000001352d0760 r15: 0x000000000000880b
rip: 0x000000011b1596cf rfl: 0x0000000000010282 cr2: 0x000000013a6dcff8
Logical CPU: 4
Error Code: 0x00000006 (no mapping for user data read)
Trap Number: 14
Binary Images:
0x10d844000 - 0x10d844fff +org.python.python (3.7.4 - 3.7.4) <4B030EC4-815E-34B7-90E7-D0720C31E072> /Library/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
0x10d84d000 - 0x10da26fff +org.python.python (3.7.4, [c] 2001-2019 Python Software Foundation. - 3.7.4) <AC1AEBEB-FF5A-32AD-BAE0-C6A0BCA86B84> /Library/Frameworks/Python.framework/Versions/3.7/Python
0x10de68000 - 0x10de69fff +_heapq.cpython-37m-darwin.so (0) <E8B35F18-1B5A-3C9E-B1F4-0BE0432459A2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_heapq.cpython-37m-darwin.so
0x10deed000 - 0x10def0ff7 +zlib.cpython-37m-darwin.so (0) <993EF100-1498-3D6A-91FD-79558CAC8F13> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/zlib.cpython-37m-darwin.so
0x10df42000 - 0x10df43ff7 +_bz2.cpython-37m-darwin.so (0) <F89816AF-0BA9-3228-BAE7-54BA0D68EF67> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_bz2.cpython-37m-darwin.so
0x10df47000 - 0x10df77ff7 +_lzma.cpython-37m-darwin.so (0) <AEA78736-809A-3F3E-A2A3-BDA83B0ECBA8> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_lzma.cpython-37m-darwin.so
0x10df81000 - 0x10df81fff +grp.cpython-37m-darwin.so (0) <CF2821DC-6D7D-36C4-9F67-5D20E43D70B2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/grp.cpython-37m-darwin.so
0x10df93000 - 0x10df96ffb +_comb.cpython-37m-darwin.so (0) <007AB3F6-F95A-3A84-A311-9A47B603490E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_comb.cpython-37m-darwin.so
0x10df9b000 - 0x10df9bff7 +_raw_ecb.cpython-37m-darwin.so (???) <8F4B4796-E875-304A-84A1-1612D5965846> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ecb.cpython-37m-darwin.so
0x10e01d000 - 0x10e021fff +_struct.cpython-37m-darwin.so (0) <2379780F-4AB4-394B-B5AB-55A517D6627E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_struct.cpython-37m-darwin.so
0x10e02a000 - 0x10e02dff7 +binascii.cpython-37m-darwin.so (0) <58A5F4AD-285A-35E3-90C4-08A3D3D14BF2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/binascii.cpython-37m-darwin.so
0x10e0ba000 - 0x10e0bbff7 +_posixsubprocess.cpython-37m-darwin.so (0) <11920A4C-3AD4-3C87-95E5-418D30950610> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_posixsubprocess.cpython-37m-darwin.so
0x10e0bf000 - 0x10e0c2fff +select.cpython-37m-darwin.so (0) <473A1E84-EAC7-30DD-A0C0-111ECA9BC60A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/select.cpython-37m-darwin.so
0x10e0c8000 - 0x10e0ccfff +math.cpython-37m-darwin.so (0) <C780CA87-2A8D-342E-930E-7EDBB84B3896> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/math.cpython-37m-darwin.so
0x10e113000 - 0x10e121ff7 +_datetime.cpython-37m-darwin.so (0) <C1603837-F8C7-3FFF-8C6B-D527535D7535> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_datetime.cpython-37m-darwin.so
0x10e12d000 - 0x10e156ff7 +pyexpat.cpython-37m-darwin.so (0) <DFD21217-38D1-329A-844A-67778791E921> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/pyexpat.cpython-37m-darwin.so
0x10e1e9000 - 0x10e1ebff7 +_hashlib.cpython-37m-darwin.so (0) <A6066959-BCC0-3790-9FB2-8B8A9ECBF097> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_hashlib.cpython-37m-darwin.so
0x10e1f0000 - 0x10e248ff7 +libssl.1.1.dylib (0) <1DF55B16-0F3A-3620-A4C8-6CEDF39B9620> /Library/Frameworks/Python.framework/Versions/3.7/lib/libssl.1.1.dylib
0x10e271000 - 0x10e4871df +libcrypto.1.1.dylib (0) <34708DE8-CBA8-3112-91FA-3333E07F30DB> /Library/Frameworks/Python.framework/Versions/3.7/lib/libcrypto.1.1.dylib
0x10e517000 - 0x10e51cff7 +_blake2.cpython-37m-darwin.so (0) <5D4A9B1B-FE9F-34EA-BD75-7B3CDDBB7CD0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_blake2.cpython-37m-darwin.so
0x10e521000 - 0x10e531ff7 +_sha3.cpython-37m-darwin.so (0) <E32B9196-5FD3-38FF-BF4E-EF74519A0AFA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_sha3.cpython-37m-darwin.so
0x10e537000 - 0x10e537ff7 +_bisect.cpython-37m-darwin.so (0) <A4FCF31A-2AA6-3EAC-AF46-2F2D10EC1AB1> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_bisect.cpython-37m-darwin.so
0x10e53a000 - 0x10e53bff7 +_random.cpython-37m-darwin.so (0) <7E1DAB2E-F4F2-3DDD-BD85-C74BC8983933> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_random.cpython-37m-darwin.so
0x10e53f000 - 0x10e548ff7 +_socket.cpython-37m-darwin.so (0) <7B684803-C0A8-34D7-81CE-7A4EE7DEA614> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_socket.cpython-37m-darwin.so
0x10e615000 - 0x10e615ff7 +_opcode.cpython-37m-darwin.so (0) <11A650B3-FF7B-3DF1-81E2-A906553221C9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_opcode.cpython-37m-darwin.so
0x10e6c3000 - 0x10e6c4ff3 +_zeros.cpython-37m-darwin.so (0) <05F50EFF-5388-3F0E-8034-D9031383D3AA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_zeros.cpython-37m-darwin.so
0x10e6c7000 - 0x10e6c7ff7 +_raw_cbc.cpython-37m-darwin.so (???) <B161CC1C-8823-32C3-A77F-125C1F15F391> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_cbc.cpython-37m-darwin.so
0x10e814000 - 0x10e814ff3 +_api_implementation.cpython-37m-darwin.so (0) <7AD2BE44-57F1-385A-AD04-ECF361EFBF65> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/protobuf/internal/_api_implementation.cpython-37m-darwin.so
0x10e817000 - 0x10e817fff +_raw_cfb.cpython-37m-darwin.so (???) <D1F530FC-1F2F-3868-BF2C-6A3E1CA296E0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_cfb.cpython-37m-darwin.so
0x10e8db000 - 0x10e8e0ff3 +messagestream.cpython-37m-darwin.so (0) <892D9031-1B21-35AE-9E89-8684E88BE576> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/_lib/messagestream.cpython-37m-darwin.so
0x10e8e7000 - 0x10e8e7ff7 +_raw_ofb.cpython-37m-darwin.so (???) <F59104DC-4122-34B0-92E5-5A2989E14249> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ofb.cpython-37m-darwin.so
0x10e8ea000 - 0x10e8eaff7 +_strxor.cpython-37m-darwin.so (???) <3C58F5A3-8D98-33B2-814F-0EBBC5F20333> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Util/_strxor.cpython-37m-darwin.so
0x10e8ec000 - 0x10eb61fff +_multiarray_umath.cpython-37m-darwin.so (0) <671D7C13-F80F-39BB-AAAC-7812A00AF0AD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/core/_multiarray_umath.cpython-37m-darwin.so
0x10ec73000 - 0x112701797 +libopenblasp-r0.3.7.dev.dylib (0) <0E19F9FE-2367-3794-9260-55F4BB058EF2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libopenblasp-r0.3.7.dev.dylib
0x112945000 - 0x112a5cff7 +libgfortran.3.dylib (0) <9ABE5EDE-AD43-391A-9E54-866711FAC32A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libgfortran.3.dylib
0x112ac0000 - 0x112af6fff +libquadmath.0.dylib (0) <7FFA409F-FB04-3B64-BE9A-3E3A494C975E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libquadmath.0.dylib
0x112b05000 - 0x112b1aff7 +libgcc_s.1.dylib (0) <7C6D7CB7-82DB-3290-8181-07646FEA1F80> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libgcc_s.1.dylib
0x118ba5000 - 0x118bb8ff7 +_pickle.cpython-37m-darwin.so (0) <9C74285E-75A9-33BD-8836-AE129AFA3A86> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_pickle.cpython-37m-darwin.so
0x118cc4000 - 0x118cc4fff +_cpuid_c.cpython-37m-darwin.so (???) <E61506B0-F069-3A2D-847B-4006A2DBD5BF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Util/_cpuid_c.cpython-37m-darwin.so
0x118d06000 - 0x118d13fff +_multiarray_tests.cpython-37m-darwin.so (0) <79FE98ED-E4E1-30CE-8345-D110F170574F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/core/_multiarray_tests.cpython-37m-darwin.so
0x118d23000 - 0x118d33fff +_ctypes.cpython-37m-darwin.so (0) <B0740DFD-2C92-3A81-9E85-B7CAA9F7EF67> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_ctypes.cpython-37m-darwin.so
0x118dc4000 - 0x118dc5ff7 +lapack_lite.cpython-37m-darwin.so (0) <69D4AA05-FED8-3329-97EF-5F1D0B0C7D4D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/linalg/lapack_lite.cpython-37m-darwin.so
0x118dc9000 - 0x118de2fff +_umath_linalg.cpython-37m-darwin.so (0) <F2C3E3AE-7A1D-3981-B31A-DC92F46EAE81> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/linalg/_umath_linalg.cpython-37m-darwin.so
0x118eb0000 - 0x118ef4ff7 +_decimal.cpython-37m-darwin.so (0) <F035ADB0-3946-309B-8C35-E789BD3A7696> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_decimal.cpython-37m-darwin.so
0x118f53000 - 0x118f62ff7 +pocketfft_internal.cpython-37m-darwin.so (0) <190AE76A-4D5F-3035-A1FC-6205292B1543> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/fft/pocketfft_internal.cpython-37m-darwin.so
0x118fa6000 - 0x119018fff +mtrand.cpython-37m-darwin.so (0) <7A0F0AE5-72B5-3D7B-B3B8-475F664F9AFA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/mtrand.cpython-37m-darwin.so
0x119069000 - 0x1190a1fff +common.cpython-37m-darwin.so (0) <7A31D2A9-7507-3A37-B23B-C63CD062B806> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/common.cpython-37m-darwin.so
0x1190b6000 - 0x119114ff7 +bounded_integers.cpython-37m-darwin.so (0) <8A5547BC-C82A-3E41-8320-A788E2DC1801> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/bounded_integers.cpython-37m-darwin.so
0x119136000 - 0x11914aff7 +mt19937.cpython-37m-darwin.so (0) <BC393547-41A0-3F0F-9652-201F8B610385> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/mt19937.cpython-37m-darwin.so
0x119156000 - 0x119175ff7 +bit_generator.cpython-37m-darwin.so (0) <9AF84E7A-4923-34C9-9430-788D70CCB66B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/bit_generator.cpython-37m-darwin.so
0x119190000 - 0x1191b2ff7 +entropy.cpython-37m-darwin.so (0) <A97081D3-BB5C-3BD4-962E-5B2A0C72FD26> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/entropy.cpython-37m-darwin.so
0x1191c8000 - 0x1191d5ff7 +philox.cpython-37m-darwin.so (0) <488F375C-7017-38DF-BA7B-74AF2913019F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/philox.cpython-37m-darwin.so
0x1191e0000 - 0x1191ebfff +pcg64.cpython-37m-darwin.so (0) <BF7967AA-BF0B-3BF9-8EAE-AAF5A73302FB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/pcg64.cpython-37m-darwin.so
0x1191f6000 - 0x1191feff7 +sfc64.cpython-37m-darwin.so (0) <6CB1F36F-C4FC-3CE7-B5BE-0FA005F65E2C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/sfc64.cpython-37m-darwin.so
0x119208000 - 0x11928bfff +generator.cpython-37m-darwin.so (0) <868CE861-C95A-383B-935D-942F28314F69> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/generator.cpython-37m-darwin.so
0x11933d000 - 0x119348ff7 +_flinalg.cpython-37m-darwin.so (0) <9C1F46F8-2DA2-3943-8DBA-D6BF8932E0B7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_flinalg.cpython-37m-darwin.so
0x1194d2000 - 0x119508ffb +conversion.cpython-37m-darwin.so (0) <7E026496-33EB-37FB-B2B1-9E59112C1202> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/conversion.cpython-37m-darwin.so
0x119523000 - 0x119551fff +c_timestamp.cpython-37m-darwin.so (0) <6103BAF4-3AF5-3352-B179-4DE0A932BFF1> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/c_timestamp.cpython-37m-darwin.so
0x1195ac000 - 0x1195cbfff +nattype.cpython-37m-darwin.so (0) <2443C3F9-7228-3839-B38B-B0FDCD9B921B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/nattype.cpython-37m-darwin.so
0x1195e5000 - 0x1195ecff7 +np_datetime.cpython-37m-darwin.so (0) <746367F4-693A-3E0E-B820-2114D5BC93E2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/np_datetime.cpython-37m-darwin.so
0x1195f4000 - 0x11961aff3 +timezones.cpython-37m-darwin.so (0) <286416F5-248C-3583-A79A-37E3E95770D2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/timezones.cpython-37m-darwin.so
0x119671000 - 0x1196b1ff7 +tzconversion.cpython-37m-darwin.so (0) <1CCBA52F-B8E6-32C2-BF26-7D6A4365B415> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/tzconversion.cpython-37m-darwin.so
0x1196cb000 - 0x119722ffb +timedeltas.cpython-37m-darwin.so (0) <9474E801-68EE-3035-87C0-FAB39EFFDC50> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/timedeltas.cpython-37m-darwin.so
0x119750000 - 0x119799ffb +offsets.cpython-37m-darwin.so (0) <F8F7176B-2E00-347F-B839-BF92D81E0CA2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/offsets.cpython-37m-darwin.so
0x1197c8000 - 0x1197d0fff +ccalendar.cpython-37m-darwin.so (0) <39FE416F-D8BE-3B91-8324-09787BBDFFE5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/ccalendar.cpython-37m-darwin.so
0x11981b000 - 0x119868ff3 +strptime.cpython-37m-darwin.so (0) <4F4ED8D8-D2C7-3B18-91AE-6C592082E58A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/strptime.cpython-37m-darwin.so
0x1198d0000 - 0x1198fbff3 +fields.cpython-37m-darwin.so (0) <58EF15EB-CAF3-3AF8-BAFF-F6DD9456E8EE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/fields.cpython-37m-darwin.so
0x119915000 - 0x119963ff3 +parsing.cpython-37m-darwin.so (0) <F1038D43-02A4-350A-93A7-110FBDC5EEAA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/parsing.cpython-37m-darwin.so
0x11998d000 - 0x1199e1fff +period.cpython-37m-darwin.so (0) <509F2D3F-95A5-39CE-A910-B67DE9FC8930> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/period.cpython-37m-darwin.so
0x119a4b000 - 0x119a5efff +frequencies.cpython-37m-darwin.so (0) <E8080AA8-3BEC-3D95-AE57-52DDE03AAC30> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/frequencies.cpython-37m-darwin.so
0x119a6f000 - 0x119aa6ffb +timestamps.cpython-37m-darwin.so (0) <E9B5D2EF-6108-3BD1-9E5F-911D411D13F3> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/timestamps.cpython-37m-darwin.so
0x119ace000 - 0x119af9ff3 +resolution.cpython-37m-darwin.so (0) <616CA3A1-5E97-3E9B-8544-A58C13CA8FB6> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/resolution.cpython-37m-darwin.so
0x119b17000 - 0x119b8cfff +hashtable.cpython-37m-darwin.so (0) <6515F28C-9FA4-3D61-89B6-E329E4C532A9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/hashtable.cpython-37m-darwin.so
0x119bbd000 - 0x119bcaffb +missing.cpython-37m-darwin.so (0) <D129B9BF-A6A5-3FE3-9FEF-867119DCCFD5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/missing.cpython-37m-darwin.so
0x119bd4000 - 0x119c3dff3 +lib.cpython-37m-darwin.so (0) <A1B28D8E-0A89-39F0-A4D3-FDEFAE6464E3> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/lib.cpython-37m-darwin.so
0x119cb6000 - 0x119cf0fff +tslib.cpython-37m-darwin.so (0) <B1872F61-F555-3139-AAE2-06F77C01DA82> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslib.cpython-37m-darwin.so
0x119d0c000 - 0x119e90fff +algos.cpython-37m-darwin.so (0) <EC939FA9-67B7-3D9F-8DDD-3E552DEA7F01> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/algos.cpython-37m-darwin.so
0x119f27000 - 0x11a0dcff7 +interval.cpython-37m-darwin.so (0) <60B34032-C2E3-3818-9D5C-D34771EEBC32> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/interval.cpython-37m-darwin.so
0x11a193000 - 0x11a19bfff +properties.cpython-37m-darwin.so (0) <BEE72926-AD95-3673-B647-C9A705E770A4> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/properties.cpython-37m-darwin.so
0x11a1e4000 - 0x11a200ff7 +hashing.cpython-37m-darwin.so (0) <88438F20-9EA1-31DB-BDF0-5C93BB68E102> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/hashing.cpython-37m-darwin.so
0x11a213000 - 0x11a238fff +ops.cpython-37m-darwin.so (0) <04703BD6-9898-308A-AC6F-29C8BE32156F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/ops.cpython-37m-darwin.so
0x11a3cd000 - 0x11a453ff3 +index.cpython-37m-darwin.so (0) <20F36BD4-9F1D-314B-8392-0D63CC742C43> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/index.cpython-37m-darwin.so
0x11a483000 - 0x11a6c4ff3 +join.cpython-37m-darwin.so (0) <344FAA02-156E-3C95-845E-E7619E27314E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/join.cpython-37m-darwin.so
0x11a845000 - 0x11a865fff +_elementpath.cpython-37m-darwin.so (???) <429F29F9-50B3-33CC-9E45-AEDA036695FB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/lxml/_elementpath.cpython-37m-darwin.so
0x11a900000 - 0x11a901fff +_check_build.cpython-37m-darwin.so (0) <B87447C3-40DD-37C4-9E59-11EC294212EE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/__check_build/_check_build.cpython-37m-darwin.so
0x11a906000 - 0x11a9c3ffb +sparse.cpython-37m-darwin.so (0) <526BE788-0E0D-3680-9B30-20D25684ECC5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/sparse.cpython-37m-darwin.so
0x11aa4b000 - 0x11aa4cff7 +_raw_ctr.cpython-37m-darwin.so (???) <212D5173-BF56-324C-BF96-75DA28AD6D41> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ctr.cpython-37m-darwin.so
0x11aa4f000 - 0x11aa4fff7 +_BLAKE2s.cpython-37m-darwin.so (???) <10F5EA93-0EB0-3E8A-A621-E0F5BEB14C86> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_BLAKE2s.cpython-37m-darwin.so
0x11aa52000 - 0x11aa54ff7 +_SHA1.cpython-37m-darwin.so (???) <29C10BF6-3352-3F5E-BB0B-663D9BA3B34E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_SHA1.cpython-37m-darwin.so
0x11aa57000 - 0x11aa5efff +minpack2.cpython-37m-darwin.so (0) <B547A37A-E772-3C2D-A07D-F944F3D89961> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/minpack2.cpython-37m-darwin.so
0x11aae4000 - 0x11aae9fff +_json.cpython-37m-darwin.so (0) <58573D55-4505-383C-89CE-7B16ED7981AD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_json.cpython-37m-darwin.so
0x11abee000 - 0x11abf3ff3 +indexing.cpython-37m-darwin.so (0) <3C9B3910-F390-33CB-B1A6-571B9D9AEF51> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/indexing.cpython-37m-darwin.so
0x11ac3a000 - 0x11ac67ffb +internals.cpython-37m-darwin.so (0) <795FFB1B-2E84-3251-A206-06D464DD4426> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/internals.cpython-37m-darwin.so
0x11acc1000 - 0x11acc2fff +_MD5.cpython-37m-darwin.so (???) <09E8FD08-E36A-35F9-807A-9EC5871408DE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_MD5.cpython-37m-darwin.so
0x11acc5000 - 0x11adc2fff +unicodedata.cpython-37m-darwin.so (0) <B4AE629C-6564-3E2E-9A6E-AE586EE0AD79> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/unicodedata.cpython-37m-darwin.so
0x11adc8000 - 0x11adcbfff +_csv.cpython-37m-darwin.so (0) <F629A3FE-5724-37C1-8940-6E5C172BFD77> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_csv.cpython-37m-darwin.so
0x11ae51000 - 0x11ae5eff7 +_ssl.cpython-37m-darwin.so (0) <D1740549-C698-31F9-95C7-88A38F5385F5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_ssl.cpython-37m-darwin.so
0x11aeaf000 - 0x11aeb1ff7 +mmap.cpython-37m-darwin.so (0) <BA9E74DF-BF4B-34B0-BC25-AF2E4712468A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/mmap.cpython-37m-darwin.so
0x11aef5000 - 0x11aef5ff7 +_scproxy.cpython-37m-darwin.so (0) <1C12C693-374D-3CDA-8235-D20E4F60F2D7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_scproxy.cpython-37m-darwin.so
0x11aef8000 - 0x11af25fff +reshape.cpython-37m-darwin.so (0) <3EFC5C55-6B9E-38BB-9CEA-F1AD738C57E2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/reshape.cpython-37m-darwin.so
0x11afbe000 - 0x11b04eff7 +window.cpython-37m-darwin.so (0) <FD91798B-305C-395E-98AD-DFD6E27FBCFB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/window.cpython-37m-darwin.so
0x11b07c000 - 0x11b08bff7 +skiplist.cpython-37m-darwin.so (0) <1C8A7441-A005-31E9-B1DC-2E12A64BE530> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/skiplist.cpython-37m-darwin.so
0x11b0d5000 - 0x11b1b3ff7 +groupby.cpython-37m-darwin.so (0) <38D47B27-F8F5-3209-A258-40AF5104539B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/groupby.cpython-37m-darwin.so
0x11b223000 - 0x11b266ff3 +reduction.cpython-37m-darwin.so (0) <1FDC291C-62F5-34E4-A080-528C5372305E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/reduction.cpython-37m-darwin.so
0x11b346000 - 0x11b3b1ff3 +parsers.cpython-37m-darwin.so (0) <D436D433-AFB0-30CB-94AC-EE18B760AA2C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/parsers.cpython-37m-darwin.so
0x11b41d000 - 0x11b42bfff +json.cpython-37m-darwin.so (0) <5457E458-ABB1-3B38-9B4D-0DF00198FA6A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/json.cpython-37m-darwin.so
0x11b435000 - 0x11b455ff3 +writers.cpython-37m-darwin.so (0) <63C148BE-23C9-35A1-B4CE-915F2DBAF243> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/writers.cpython-37m-darwin.so
0x11b46c000 - 0x11b46cffb +_move.cpython-37m-darwin.so (0) <9F92A2B0-79E4-3647-901D-37ECBF8387FE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/util/_move.cpython-37m-darwin.so
0x11b4af000 - 0x11b4baffb +_packer.cpython-37m-darwin.so (0) <502E1D7D-FF86-386D-A102-CDEC1D2A4614> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/io/msgpack/_packer.cpython-37m-darwin.so
0x11b4c5000 - 0x11b4d4ff7 +_unpacker.cpython-37m-darwin.so (0) <95ECB8B7-7DD6-3AB7-8C88-7A58EB442678> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/io/msgpack/_unpacker.cpython-37m-darwin.so
0x11b5e2000 - 0x11b5f1ff7 +testing.cpython-37m-darwin.so (0) <CCEF6A15-EB6A-39B1-B296-D271DBEC2F6E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/testing.cpython-37m-darwin.so
0x11b87a000 - 0x11b87bff3 +cprocessors.cpython-37m-darwin.so (0) <D739212E-3C17-39B1-B068-EE1948A71018> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sqlalchemy/cprocessors.cpython-37m-darwin.so
0x11b97f000 - 0x11b97ffff +cutils.cpython-37m-darwin.so (0) <7FC0CA82-C081-30D2-896C-E8E780682181> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sqlalchemy/cutils.cpython-37m-darwin.so
0x11ba02000 - 0x11ba03ff7 +cresultproxy.cpython-37m-darwin.so (0) <90BBE3F6-E1A5-3E3D-8FEF-7095622F11E2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sqlalchemy/cresultproxy.cpython-37m-darwin.so
0x11bd47000 - 0x11bd48ff7 +_queue.cpython-37m-darwin.so (0) <B9D80A7C-A744-3A24-AA10-1CEF3CFFD022> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_queue.cpython-37m-darwin.so
0x11bdcc000 - 0x11bdccfff +_uuid.cpython-37m-darwin.so (0) <4283C23E-E755-3642-9450-F25DED17AE4D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_uuid.cpython-37m-darwin.so
0x11be6e000 - 0x11befc93f dyld (732.8) <42C11B81-6928-369F-B03E-D57355572700> /usr/lib/dyld
0x11bfef000 - 0x11c2d9ff3 +cygrpc.cpython-37m-darwin.so (0) <60A0DCC9-ACBA-33CA-9005-DB149C9B8520> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_cython/cygrpc.cpython-37m-darwin.so
0x11c73a000 - 0x11c898ff7 +_message.cpython-37m-darwin.so (0) <9E0844FB-B4A0-3A08-9583-7EE6C3431BB2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/protobuf/pyext/_message.cpython-37m-darwin.so
0x11cda2000 - 0x11cda4ff3 +lgamma.cpython-37m-darwin.so (0) <B68C97ED-5D34-3885-B77F-0799388D8581> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/lgamma.cpython-37m-darwin.so
0x11cdaa000 - 0x11cdafff7 +array.cpython-37m-darwin.so (0) <7934FE3A-C258-3F4F-AD15-47D5BE9FCE15> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/array.cpython-37m-darwin.so
0x11cdb9000 - 0x11cdb9fff +_Salsa20.cpython-37m-darwin.so (???) <2C457652-7378-3C11-9BF6-EEF20A1ECC2D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_Salsa20.cpython-37m-darwin.so
0x11ce7d000 - 0x11ce7ffff +_SHA256.cpython-37m-darwin.so (???) <DADFF09A-A82C-31DB-AA28-58613525E993> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_SHA256.cpython-37m-darwin.so
0x11ce82000 - 0x11ce83ff7 +_scrypt.cpython-37m-darwin.so (???) <CDA31FE5-A642-3D51-A95E-A5274A12CF21> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Protocol/_scrypt.cpython-37m-darwin.so
0x11d106000 - 0x11d112ff7 +_ccallback_c.cpython-37m-darwin.so (0) <56789943-E473-3E97-B057-4D00CD59C800> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/_lib/_ccallback_c.cpython-37m-darwin.so
0x11d11e000 - 0x11d11efff +_ghash_portable.cpython-37m-darwin.so (???) <9308BC75-0900-3678-B259-8A8B8B96CC86> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_ghash_portable.cpython-37m-darwin.so
0x11d1a1000 - 0x11d1a8fff +_elementtree.cpython-37m-darwin.so (0) <BCBD7BDA-D6E4-3986-AE4F-BABD7C9F1B29> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_elementtree.cpython-37m-darwin.so
0x11d1f2000 - 0x11d1fbffb +moduleTNC.cpython-37m-darwin.so (0) <2514C81C-13FD-3A19-8658-0F5A796846E0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/moduleTNC.cpython-37m-darwin.so
0x11e5e5000 - 0x11e5e5fff +_ghash_clmul.cpython-37m-darwin.so (???) <245E6E62-37E7-37E1-8EF5-02F1D1F27BAA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_ghash_clmul.cpython-37m-darwin.so
0x11e5e8000 - 0x11e5e9ff7 +_raw_ocb.cpython-37m-darwin.so (???) <559E6F1E-78A8-3B46-87AA-B1A16C69643A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ocb.cpython-37m-darwin.so
0x11e5ec000 - 0x11e5ecfff +_ARC4.cpython-37m-darwin.so (???) <9A105AD4-8C53-3276-8F5E-A60AE4D07299> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_ARC4.cpython-37m-darwin.so
0x11e5ef000 - 0x11e5fbff3 +murmurhash.cpython-37m-darwin.so (0) <8A36719F-1606-3E51-9B60-4A105A38E8F9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/murmurhash.cpython-37m-darwin.so
0x11e61b000 - 0x11e61cff7 +_multiprocessing.cpython-37m-darwin.so (0) <31A2882A-FE2F-3243-BB8A-D24B0B99DD41> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_multiprocessing.cpython-37m-darwin.so
0x11eb23000 - 0x11eb6affb +libomp.dylib (0) <BC7C4D7D-BD45-3672-8D71-70A964A65AC1> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/.dylibs/libomp.dylib
0x11eb97000 - 0x11eb9bffb +mio_utils.cpython-37m-darwin.so (0) <88C81605-0DFE-389B-AD34-67A521FADBD0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/io/matlab/mio_utils.cpython-37m-darwin.so
0x11eba2000 - 0x11eba3fff +_speedups.cpython-37m-darwin.so (???) <E9B73517-643A-3FD1-8B18-600595CA2B65> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/markupsafe/_speedups.cpython-37m-darwin.so
0x11f1e6000 - 0x11f200ff3 +_tools.cpython-37m-darwin.so (0) <9DAA9185-9847-33AB-AEFF-8CC3D71E6E2D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_tools.cpython-37m-darwin.so
0x11f2a5000 - 0x11f2aaff7 +_datadir.cpython-37m-darwin.so (???) <61E3FE1E-4E30-3A18-933B-49A79C824B33> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_datadir.cpython-37m-darwin.so
0x11fb7f000 - 0x11ff80ff7 +_sparsetools.cpython-37m-darwin.so (0) <9B6DC85E-8A9B-38E6-8A0B-45476F0D491C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/_sparsetools.cpython-37m-darwin.so
0x12009e000 - 0x120101ffb +_csparsetools.cpython-37m-darwin.so (0) <4843EB5F-2CF8-3625-A14C-C2BB1C00AB43> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/_csparsetools.cpython-37m-darwin.so
0x12012d000 - 0x12017dff3 +_shortest_path.cpython-37m-darwin.so (0) <DDA3A585-1D73-39C8-9261-B78F2A102F1D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_shortest_path.cpython-37m-darwin.so
0x1201e6000 - 0x1201ffff7 +_traversal.cpython-37m-darwin.so (0) <C10079CB-2153-3AE6-AA41-802DF15E48CA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_traversal.cpython-37m-darwin.so
0x12020f000 - 0x12022cffb +_min_spanning_tree.cpython-37m-darwin.so (0) <DE0724C0-F449-3C3C-8D5E-3FFBBF8964A7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_min_spanning_tree.cpython-37m-darwin.so
0x120241000 - 0x120276fff +_reordering.cpython-37m-darwin.so (0) <7F605572-00DF-31A8-80EF-6FB38BEA5B82> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_reordering.cpython-37m-darwin.so
0x120297000 - 0x12031ffff +ckdtree.cpython-37m-darwin.so (0) <593DFC90-CA73-3A14-9DE3-7AD597879471> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/ckdtree.cpython-37m-darwin.so
0x120362000 - 0x120426ff7 +qhull.cpython-37m-darwin.so (0) <AF26961F-68AA-3833-A1C7-257D526EEA9D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/qhull.cpython-37m-darwin.so
0x12046c000 - 0x123efa797 +libopenblasp-r0.3.7.dev.dylib (0) <0E19F9FE-2367-3794-9260-55F4BB058EF2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libopenblasp-r0.3.7.dev.dylib
0x12413e000 - 0x124255ff7 +libgfortran.3.dylib (0) <9ABE5EDE-AD43-391A-9E54-866711FAC32A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libgfortran.3.dylib
0x1242b9000 - 0x1242effff +libquadmath.0.dylib (0) <7FFA409F-FB04-3B64-BE9A-3E3A494C975E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libquadmath.0.dylib
0x1242fe000 - 0x124313ff7 +libgcc_s.1.dylib (0) <7C6D7CB7-82DB-3290-8181-07646FEA1F80> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libgcc_s.1.dylib
0x12a31e000 - 0x12a338ff3 +_voronoi.cpython-37m-darwin.so (0) <C3ADEAE2-5485-3E7C-AC5F-C0FE5EF61FC5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/_voronoi.cpython-37m-darwin.so
0x12a34b000 - 0x12a358fff +_distance_wrap.cpython-37m-darwin.so (0) <A1F90DCF-4476-3B3D-9248-959CE54C3521> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/_distance_wrap.cpython-37m-darwin.so
0x12a362000 - 0x12a37dff3 +_hausdorff.cpython-37m-darwin.so (0) <BBF0B71A-E24B-30CC-B0CC-CEBECF2A84B0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/_hausdorff.cpython-37m-darwin.so
0x12a3d0000 - 0x12a429ff7 +_fblas.cpython-37m-darwin.so (0) <6BA9FD7D-004E-3374-90D4-C959B8F759EE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_fblas.cpython-37m-darwin.so
0x12a461000 - 0x12a532fff +_flapack.cpython-37m-darwin.so (0) <9730DD4E-02D3-37B7-9277-22E7080D1CBE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_flapack.cpython-37m-darwin.so
0x12a602000 - 0x12a626ff7 +_solve_toeplitz.cpython-37m-darwin.so (0) <D4FF436D-39FD-3B0D-B315-CB0027952B4C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_solve_toeplitz.cpython-37m-darwin.so
0x12a63f000 - 0x12a673ff3 +_decomp_update.cpython-37m-darwin.so (0) <E4FBDC23-F6F0-3743-920A-3AA7A21D8069> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_decomp_update.cpython-37m-darwin.so
0x12a6c9000 - 0x12a6eeff7 +cython_blas.cpython-37m-darwin.so (0) <7B73EDCD-02F8-3F72-91C8-3D57187F3476> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/cython_blas.cpython-37m-darwin.so
0x12a70b000 - 0x12a777ff7 +cython_lapack.cpython-37m-darwin.so (0) <E8047A89-0FFC-3019-A549-6242E04C5D04> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/cython_lapack.cpython-37m-darwin.so
0x12a7d5000 - 0x12a944ff7 +_ufuncs.cpython-37m-darwin.so (0) <231123C7-D03E-39E7-A40C-B2A4BDB656EB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_ufuncs.cpython-37m-darwin.so
0x12a9ba000 - 0x12a9ceff3 +_ufuncs_cxx.cpython-37m-darwin.so (0) <54656921-42D2-3BFC-B1A3-D89B37345272> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_ufuncs_cxx.cpython-37m-darwin.so
0x12aa2d000 - 0x12aaeafef +specfun.cpython-37m-darwin.so (0) <B4BF7EF8-2C94-3180-BE22-7D05DA004D35> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/specfun.cpython-37m-darwin.so
0x12aafd000 - 0x12ab0aff3 +_ellip_harm_2.cpython-37m-darwin.so (0) <FA8A764D-D157-3585-B135-EF1296C9142E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_ellip_harm_2.cpython-37m-darwin.so
0x12ab55000 - 0x12ab66ff7 +_vq.cpython-37m-darwin.so (0) <711944FB-2A8B-3F72-9BCB-51CB1DCE33E5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/cluster/_vq.cpython-37m-darwin.so
0x12ab72000 - 0x12abaeff7 +_hierarchy.cpython-37m-darwin.so (0) <39E0A236-8538-3948-B428-360A12A786AA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/cluster/_hierarchy.cpython-37m-darwin.so
0x12abd4000 - 0x12abffff3 +_optimal_leaf_ordering.cpython-37m-darwin.so (0) <A00A07F0-D477-34F4-870F-E1FC5BDDE84E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/cluster/_optimal_leaf_ordering.cpython-37m-darwin.so
0x12ad1f000 - 0x12ad52fff +_trlib.cpython-37m-darwin.so (0) <EDC680B5-E488-3DDE-AB77-40F03A2A9339> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_trlib/_trlib.cpython-37m-darwin.so
0x12ad6e000 - 0x12ad9bff7 +_iterative.cpython-37m-darwin.so (0) <D87A7BA8-D6D1-3C20-A8E3-6C271354B114> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/linalg/isolve/_iterative.cpython-37m-darwin.so
0x12adf3000 - 0x12ae40fff +_superlu.cpython-37m-darwin.so (0) <7B89A303-CDC4-31AE-BEA4-C7735BE91E78> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/linalg/dsolve/_superlu.cpython-37m-darwin.so
0x12ae57000 - 0x12aedcfff +_arpack.cpython-37m-darwin.so (0) <B00F585E-589F-3E6B-9065-8304F602AD6E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/linalg/eigen/arpack/_arpack.cpython-37m-darwin.so
0x12af3c000 - 0x12af58ffb +_group_columns.cpython-37m-darwin.so (0) <7B6FB61C-76E1-34E3-A5B4-1E0B7095D645> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_group_columns.cpython-37m-darwin.so
0x12afab000 - 0x12afc7ff7 +_lbfgsb.cpython-37m-darwin.so (0) <62513E45-54E0-3341-AAEF-7E2BF7EA3691> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_lbfgsb.cpython-37m-darwin.so
0x12afcd000 - 0x12afe9fff +_cobyla.cpython-37m-darwin.so (0) <7CDE2C8B-C8AA-3695-8F9F-BA00A0191866> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_cobyla.cpython-37m-darwin.so
0x12afee000 - 0x12b00bff7 +_slsqp.cpython-37m-darwin.so (0) <EDB0BB9F-0CEC-3B8E-A642-A312C2F90FAF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_slsqp.cpython-37m-darwin.so
0x12b010000 - 0x12b02dff7 +_minpack.cpython-37m-darwin.so (0) <6CFDC72D-829F-3920-8C24-AB25E2FF2B58> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_minpack.cpython-37m-darwin.so
0x12b032000 - 0x12b04bfff +givens_elimination.cpython-37m-darwin.so (0) <A884AB38-43EF-31C1-8DA2-E1163ACD577D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_lsq/givens_elimination.cpython-37m-darwin.so
0x12b09c000 - 0x12b0a4ff7 +_nnls.cpython-37m-darwin.so (0) <D61A8ED9-C639-380C-BE82-E4338D8EBAA0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_nnls.cpython-37m-darwin.so
0x12b0e9000 - 0x12b116ffb +_bglu_dense.cpython-37m-darwin.so (0) <B34FDC8A-C379-3C7A-999B-35722F6ADAE6> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_bglu_dense.cpython-37m-darwin.so
0x12b174000 - 0x12b189fc7 +_odepack.cpython-37m-darwin.so (0) <01715935-1352-3AE2-98D3-16CB731655E9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/_odepack.cpython-37m-darwin.so
0x12b18e000 - 0x12b1a5fd7 +_quadpack.cpython-37m-darwin.so (0) <F1217CB4-76B4-3DC8-A382-02790B7D8E22> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/_quadpack.cpython-37m-darwin.so
0x12b1ab000 - 0x12b1e1ff7 +vode.cpython-37m-darwin.so (0) <8264B1DF-E9B8-38B1-B9A8-F3806142ACBF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/vode.cpython-37m-darwin.so
0x12b1e9000 - 0x12b200fff +_dop.cpython-37m-darwin.so (0) <C4BBF863-FA3E-35BF-8092-14C10FEDE39D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/_dop.cpython-37m-darwin.so
0x12b207000 - 0x12b21efdf +lsoda.cpython-37m-darwin.so (0) <E97962D2-0742-3A3A-9395-35500BED7672> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/lsoda.cpython-37m-darwin.so
0x12b376000 - 0x12b3aaff7 +_fitpack.cpython-37m-darwin.so (0) <1EB60BA3-DFC5-3970-A052-DDAEC52BBC7B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/_fitpack.cpython-37m-darwin.so
0x12b3b1000 - 0x12b40dfff +dfitpack.cpython-37m-darwin.so (0) <6DDDE435-2622-3511-9598-E414C1076614> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/dfitpack.cpython-37m-darwin.so
0x12b41e000 - 0x12b448fff +_bspl.cpython-37m-darwin.so (0) <6301FAAE-E019-3ED9-A7E8-396550ABFD10> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/_bspl.cpython-37m-darwin.so
0x12b463000 - 0x12b49ffff +_ppoly.cpython-37m-darwin.so (0) <D68A7950-D74D-3DCE-A92A-E44427175F87> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/_ppoly.cpython-37m-darwin.so
0x12b4fc000 - 0x12b538ff7 +interpnd.cpython-37m-darwin.so (0) <A202C753-3B48-3192-8172-939D328E6D63> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/interpnd.cpython-37m-darwin.so
0x12b55a000 - 0x12b59eff7 +_stats.cpython-37m-darwin.so (0) <364241A1-BF01-370A-88CA-42D37D88FBAB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/stats/_stats.cpython-37m-darwin.so
0x12b6c3000 - 0x12b6c8ff7 +_raw_aes.cpython-37m-darwin.so (???) <1F862189-D7BD-34FC-8218-C98348A3B57D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_aes.cpython-37m-darwin.so
0x12b6cb000 - 0x12b6ccfff +_raw_aesni.cpython-37m-darwin.so (???) <D90003E6-B955-3F3C-ACD5-E3406613E935> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_aesni.cpython-37m-darwin.so
0x12b6cf000 - 0x12b6cfff7 +_contextvars.cpython-37m-darwin.so (0) <BFABAB06-4010-3C23-9E3D-BF0705E87D09> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_contextvars.cpython-37m-darwin.so
0x12b6d2000 - 0x12b6d3fff +fcntl.cpython-37m-darwin.so (0) <10868A3A-7663-33DC-B405-8F0BEE4DAA6A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/fcntl.cpython-37m-darwin.so
0x12b758000 - 0x12b764ff7 +statlib.cpython-37m-darwin.so (0) <8F2CFCAC-47D6-372E-9ABF-3D4424DB4C0B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/stats/statlib.cpython-37m-darwin.so
0x12b769000 - 0x12b77bfef +mvn.cpython-37m-darwin.so (0) <26820C7D-BBEC-31F9-8627-1E8D07FF5906> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/stats/mvn.cpython-37m-darwin.so
0x12b8f9000 - 0x12bdeffff +libproj.15.dylib (0) <D3CA33A2-7C0E-32EF-9028-DFD61B4D51B0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/.dylibs/libproj.15.dylib
0x12bf49000 - 0x12c06bff3 +libsqlite3.0.dylib (0) <C34BBD4D-8251-3D89-B334-1D94033A93EB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/.dylibs/libsqlite3.0.dylib
0x12c093000 - 0x12c0a2ff7 +_list.cpython-37m-darwin.so (???) <2DC6661F-21A8-399A-BE13-86683E1129DD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_list.cpython-37m-darwin.so
0x12c0f0000 - 0x12c128fff +_crs.cpython-37m-darwin.so (???) <D9EF9403-F5CB-3F58-A496-EE28064D4C0D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_crs.cpython-37m-darwin.so
0x12c159000 - 0x12c163fff +_geod.cpython-37m-darwin.so (???) <5FA2FBB9-C364-3142-BC54-57B8FDB815FB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_geod.cpython-37m-darwin.so
0x12c16d000 - 0x12c177ff7 +_proj.cpython-37m-darwin.so (???) <4578E36F-5E47-3AF2-B97B-4ADF5FEC37EF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_proj.cpython-37m-darwin.so
0x12c182000 - 0x12c19aff7 +_transformer.cpython-37m-darwin.so (???) <25B17D5E-2BCB-3F1D-BFAC-8EB5ABFBA9C3> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_transformer.cpython-37m-darwin.so
0x12c22f000 - 0x12c24eff3 +libgeos_c.1.dylib (0) <7A4B8EDB-A092-3095-B708-D7F261F7C5F5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/shapely/.dylibs/libgeos_c.1.dylib
0x12c260000 - 0x12c333ffb +libgeos-3.6.2.dylib (0) <60DF366F-25E3-30C4-9682-E393C9A21B83> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/shapely/.dylibs/libgeos-3.6.2.dylib
0x12c450000 - 0x12c46aff3 +_speedups.cpython-37m-darwin.so (0) <E549A48D-033E-3251-9EEE-2E00E61423D9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/shapely/speedups/_speedups.cpython-37m-darwin.so
0x12c4b9000 - 0x12c519ff7 +ogrext.cpython-37m-darwin.so (0) <C16F1087-E476-3F22-A4A3-F42013983D0B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/ogrext.cpython-37m-darwin.so
0x12c550000 - 0x12dac0ffb +libgdal.20.dylib (0) <B4C09B26-10D8-367F-B05E-D132FD5A43D5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libgdal.20.dylib
0x12df74000 - 0x12dfd5ffb +libproj.12.dylib (0) <6EB36E24-CDD5-358F-AB3B-B6E657CB8935> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libproj.12.dylib
0x12dfea000 - 0x12dff3ffb +libjson-c.2.dylib (0) <310BF741-A36A-3B81-BF65-F686C7E07ED0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libjson-c.2.dylib
0x12dff8000 - 0x12e016fff +libgeos_c.1.dylib (0) <18817ADA-5E51-3124-BC36-77EE8827876B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libgeos_c.1.dylib
0x12e029000 - 0x12e14bff3 +libsqlite3.0.dylib (0) <6C3DF904-B8C3-3A42-A9DE-48D0DBABB703> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libsqlite3.0.dylib
0x12e173000 - 0x12e1baffb +libopenjp2.2.3.0.dylib (0) <BD0F539C-FCA3-3846-A7F5-4C8FB9287E27> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libopenjp2.2.3.0.dylib
0x12e1c3000 - 0x12e290ffb +libnetcdf.11.dylib (0) <4DF67557-E4CC-3338-A523-AC3E0B7CF686> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libnetcdf.11.dylib
0x13130b000 - 0x13135cff3 +libjpeg.9.dylib (0) <564E6966-6C6D-3A9C-8C54-0187163BA378> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libjpeg.9.dylib
0x131364000 - 0x1313eefff +libtiff.5.dylib (0) <035CAD7E-6D4C-3329-9065-C655E068A9B2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libtiff.5.dylib
0x1313ff000 - 0x13143bff7 +libpng16.16.dylib (0) <24554181-3A37-31D7-B77B-F4FE6ADB016C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libpng16.16.dylib
0x131445000 - 0x1314f4ff7 +libgeos-3.6.2.dylib (0) <9586199A-F1C2-36A5-B94B-58EFA92A1E4E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libgeos-3.6.2.dylib
0x13159d000 - 0x1315b4ff3 +libhdf5_hl.100.dylib (0) <8585B545-1C2A-3AF7-8E15-3ECB59B00EE7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libhdf5_hl.100.dylib
0x1315bd000 - 0x1318aafff +libhdf5.101.dylib (0) <ABC3515E-5271-37A2-AC1D-30B40BE4D22B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libhdf5.101.dylib
0x1318f8000 - 0x13190efff +_geometry.cpython-37m-darwin.so (0) <59ABE045-B3E0-3171-A812-1BD19FE11D00> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_geometry.cpython-37m-darwin.so
0x131922000 - 0x13192cff3 +_shim.cpython-37m-darwin.so (0) <525654A0-F521-34B7-9887-C83414A01EEB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_shim.cpython-37m-darwin.so
0x131934000 - 0x131945ff3 +_err.cpython-37m-darwin.so (0) <17619AC4-42E2-3E39-9773-7DCC419868DB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_err.cpython-37m-darwin.so
0x131955000 - 0x131979ffb +_env.cpython-37m-darwin.so (0) <7B2D1954-7757-3A75-AD68-9CDCF772BFFD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_env.cpython-37m-darwin.so
0x1319d6000 - 0x1319dcfff +schema.cpython-37m-darwin.so (0) <2465A9C4-3C0E-326B-8E94-DC0C8B035C22> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/schema.cpython-37m-darwin.so
0x1323a4000 - 0x1323a5ff3 +_watchdog_fsevents.cpython-37m-darwin.so (0) <9DA9C06A-D0D1-38FD-B51E-D3B2BF39FBD7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/_watchdog_fsevents.cpython-37m-darwin.so
0x1323d6000 - 0x13283bfff +etree.cpython-37m-darwin.so (???) <3886C02D-DC32-3A51-93EE-1C2E3C6B0347> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/lxml/etree.cpython-37m-darwin.so
0x132bab000 - 0x132bc3ff3 +_logistic_sigmoid.cpython-37m-darwin.so (0) <4A023C13-709C-3196-B62B-F9A100A06722> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/_logistic_sigmoid.cpython-37m-darwin.so
0x132bd5000 - 0x132c55ff7 +sparsefuncs_fast.cpython-37m-darwin.so (0) <3CEB2E51-7B4B-3872-87DD-5DFE245A7AD6> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/sparsefuncs_fast.cpython-37m-darwin.so
0x132cc4000 - 0x132ce4fff +mio5_utils.cpython-37m-darwin.so (0) <DC538C39-B83C-3365-9E70-8136E9D6D0A4> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/io/matlab/mio5_utils.cpython-37m-darwin.so
0x132cfc000 - 0x132d0dfff +streams.cpython-37m-darwin.so (0) <53836439-5533-3289-96F0-98F056370ACC> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/io/matlab/streams.cpython-37m-darwin.so
0x132db6000 - 0x132dddff7 +_csr_polynomial_expansion.cpython-37m-darwin.so (0) <90ADD849-7D5F-393B-90C2-27224E353C75> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/preprocessing/_csr_polynomial_expansion.cpython-37m-darwin.so
0x132e35000 - 0x132e46ff3 +expected_mutual_info_fast.cpython-37m-darwin.so (0) <C2936B26-36C7-3B1A-833C-6857F06E7C0A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/metrics/cluster/expected_mutual_info_fast.cpython-37m-darwin.so
0x132e53000 - 0x132e76ff3 +pairwise_fast.cpython-37m-darwin.so (0) <E567B2C0-115F-3AA2-A8A6-46C3CCD203D4> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/metrics/pairwise_fast.cpython-37m-darwin.so
0x132e8d000 - 0x132ed2ff3 +_cython_blas.cpython-37m-darwin.so (0) <AD7800DE-BFF6-3B0E-84E7-AB6B0D1B7213> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/_cython_blas.cpython-37m-darwin.so
0x1330b3000 - 0x1330b8fff +_asyncio.cpython-37m-darwin.so (0) <53DC9766-AA75-3F19-BABC-F5DDAB748676> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_asyncio.cpython-37m-darwin.so
0x1332c4000 - 0x1332c7ff3 +greenlet.cpython-37m-darwin.so (0) <88DB7900-3B3E-3C83-A448-4CEB643EDEB0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/greenlet.cpython-37m-darwin.so
0x1334cc000 - 0x1334d0ffb +pvectorc.cpython-37m-darwin.so (0) <BE1C2C0F-C528-3355-9996-D90F6DCE376A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pvectorc.cpython-37m-darwin.so
0x133597000 - 0x1335ccff3 +corecext.cpython-37m-darwin.so (0) <85614F8D-070C-3B09-9F12-570186F44021> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/libev/corecext.cpython-37m-darwin.so
0x133635000 - 0x13363dfff +__hub_local.cpython-37m-darwin.so (0) <2992A652-8006-36B6-B758-75F169E473F2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__hub_local.cpython-37m-darwin.so
0x133647000 - 0x13364ffff +__greenlet_primitives.cpython-37m-darwin.so (0) <38D73EFD-3771-3F36-9E66-A26B1AB3286F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__greenlet_primitives.cpython-37m-darwin.so
0x13365a000 - 0x133666ff3 +__waiter.cpython-37m-darwin.so (0) <069331A5-9282-3E0B-8993-F9B757FE18AF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__waiter.cpython-37m-darwin.so
0x133674000 - 0x13368cfff +__hub_primitives.cpython-37m-darwin.so (0) <D7131A67-B584-3B79-BA07-0D8E37F5BA35> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__hub_primitives.cpython-37m-darwin.so
0x1336a0000 - 0x1336ceff7 +_greenlet.cpython-37m-darwin.so (0) <E7937A9B-92AE-3F1D-8765-C336B232D995> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/_greenlet.cpython-37m-darwin.so
0x1336f2000 - 0x1336f8ff3 +__ident.cpython-37m-darwin.so (0) <B0D4077A-8DC4-3B95-810D-68A429707AC2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__ident.cpython-37m-darwin.so
0x133741000 - 0x13374dfff +__abstract_linkable.cpython-37m-darwin.so (0) <245CCB93-5B7C-3CF9-AEEE-5A6FB59977A8> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__abstract_linkable.cpython-37m-darwin.so
0x13375a000 - 0x13376cff7 +_event.cpython-37m-darwin.so (0) <9D0476FA-89E5-33F1-9B5F-DFA8C1E91456> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/_event.cpython-37m-darwin.so
0x1337bf000 - 0x1337c0ff7 +termios.cpython-37m-darwin.so (0) <C7A91EC6-C4DF-388D-BB4D-C0940A5CD9BC> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/termios.cpython-37m-darwin.so
0x7fff37c8f000 - 0x7fff37c8ffff com.apple.Accelerate (1.11 - Accelerate 1.11) <956D070C-B522-3A08-891A-CAD6BA4082D1> /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate
0x7fff37ca7000 - 0x7fff38312fdf com.apple.vImage (8.1 - 524.2) <45A48EA9-49AA-33A0-B830-AF754BD01009> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage
0x7fff38313000 - 0x7fff3857bfff libBLAS.dylib (1303) <112B19CC-925A-3E28-8B32-2002D30A37FA> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
0x7fff3857c000 - 0x7fff3886bfdf libBNNS.dylib (144.11.2) <A806AED9-837B-3C6C-AB0B-A41983C1CD07> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBNNS.dylib
0x7fff3886d000 - 0x7fff38c12fff libLAPACK.dylib (1303) <5C248B39-F233-3074-A3A5-AF8F436FBF87> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib
0x7fff38c13000 - 0x7fff38c28ff8 libLinearAlgebra.dylib (1303) <C21931B4-F6BD-324D-A2D9-F13EE8AFB29E> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLinearAlgebra.dylib
0x7fff38c29000 - 0x7fff38c2eff3 libQuadrature.dylib (7) <826897ED-B7AD-33DC-B9CB-F9787784F312> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libQuadrature.dylib
0x7fff38c2f000 - 0x7fff38c9ffff libSparse.dylib (103) <55467C29-2096-36AB-8A6D-5231A342809D> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparse.dylib
0x7fff38ca0000 - 0x7fff38cb2fef libSparseBLAS.dylib (1303) <3244FCAF-A1FE-3248-AF22-BFB3E9D12555> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparseBLAS.dylib
0x7fff38cb3000 - 0x7fff38e8cffb libvDSP.dylib (735) <E849AEB0-2995-38A4-B0C3-4ACEAF434D12> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib
0x7fff38e8d000 - 0x7fff38f48fd3 libvMisc.dylib (735) <D6248EC4-7772-37BB-87F7-9BAB7F5D31A0> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib
0x7fff38f49000 - 0x7fff38f49fff com.apple.Accelerate.vecLib (3.11 - vecLib 3.11) <79C1A1C7-E97A-3B7A-8737-444B402A7AA0> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib
0x7fff3a674000 - 0x7fff3a9eaffe com.apple.CFNetwork (1111 - 1111) <642753C5-5D26-3794-9A4C-4F63F226C01A> /System/Library/Frameworks/CFNetwork.framework/Versions/A/CFNetwork
0x7fff3bea9000 - 0x7fff3c328ff7 com.apple.CoreFoundation (6.9 - 1671.15) <BF8A8279-9C5E-37C6-8426-90C8182EFBDD> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x7fff3d290000 - 0x7fff3d290fff com.apple.CoreServices (1069.2 - 1069.2) <C5F7AABC-BADC-3331-A7D6-9B0A82A23E58> /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices
0x7fff3d291000 - 0x7fff3d316ff7 com.apple.AE (838 - 838) <7295ED82-7087-3602-9DCA-4FE205F6499C> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE
0x7fff3d317000 - 0x7fff3d5f8fff com.apple.CoreServices.CarbonCore (1217 - 1217) <7AA0ECB3-0993-3081-A9EC-0365EF72B24D> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore
0x7fff3d5f9000 - 0x7fff3d646ff1 com.apple.DictionaryServices (1.2 - 321) <3D0FFBDE-E425-37C7-B780-39A3D024462A> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/DictionaryServices.framework/Versions/A/DictionaryServices
0x7fff3d647000 - 0x7fff3d64fff7 com.apple.CoreServices.FSEvents (1268.0.6 - 1268.0.6) <78D2AB1A-9053-3D32-8C18-C1DD31FF9400> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/FSEvents.framework/Versions/A/FSEvents
0x7fff3d650000 - 0x7fff3d888ff1 com.apple.LaunchServices (1069.2 - 1069.2) <68B4C10C-D536-33E9-9719-E7BA5B753F2B> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices
0x7fff3d889000 - 0x7fff3d921ff1 com.apple.Metadata (10.7.0 - 2066.12) <249EA615-8446-3A36-B6B7-ED613C8B8148> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata
0x7fff3d922000 - 0x7fff3d94fff7 com.apple.CoreServices.OSServices (1069.2 - 1069.2) <2FECF3BA-B785-35E2-85E9-2A2267801AA4> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices
0x7fff3d950000 - 0x7fff3d9b7fff com.apple.SearchKit (1.4.1 - 1.4.1) <0068BD72-CF2B-34E4-B461-002D5E56C31C> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit
0x7fff3d9b8000 - 0x7fff3d9dcffd com.apple.coreservices.SharedFileList (131 - 131) <61F62948-4109-38F0-BB91-5EBB6BEEAB10> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SharedFileList.framework/Versions/A/SharedFileList
0x7fff3e231000 - 0x7fff3e237ff7 com.apple.DiskArbitration (2.7 - 2.7) <23104F29-F120-354B-97BE-4514A675BB14> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration
0x7fff3e564000 - 0x7fff3e92bff3 com.apple.Foundation (6.9 - 1671.15) <4BEAB72D-10AA-3009-B0F5-B82B4FE1C441> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
0x7fff3e998000 - 0x7fff3e9c7ff3 com.apple.GSS (4.0 - 2.0) <9520F096-C643-36D7-B8CB-3922B6E6D7EC> /System/Library/Frameworks/GSS.framework/Versions/A/GSS
0x7fff3ec7d000 - 0x7fff3ed20ffb com.apple.framework.IOKit (2.0.2 - 1726.11.1) <9E81E92C-7EC2-330F-B0AF-BBFD9D3E46F6> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
0x7fff406e3000 - 0x7fff406f5ff3 com.apple.Kerberos (3.0 - 1) <91DF5D16-E721-39F0-A77B-87DA6032F870> /System/Library/Frameworks/Kerberos.framework/Versions/A/Kerberos
0x7fff406f6000 - 0x7fff406f6fff libHeimdalProxy.dylib (77) <51DB9CFB-808F-32E8-BB34-39F6702DBDED> /System/Library/Frameworks/Kerberos.framework/Versions/A/Libraries/libHeimdalProxy.dylib
0x7fff406f7000 - 0x7fff4072dfff com.apple.LDAPFramework (2.4.28 - 194.5) <32FAF82F-BA91-366A-83A3-CDFF6CDD1AF9> /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP
0x7fff42571000 - 0x7fff4257dffe com.apple.NetFS (6.0 - 4.0) <10ECF7E4-98A5-3751-B7AC-0AAAF0050777> /System/Library/Frameworks/NetFS.framework/Versions/A/NetFS
0x7fff45141000 - 0x7fff4515dfff com.apple.CFOpenDirectory (10.15 - 220.11.1) <6F4D018B-FA8B-35B2-8120-F8215DDA01CB> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/Frameworks/CFOpenDirectory.framework/Versions/A/CFOpenDirectory
0x7fff4515e000 - 0x7fff45169fff com.apple.OpenDirectory (10.15 - 220.11.1) <170173C2-DF22-3D11-914F-465AA7C4760A> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory
0x7fff484eb000 - 0x7fff4882fff9 com.apple.security (7.0 - 59306.11.20) <1B0AE660-0EC5-3497-ACE8-1AF2BB772BAB> /System/Library/Frameworks/Security.framework/Versions/A/Security
0x7fff48830000 - 0x7fff488b8ff7 com.apple.securityfoundation (6.0 - 55236) <903B8365-1F35-3EB2-9821-9D2C440BE2DD> /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation
0x7fff48911000 - 0x7fff48915ff8 com.apple.xpc.ServiceManagement (1.0 - 1) <EF42F840-AF78-38A4-B6E1-FDF445CA3477> /System/Library/Frameworks/ServiceManagement.framework/Versions/A/ServiceManagement
0x7fff496ff000 - 0x7fff49769ff7 com.apple.SystemConfiguration (1.19 - 1.19) <C0C089C3-FC64-3107-B23E-4073E800C5D2> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration
0x7fff4d4bd000 - 0x7fff4d581ff7 com.apple.APFS (1412.11.7 - 1412.11.7) <71DAB6CE-FF14-373D-A183-F388C5D9FE84> /System/Library/PrivateFrameworks/APFS.framework/Versions/A/APFS
0x7fff4f105000 - 0x7fff4f114fdf com.apple.AppleFSCompression (119 - 1.0) <725908C4-7585-3AB6-8A1A-691B8A9074B8> /System/Library/PrivateFrameworks/AppleFSCompression.framework/Versions/A/AppleFSCompression
0x7fff4f67c000 - 0x7fff4f680ff7 com.apple.AppleSRP (5.0 - 1) <B251E119-3F06-3CDB-9559-8CC8BBAF1529> /System/Library/PrivateFrameworks/AppleSRP.framework/Versions/A/AppleSRP
0x7fff508a0000 - 0x7fff508a9ff3 com.apple.coreservices.BackgroundTaskManagement (1.0 - 104) <156CFAE3-07D0-332C-90BE-BB2E4C893C99> /System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement
0x7fff52547000 - 0x7fff52550ff7 com.apple.CommonAuth (4.0 - 2.0) <BDE39131-6389-3243-9C4A-DBA165B8A2F9> /System/Library/PrivateFrameworks/CommonAuth.framework/Versions/A/CommonAuth
0x7fff53368000 - 0x7fff53373ff7 com.apple.frameworks.CoreDaemon (1.3 - 1.3) <76AC1239-46B1-387E-B053-ED56BDF55DE7> /System/Library/PrivateFrameworks/CoreDaemon.framework/Versions/B/CoreDaemon
0x7fff535ea000 - 0x7fff535faff3 com.apple.CoreEmoji (1.0 - 100) <9AB89183-635C-3859-9DC6-7BCE3A94D15E> /System/Library/PrivateFrameworks/CoreEmoji.framework/Versions/A/CoreEmoji
0x7fff53c46000 - 0x7fff53cb0ff8 com.apple.CoreNLP (1.0 - 211) <8F08AEF6-A380-3811-BAF0-D80E7C84B5AD> /System/Library/PrivateFrameworks/CoreNLP.framework/Versions/A/CoreNLP
0x7fff548c0000 - 0x7fff548eeff7 com.apple.CSStore (1069.2 - 1069.2) <5E3C50AB-9C00-36F3-A986-CE033480CA8B> /System/Library/PrivateFrameworks/CoreServicesStore.framework/Versions/A/CoreServicesStore
0x7fff5ddb3000 - 0x7fff5de26ffc com.apple.Heimdal (4.0 - 2.0) <169702C2-B210-3258-947C-F8EE6B361C26> /System/Library/PrivateFrameworks/Heimdal.framework/Versions/A/Heimdal
0x7fff6089c000 - 0x7fff60969ffd com.apple.LanguageModeling (1.0 - 212) <A9F41C84-E574-3624-8C00-60F228E0FF97> /System/Library/PrivateFrameworks/LanguageModeling.framework/Versions/A/LanguageModeling
0x7fff6096a000 - 0x7fff609b2ff7 com.apple.Lexicon-framework (1.0 - 70) <BEADF30C-37D3-3112-90DA-18A85406DBF3> /System/Library/PrivateFrameworks/Lexicon.framework/Versions/A/Lexicon
0x7fff609b9000 - 0x7fff609bdff6 com.apple.LinguisticData (1.0 - 349) <A392AD13-9AEB-31BB-A9ED-F29437CFBDB4> /System/Library/PrivateFrameworks/LinguisticData.framework/Versions/A/LinguisticData
0x7fff61dee000 - 0x7fff61e3aff7 com.apple.spotlight.metadata.utilities (1.0 - 2066.12) <989018A3-4BD0-3FD1-9A2D-88FD3A521452> /System/Library/PrivateFrameworks/MetadataUtilities.framework/Versions/A/MetadataUtilities
0x7fff6285d000 - 0x7fff62867fff com.apple.NetAuth (6.2 - 6.2) <90F9ADF4-8A9C-3603-8F55-24F8C472430B> /System/Library/PrivateFrameworks/NetAuth.framework/Versions/A/NetAuth
0x7fff6b4f8000 - 0x7fff6b508ff3 com.apple.TCC (1.0 - 1) <DCE1D8C7-7BEB-3201-A0E5-38F012E6B1BC> /System/Library/PrivateFrameworks/TCC.framework/Versions/A/TCC
0x7fff6bc27000 - 0x7fff6bc28fff com.apple.TrustEvaluationAgent (2.0 - 33) <B691985E-2E58-37C3-B336-8882DE4BF19A> /System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent
0x7fff6f748000 - 0x7fff6f74aff3 com.apple.loginsupport (1.0 - 1) <40974390-AFD7-3CEF-8B8D-6219BF916A4E> /System/Library/PrivateFrameworks/login.framework/Versions/A/Frameworks/loginsupport.framework/Versions/A/loginsupport
0x7fff6fab8000 - 0x7fff6faedff7 libCRFSuite.dylib (48) <45ADF347-A43F-3E95-BF26-94DC525DCC81> /usr/lib/libCRFSuite.dylib
0x7fff6faf0000 - 0x7fff6fafaff3 libChineseTokenizer.dylib (34) <94E822B6-F850-33C5-888C-D5C8AE12122C> /usr/lib/libChineseTokenizer.dylib
0x7fff6fb87000 - 0x7fff6fb89fff libDiagnosticMessagesClient.dylib (112) <418D550B-C748-3D55-A6D5-0398E032F9F3> /usr/lib/libDiagnosticMessagesClient.dylib
0x7fff7004e000 - 0x7fff7004fff3 libSystem.B.dylib (1281) <66742D2E-591A-32B2-8E92-4A54BEE843F6> /usr/lib/libSystem.B.dylib
0x7fff700df000 - 0x7fff700e0fff libThaiTokenizer.dylib (3) <D2A89215-5281-310F-8C75-47F1E6A14F62> /usr/lib/libThaiTokenizer.dylib
0x7fff700f8000 - 0x7fff7010efff libapple_nghttp2.dylib (1.39.2) <0A685DAA-9FC6-3C87-83F1-1D11FC87C1F4> /usr/lib/libapple_nghttp2.dylib
0x7fff70143000 - 0x7fff701b5ff7 libarchive.2.dylib (72.11.2) <8636AE5A-0CBB-306C-8A4B-2E612D2D6B13> /usr/lib/libarchive.2.dylib
0x7fff70250000 - 0x7fff70250ff3 libauto.dylib (187) <AACF68BC-9A05-36F8-8F60-78951422E090> /usr/lib/libauto.dylib
0x7fff7030e000 - 0x7fff7031efff libbsm.0.dylib (60) <F03FA480-0B22-3300-833F-03E88F43C504> /usr/lib/libbsm.0.dylib
0x7fff7031f000 - 0x7fff7032bfff libbz2.1.0.dylib (44) <8B522880-BEF8-3668-B785-F2AB4DE8F366> /usr/lib/libbz2.1.0.dylib
0x7fff7032c000 - 0x7fff7037ffff libc++.1.dylib (800.6) <328FB687-2363-38B1-AC11-11736925C775> /usr/lib/libc++.1.dylib
0x7fff70380000 - 0x7fff70394fff libc++abi.dylib (800.7) <02753D3D-91C6-3670-8B5D-EBE040B516FC> /usr/lib/libc++abi.dylib
0x7fff70395000 - 0x7fff70395ffb libcharset.1.dylib (59) <12D52FA5-EBCA-3F3C-A269-1095F268F92F> /usr/lib/libcharset.1.dylib
0x7fff70396000 - 0x7fff703a7ffb libcmph.dylib (8) <7DD1F726-F3E3-341A-AFAC-83E9A470883C> /usr/lib/libcmph.dylib
0x7fff703a8000 - 0x7fff703bffe7 libcompression.dylib (87) <10B82884-BE1A-3A36-9B38-3C92AF566D3E> /usr/lib/libcompression.dylib
0x7fff70681000 - 0x7fff70697fff libcoretls.dylib (167) <1C64EA6F-8E0D-319D-99D4-026150EEB2B2> /usr/lib/libcoretls.dylib
0x7fff70698000 - 0x7fff70699ffb libcoretls_cfhelpers.dylib (167) <724B0181-4D9E-3D2F-B1AB-B4FD6A7BAB2C> /usr/lib/libcoretls_cfhelpers.dylib
0x7fff70b3d000 - 0x7fff70c41fe7 libcrypto.44.dylib (47.11.1) <55C6ABAB-C237-39F1-A78C-4594896CF7E6> /usr/lib/libcrypto.44.dylib
0x7fff70cb8000 - 0x7fff70d1ffff libcurl.4.dylib (118) <2B9763A5-A54D-3F2B-98BD-1F9BAEADE5E0> /usr/lib/libcurl.4.dylib
0x7fff70dc2000 - 0x7fff70dc2ff3 libenergytrace.dylib (21) <E42B4AFF-3FAC-3CE4-A7BF-48621458B356> /usr/lib/libenergytrace.dylib
0x7fff70dc3000 - 0x7fff70ddcff7 libexpat.1.dylib (19) <A0E2F6F3-BFFA-3D59-872F-F093487F0B42> /usr/lib/libexpat.1.dylib
0x7fff70dea000 - 0x7fff70decff7 libfakelink.dylib (149) <FC5712CB-2188-3DAD-8DD4-CC3ECCA3F9A8> /usr/lib/libfakelink.dylib
0x7fff70dfb000 - 0x7fff70e00fff libgermantok.dylib (24) <93E95178-E436-3611-A4A3-CB1D42CF4064> /usr/lib/libgermantok.dylib
0x7fff70e01000 - 0x7fff70e05ff7 libheimdal-asn1.dylib (564.0.3) <51551E63-5AC6-30D3-B178-8BDA782C80EA> /usr/lib/libheimdal-asn1.dylib
0x7fff70e06000 - 0x7fff70ef6ff7 libiconv.2.dylib (59) <1A648E74-25D4-3F9B-94C4-10B58AD091B7> /usr/lib/libiconv.2.dylib
0x7fff70ef7000 - 0x7fff7114fff7 libicucore.A.dylib (64232.0.1) <88E47471-605C-3C86-871B-5D2F4628A936> /usr/lib/libicucore.A.dylib
0x7fff71169000 - 0x7fff7116afff liblangid.dylib (133) <78DF87EE-CDCE-3628-B239-56743169BC93> /usr/lib/liblangid.dylib
0x7fff7116b000 - 0x7fff71183ffb liblzma.5.dylib (16) <D416FC97-76EC-38B5-A134-85DDFEA9D297> /usr/lib/liblzma.5.dylib
0x7fff7119b000 - 0x7fff71242fff libmecab.dylib (879) <DD60E6AA-154E-3294-B2C0-3277B754F3BC> /usr/lib/libmecab.dylib
0x7fff71243000 - 0x7fff714a2fe9 libmecabra.dylib (879) <B5BE574C-DD3A-336D-87A3-202CE7803A45> /usr/lib/libmecabra.dylib
0x7fff71961000 - 0x7fff71dd3ff6 libnetwork.dylib (1880.11.2) <CC02BF51-A056-3656-B735-E8CD0F4B7B15> /usr/lib/libnetwork.dylib
0x7fff71e71000 - 0x7fff71ea2fe6 libobjc.A.dylib (779.1) <722C0959-69B8-3843-B5EA-CDD8FAA91D5E> /usr/lib/libobjc.A.dylib
0x7fff71eb5000 - 0x7fff71eb9fff libpam.2.dylib (25) <86317F48-E926-30AC-AE9F-ABB33543FBC8> /usr/lib/libpam.2.dylib
0x7fff71ebc000 - 0x7fff71eefff7 libpcap.A.dylib (89.11.2) <65A8EBD2-F059-353B-9538-20D1314EFD89> /usr/lib/libpcap.A.dylib
0x7fff71f71000 - 0x7fff71f89ff7 libresolv.9.dylib (67) <06480BFC-6372-3225-B77A-F5AC9969DB64> /usr/lib/libresolv.9.dylib
0x7fff71fcf000 - 0x7fff71fe1fff libsasl2.2.dylib (213) <277129F1-29AE-34EB-BBAB-FF6DF4B43FAB> /usr/lib/libsasl2.2.dylib
0x7fff71fe4000 - 0x7fff721d1ff7 libsqlite3.dylib (308.1) <7033723E-DD65-3AA3-ADCA-8746F7BAD75D> /usr/lib/libsqlite3.dylib
0x7fff722c5000 - 0x7fff722f2ffb libssl.46.dylib (47.11.1) <CA81BD73-E5BF-3F88-A70E-49BA7C6B2781> /usr/lib/libssl.46.dylib
0x7fff723c4000 - 0x7fff7241fff0 libusrtcp.dylib (1880.11.2) <B821F69B-2E28-36C7-8F11-6990F8D4E26B> /usr/lib/libusrtcp.dylib
0x7fff72420000 - 0x7fff72423ffb libutil.dylib (57) <844B7887-09B3-3D12-ACDE-C4EB8F53DC62> /usr/lib/libutil.dylib
0x7fff72424000 - 0x7fff72431fff libxar.1.dylib (420) <E0D7C0A6-76EC-3682-A393-6596D4986269> /usr/lib/libxar.1.dylib
0x7fff72437000 - 0x7fff72519ff7 libxml2.2.dylib (32.12) <C0A87484-D334-3B50-8F8A-A9C63295F49E> /usr/lib/libxml2.2.dylib
0x7fff7251d000 - 0x7fff72545fff libxslt.1.dylib (16.6) <CD9E79B0-159A-3055-B62A-57AB2B445912> /usr/lib/libxslt.1.dylib
0x7fff72546000 - 0x7fff72558fff libz.1.dylib (76) <3FC3FC3E-ABF3-3167-9078-B54C952608B4> /usr/lib/libz.1.dylib
0x7fff72fbd000 - 0x7fff72fc2ff7 libcache.dylib (83) <8EC69760-6DAA-3068-9372-F1D554C548E5> /usr/lib/system/libcache.dylib
0x7fff72fc3000 - 0x7fff72fceff7 libcommonCrypto.dylib (60165) <698BE754-137D-361D-B826-57B8DD969E4A> /usr/lib/system/libcommonCrypto.dylib
0x7fff72fcf000 - 0x7fff72fd6fff libcompiler_rt.dylib (101.2) <0BE7F119-C9C2-3612-A3F4-2336D4444476> /usr/lib/system/libcompiler_rt.dylib
0x7fff72fd7000 - 0x7fff72fe0ff7 libcopyfile.dylib (166) <B5E73B1C-0BCF-33FE-80A1-D9E3BA3033D4> /usr/lib/system/libcopyfile.dylib
0x7fff72fe1000 - 0x7fff73078fc3 libcorecrypto.dylib (866.0.10) <58344B13-CD10-3697-A516-6F5B06DD1EB7> /usr/lib/system/libcorecrypto.dylib
0x7fff7318f000 - 0x7fff731d0ff0 libdispatch.dylib (1173.0.3) <F4260D89-F67D-30CB-AF61-7ED25CB687DB> /usr/lib/system/libdispatch.dylib
0x7fff731d1000 - 0x7fff73206fff libdyld.dylib (732.8) <98960E27-A08B-36DA-A5CB-8538B2D6757E> /usr/lib/system/libdyld.dylib
0x7fff73207000 - 0x7fff73207ffb libkeymgr.dylib (30) <682B41BC-BDC2-38D5-8820-87099606FA12> /usr/lib/system/libkeymgr.dylib
0x7fff73208000 - 0x7fff73214ff7 libkxld.dylib (6153.11.26) <53BE9630-BDC8-3649-8709-7A4F86777B1A> /usr/lib/system/libkxld.dylib
0x7fff73215000 - 0x7fff73215ff7 liblaunch.dylib (1738.11.1) <7FE11F0D-65BC-3726-B1DD-E84F329193E0> /usr/lib/system/liblaunch.dylib
0x7fff73216000 - 0x7fff7321bff7 libmacho.dylib (949.0.1) <163DFE06-2FAD-3CBC-80EF-C38EED6AEA52> /usr/lib/system/libmacho.dylib
0x7fff7321c000 - 0x7fff7321eff3 libquarantine.dylib (110.0.4) <C8F39330-8CB5-30B0-8564-96453DCEFAD7> /usr/lib/system/libquarantine.dylib
0x7fff7321f000 - 0x7fff73220ff7 libremovefile.dylib (48) <FB280185-B5ED-3F08-B08A-A378865C1398> /usr/lib/system/libremovefile.dylib
0x7fff73221000 - 0x7fff73238fff libsystem_asl.dylib (377.0.1) <30CE9DAF-B1FA-3510-832B-F1CE19933ED7> /usr/lib/system/libsystem_asl.dylib
0x7fff73239000 - 0x7fff73239fff libsystem_blocks.dylib (74) <E0B8C825-E62B-357E-A2A0-13776F0A0F8C> /usr/lib/system/libsystem_blocks.dylib
0x7fff7323a000 - 0x7fff732c1ff7 libsystem_c.dylib (1353.11.2) <2A5BFAFE-8214-3B35-AD46-C07A1A8B8941> /usr/lib/system/libsystem_c.dylib
0x7fff732c2000 - 0x7fff732c5fff libsystem_configuration.dylib (1061.0.2) <56174463-22ED-337F-B0D4-94995DCDB9B7> /usr/lib/system/libsystem_configuration.dylib
0x7fff732c6000 - 0x7fff732c9ff7 libsystem_coreservices.dylib (114) <01695913-028E-3AE1-8D4E-2B2769109811> /usr/lib/system/libsystem_coreservices.dylib
0x7fff732ca000 - 0x7fff732d1fff libsystem_darwin.dylib (1353.11.2) <4CE52C63-87AA-3C6D-899F-8C983E5FC061> /usr/lib/system/libsystem_darwin.dylib
0x7fff732d2000 - 0x7fff732d9ffb libsystem_dnssd.dylib (1096.0.2) <DC7381E8-F09F-3441-8267-9B8F50A0EBA9> /usr/lib/system/libsystem_dnssd.dylib
0x7fff732da000 - 0x7fff732dbffb libsystem_featureflags.dylib (17) <DBCA4AA2-CA05-38D5-AB4B-BE0F3E09BB8B> /usr/lib/system/libsystem_featureflags.dylib
0x7fff732dc000 - 0x7fff73329ff7 libsystem_info.dylib (538) <9F9D6368-A11E-32F1-9BB5-C153F42EFED8> /usr/lib/system/libsystem_info.dylib
0x7fff7332a000 - 0x7fff73356fff libsystem_kernel.dylib (6153.11.26) <4CE9D54A-A975-348E-B878-EE74EDFC956B> /usr/lib/system/libsystem_kernel.dylib
0x7fff73357000 - 0x7fff7339eff7 libsystem_m.dylib (3178) <4F516261-0C0E-3105-AF35-EF39D6347B50> /usr/lib/system/libsystem_m.dylib
0x7fff7339f000 - 0x7fff733c6fff libsystem_malloc.dylib (283) <02925539-3CBA-39EB-BA6B-9A936CFA0BF8> /usr/lib/system/libsystem_malloc.dylib
0x7fff733c7000 - 0x7fff733d4ff3 libsystem_networkextension.dylib (1095.11.9) <8B5EE189-E3D1-31FD-878F-50286B6E7047> /usr/lib/system/libsystem_networkextension.dylib
0x7fff733d5000 - 0x7fff733defff libsystem_notify.dylib (241) <89381127-2A07-3F07-B865-358FACCF9102> /usr/lib/system/libsystem_notify.dylib
0x7fff733df000 - 0x7fff733e8fe7 libsystem_platform.dylib (220) <90E508E4-46D8-33FF-8552-DDAA079977A0> /usr/lib/system/libsystem_platform.dylib
0x7fff733e9000 - 0x7fff733f3fff libsystem_pthread.dylib (416.11.1) <2EA6F95F-F264-30B6-8AF2-24197B5AED84> /usr/lib/system/libsystem_pthread.dylib
0x7fff733f4000 - 0x7fff733f8ffb libsystem_sandbox.dylib (1217.11.16) <51129CBB-BC46-37F1-A1B5-ECFA9530704D> /usr/lib/system/libsystem_sandbox.dylib
0x7fff733f9000 - 0x7fff733fbfff libsystem_secinit.dylib (62.0.4) <A48D9AF3-75F2-3331-A0C8-0A23771F4AC7> /usr/lib/system/libsystem_secinit.dylib
0x7fff733fc000 - 0x7fff73403ffb libsystem_symptoms.dylib (1238.0.2) <08E8CF75-5F77-3475-A48E-A01CBDF09173> /usr/lib/system/libsystem_symptoms.dylib
0x7fff73404000 - 0x7fff7341aff2 libsystem_trace.dylib (1147.0.3) <5836645E-9862-326D-AB3B-A19E76BE29B9> /usr/lib/system/libsystem_trace.dylib
0x7fff7341c000 - 0x7fff73421ffb libunwind.dylib (35.4) <F5AE1D43-7C5F-3BA2-94D3-E769F82C0F61> /usr/lib/system/libunwind.dylib
0x7fff73422000 - 0x7fff73456ff6 libxpc.dylib (1738.11.1) <2E9076CD-6C0E-38B6-8403-B2DDCE125FBF> /usr/lib/system/libxpc.dylib
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 32
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 7803078
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=660.2M resident=0K(0%) swapped_out_or_unallocated=660.2M(100%)
Writable regions: Total=1.5G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=1.5G(100%)
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Activity Tracing 256K 1
Dispatch continuations 16.0M 1
Kernel Alloc Once 8K 1
MALLOC 1.1G 724
MALLOC guard page 16K 3
MALLOC_LARGE (reserved) 256K 2 reserved VM address space (unallocated)
STACK GUARD 36K 9
Stack 51.6M 9
VM_ALLOCATE 4K 1
VM_ALLOCATE (reserved) 192.0M 2 reserved VM address space (unallocated)
__DATA 62.5M 464
__DATA_CONST 20K 1
__LINKEDIT 380.1M 248
__OBJC_RO 31.8M 1
__OBJC_RW 1764K 2
__TEXT 280.2M 382
__UNICODE 564K 1
mapped file 28K 1
shared memory 44K 5
=========== ======= =======
TOTAL 2.1G 1858
TOTAL, minus reserved VM space 2.0G 1858
Model: MacBookPro13,3, BootROM 262.0.0.0.0, 4 processors, Quad-Core Intel Core i7, 2.9 GHz, 16 GB, SMC 2.38f8
Graphics: kHW_IntelHDGraphics530Item, Intel HD Graphics 530, spdisplays_builtin
Graphics: kHW_AMDRadeonPro460Item, AMD Radeon Pro 460, spdisplays_pcie_device, 4 GB
Memory Module: BANK 0/DIMM0, 8 GB, LPDDR3, 2133 MHz, 0x802C, 0x4D5435324C31473332443450472D30393320
Memory Module: BANK 1/DIMM0, 8 GB, LPDDR3, 2133 MHz, 0x802C, 0x4D5435324C31473332443450472D30393320
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x15A), Broadcom BCM43xx 1.0 (7.77.105.1 AirPortDriverBrcmNIC-1429)
Bluetooth: Version 7.0.0f8, 3 services, 27 devices, 1 incoming serial ports
Network Service: Wi-Fi, AirPort, en0
USB Device: USB 3.0 Bus
USB Device: Apple T1 Controller
Thunderbolt Bus: MacBook Pro, Apple Inc., 41.2
Thunderbolt Bus: MacBook Pro, Apple Inc., 41.2
</details>
#### Expected Output
The grouped quantile call should return the 0.5 quantile for each bin instead of crashing the interpreter.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Darwin
OS-release : 19.0.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.17.2
pytz : 2019.2
dateutil : 2.8.0
pip : 19.2.3
setuptools : 40.8.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.1
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.1
IPython : None
pandas_datareader: 0.8.1
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.1
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : 1.3.9
tables : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
</details>
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org): 1.13.3 or higher
168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
172 for recommended and optional dependencies.
173
174 ## Installation from sources
175 To install pandas from source you need Cython in addition to the normal
176 dependencies above. Cython can be installed from pypi:
177
178 ```sh
179 pip install cython
180 ```
181
182 In the `pandas` directory (same one where you found this file after
183 cloning the git repo), execute:
184
185 ```sh
186 python setup.py install
187 ```
188
189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
190
191
192 ```sh
193 python -m pip install --no-build-isolation -e .
194 ```
195
196 If you have `make`, you can also use `make develop` to run the same command.
197
198 or alternatively
199
200 ```sh
201 python setup.py develop
202 ```
203
204 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
205
206 ## License
207 [BSD 3](LICENSE)
208
209 ## Documentation
210 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
211
212 ## Background
213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
214 has been under active development since then.
215
216 ## Getting Help
217
218 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
219 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
220
221 ## Discussion and Development
222 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
223
224 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
225
226 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
227
228 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
229
230 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
231
232 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
233
234 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
235
236 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
237
238 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
239
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pandas-dev/pandas
base_commit: d7de334359f1872b52abd34e5e42d2a77f97e55d

problem_statement:
Crash during groupby quantile
#### Code Sample, a copy-pastable example if possible
```python
dtf = dtf.groupby(cut(dtf[X], rng)).quantile(.5)
```
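The snippet above is not self-contained: `dtf`, `X`, and `rng` are defined elsewhere in the reporter's code, and `cut` is presumably `pandas.cut`. A minimal, self-contained sketch of the same call pattern (the DataFrame contents and bin edges below are hypothetical, not taken from the report) would look like:
```python
# Hypothetical reconstruction of the reported call pattern; the names and data
# are invented for illustration and are not the reporter's actual inputs.
import numpy as np
import pandas as pd

dtf = pd.DataFrame(
    {"X": np.random.uniform(0, 100, size=1000), "Y": np.random.normal(size=1000)}
)
rng = np.arange(0, 110, 10)  # bin edges passed to pd.cut

# Group rows by the bin of column "X" and take the median of each group.
result = dtf.groupby(pd.cut(dtf["X"], rng)).quantile(0.5)
print(result)
```
This only illustrates the call shape; whether it actually triggers the segfault depends on the data and the pandas/NumPy build involved.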
#### Problem description
The call above crashes the Python process with a segmentation fault. The macOS crash report below shows the fault (EXC_BAD_ACCESS / SIGSEGV) occurring in thread 7 inside `group_quantile` from `pandas._libs.groupby`:
Process: Python [24642]
Path: /Library/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python
Version: 3.7.4 (3.7.4)
Code Type: X86-64 (Native)
Parent Process: Python [24593]
Responsible: iTerm2 [1703]
User ID: 501
Date/Time: 2019-10-09 17:11:04.949 -0500
OS Version: Mac OS X 10.15 (19A583)
Report Version: 12
Bridge OS Version: 3.0 (14Y904)
Anonymous UUID: F986CCB3-5DD1-9587-8492-6D8B8A43979D
Sleep/Wake UUID: 42F77302-9822-4979-89CB-7C39F3C0556A
Time Awake Since Boot: 67000 seconds
Time Since Wake: 1900 seconds
System Integrity Protection: enabled
Crashed Thread: 7
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x000000013a6dcff8
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [24642]
VM Regions Near 0x13a6dcff8:
MALLOC_LARGE 000000013a5ee000-000000013a633000 [ 276K] rw-/rwx SM=PRV
-->
MALLOC_LARGE 000000013a6dd000-000000013a74d000 [ 448K] rw-/rwx SM=PRV
0 groupby.cpython-37m-darwin.so 0x000000011b1596cf **__pyx_fuse_9__pyx_pw_6pandas_5_libs_7groupby_125group_quantile** + 6719
1 algos.cpython-37m-darwin.so 0x0000000119e8937c __pyx_FusedFunction_call + 812
<details>
Thread 0:: Dispatch queue: com.apple.main-thread
0 org.python.python 0x000000010d924bbf _PyEval_EvalFrameDefault + 1423
1 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
2 org.python.python 0x000000010d92d8c2 call_function + 738
3 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
4 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
5 org.python.python 0x000000010d92d8c2 call_function + 738
6 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
7 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
8 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
9 org.python.python 0x000000010d92d8c2 call_function + 738
10 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
15 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
16 org.python.python 0x000000010d92d8c2 call_function + 738
17 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
18 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
19 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
20 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
21 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
22 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
23 org.python.python 0x000000010d92d8c2 call_function + 738
24 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
25 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
26 org.python.python 0x000000010d92d8c2 call_function + 738
27 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
28 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
29 org.python.python 0x000000010d924554 PyEval_EvalCode + 100
30 org.python.python 0x000000010d961c31 PyRun_FileExFlags + 209
31 org.python.python 0x000000010d9614aa PyRun_SimpleFileExFlags + 890
32 org.python.python 0x000000010d980903 pymain_main + 6915
33 org.python.python 0x000000010d980e6a _Py_UnixMain + 58
34 libdyld.dylib 0x00007fff731e2405 start + 1
Thread 1:
0 libsystem_pthread.dylib 0x00007fff733eb5b4 start_wqthread + 0
Thread 2:
0 libsystem_kernel.dylib 0x00007fff7333359e poll + 10
1 select.cpython-37m-darwin.so 0x000000010e0c1982 poll_poll + 466
2 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
3 org.python.python 0x000000010d874d42 _PyMethodDescr_FastCallKeywords + 82
4 org.python.python 0x000000010d92d8ec call_function + 780
5 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
6 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
7 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
8 org.python.python 0x000000010d92d8c2 call_function + 738
9 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
10 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
11 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
18 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
19 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
20 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
21 org.python.python 0x000000010d92d8c2 call_function + 738
22 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
23 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
24 org.python.python 0x000000010d92d8c2 call_function + 738
25 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
26 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
27 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
28 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
29 org.python.python 0x000000010d86e707 PyObject_Call + 135
30 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
31 org.python.python 0x000000010d96c939 pythread_wrapper + 25
32 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
33 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 3:
0 libsystem_kernel.dylib 0x00007fff7332e8f6 __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff733ef082 _pthread_cond_wait + 701
2 org.python.python 0x000000010d96ce01 PyThread_acquire_lock_timed + 673
3 org.python.python 0x000000010d9b620f acquire_timed + 111
4 org.python.python 0x000000010d9b6320 lock_PyThread_acquire_lock + 48
5 org.python.python 0x000000010d86f1dd _PyMethodDef_RawFastCallKeywords + 685
6 org.python.python 0x000000010d874d42 _PyMethodDescr_FastCallKeywords + 82
7 org.python.python 0x000000010d92d8ec call_function + 780
8 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
9 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
10 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
11 org.python.python 0x000000010d92d8c2 call_function + 738
12 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
13 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
14 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d92d8c2 call_function + 738
19 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
20 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
21 org.python.python 0x000000010d92d8c2 call_function + 738
22 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
23 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
24 org.python.python 0x000000010d92d8c2 call_function + 738
25 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
26 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
27 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
28 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
29 org.python.python 0x000000010d86e707 PyObject_Call + 135
30 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
31 org.python.python 0x000000010d96c939 pythread_wrapper + 25
32 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
33 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 4:
0 libsystem_kernel.dylib 0x00007fff7332b146 mach_msg_trap + 10
1 libsystem_kernel.dylib 0x00007fff7332b6ac mach_msg + 60
2 com.apple.CoreFoundation 0x00007fff3bee419b __CFRunLoopServiceMachPort + 322
3 com.apple.CoreFoundation 0x00007fff3bee3737 __CFRunLoopRun + 1695
4 com.apple.CoreFoundation 0x00007fff3bee2e13 CFRunLoopRunSpecific + 499
5 com.apple.CoreFoundation 0x00007fff3bee2bea CFRunLoopRun + 40
6 _watchdog_fsevents.cpython-37m-darwin.so 0x00000001323a5915 watchdog_read_events + 149
7 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
19 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
20 org.python.python 0x000000010d86e707 PyObject_Call + 135
21 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
22 org.python.python 0x000000010d96c939 pythread_wrapper + 25
23 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
24 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 5:
0 libsystem_kernel.dylib 0x00007fff7332b146 mach_msg_trap + 10
1 libsystem_kernel.dylib 0x00007fff7332b6ac mach_msg + 60
2 com.apple.CoreFoundation 0x00007fff3bee419b __CFRunLoopServiceMachPort + 322
3 com.apple.CoreFoundation 0x00007fff3bee3737 __CFRunLoopRun + 1695
4 com.apple.CoreFoundation 0x00007fff3bee2e13 CFRunLoopRunSpecific + 499
5 com.apple.CoreFoundation 0x00007fff3bee2bea CFRunLoopRun + 40
6 _watchdog_fsevents.cpython-37m-darwin.so 0x00000001323a5915 watchdog_read_events + 149
7 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
19 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
20 org.python.python 0x000000010d86e707 PyObject_Call + 135
21 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
22 org.python.python 0x000000010d96c939 pythread_wrapper + 25
23 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
24 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 6:
0 libsystem_kernel.dylib 0x00007fff7332b146 mach_msg_trap + 10
1 libsystem_kernel.dylib 0x00007fff7332b6ac mach_msg + 60
2 com.apple.CoreFoundation 0x00007fff3bee419b __CFRunLoopServiceMachPort + 322
3 com.apple.CoreFoundation 0x00007fff3bee3737 __CFRunLoopRun + 1695
4 com.apple.CoreFoundation 0x00007fff3bee2e13 CFRunLoopRunSpecific + 499
5 com.apple.CoreFoundation 0x00007fff3bee2bea CFRunLoopRun + 40
6 _watchdog_fsevents.cpython-37m-darwin.so 0x00000001323a5915 watchdog_read_events + 149
7 org.python.python 0x000000010d86f1cc _PyMethodDef_RawFastCallKeywords + 668
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
14 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
15 org.python.python 0x000000010d92d8c2 call_function + 738
16 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
17 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
18 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
19 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
20 org.python.python 0x000000010d86e707 PyObject_Call + 135
21 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
22 org.python.python 0x000000010d96c939 pythread_wrapper + 25
23 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
24 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 7 Crashed:
0 groupby.cpython-37m-darwin.so 0x000000011b1596cf __pyx_fuse_9__pyx_pw_6pandas_5_libs_7groupby_125group_quantile + 6719
1 algos.cpython-37m-darwin.so 0x0000000119e8937c __pyx_FusedFunction_call + 812
2 org.python.python 0x000000010d86e707 PyObject_Call + 135
3 org.python.python 0x000000010d9a45c0 partial_call + 256
4 org.python.python 0x000000010d86e707 PyObject_Call + 135
5 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
6 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
7 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
8 org.python.python 0x000000010d92d8c2 call_function + 738
9 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
10 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
11 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
12 org.python.python 0x000000010d92d8c2 call_function + 738
13 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
14 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
15 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
16 org.python.python 0x000000010d92d8c2 call_function + 738
17 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
18 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
19 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
20 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
21 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
22 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
23 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
24 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
25 org.python.python 0x000000010d92d8c2 call_function + 738
26 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
27 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
28 org.python.python 0x000000010d92d8c2 call_function + 738
29 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
30 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
31 org.python.python 0x000000010d92d8c2 call_function + 738
32 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
33 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
34 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
35 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
36 org.python.python 0x000000010d8bc926 slot_tp_call + 150
37 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
38 org.python.python 0x000000010d92d784 call_function + 420
39 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
40 org.python.python 0x000000010d87ca5e gen_send_ex + 206
41 org.python.python 0x000000010d92a06f _PyEval_EvalFrameDefault + 23103
42 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
43 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
44 org.python.python 0x000000010d92d8c2 call_function + 738
45 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
46 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
47 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
48 org.python.python 0x000000010d92d8c2 call_function + 738
49 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
50 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
51 org.python.python 0x000000010d92d8c2 call_function + 738
52 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
53 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
54 org.python.python 0x000000010d92d8c2 call_function + 738
55 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
56 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
57 org.python.python 0x000000010d92d8c2 call_function + 738
58 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
59 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
60 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
61 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
62 org.python.python 0x000000010d8bdd61 slot_tp_init + 145
63 org.python.python 0x000000010d8b9749 type_call + 297
64 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
65 org.python.python 0x000000010d92d784 call_function + 420
66 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
67 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
68 org.python.python 0x000000010d92d8c2 call_function + 738
69 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
70 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
71 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
72 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
73 org.python.python 0x000000010d86e707 PyObject_Call + 135
74 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
75 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
76 org.python.python 0x000000010d92d8c2 call_function + 738
77 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
78 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
79 org.python.python 0x000000010d92d8c2 call_function + 738
80 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
81 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
82 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
83 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
84 org.python.python 0x000000010d86e707 PyObject_Call + 135
85 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
86 org.python.python 0x000000010d96c939 pythread_wrapper + 25
87 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
88 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 8:
0 _multiarray_umath.cpython-37m-darwin.so 0x000000010ea13664 BOOL_logical_not + 612
1 _multiarray_umath.cpython-37m-darwin.so 0x000000010eab05a1 trivial_two_operand_loop + 273
2 _multiarray_umath.cpython-37m-darwin.so 0x000000010eaa9140 PyUFunc_GenericFunction + 15792
3 _multiarray_umath.cpython-37m-darwin.so 0x000000010eaabc68 ufunc_generic_call + 136
4 org.python.python 0x000000010d86def9 _PyObject_FastCallDict + 297
5 org.python.python 0x000000010d87014c object_vacall + 316
6 org.python.python 0x000000010d870334 PyObject_CallFunctionObjArgs + 148
7 org.python.python 0x000000010d86f225 _PyMethodDef_RawFastCallKeywords + 757
8 org.python.python 0x000000010d86e5da _PyCFunction_FastCallKeywords + 42
9 org.python.python 0x000000010d92d8b4 call_function + 724
10 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
11 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
12 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
13 org.python.python 0x000000010d86f3df _PyObject_FastCall_Prepend + 127
14 org.python.python 0x000000010d8c0316 slot_nb_invert + 134
15 org.python.python 0x000000010d925152 _PyEval_EvalFrameDefault + 2850
16 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
17 org.python.python 0x000000010d92d8c2 call_function + 738
18 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
19 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
20 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
21 org.python.python 0x000000010d92d8c2 call_function + 738
22 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
23 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
24 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
25 org.python.python 0x000000010d92d8c2 call_function + 738
26 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
27 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
28 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
29 org.python.python 0x000000010d92d8c2 call_function + 738
30 org.python.python 0x000000010d92a9d4 _PyEval_EvalFrameDefault + 25508
31 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
32 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
33 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
34 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
35 org.python.python 0x000000010d86e17b _PyFunction_FastCallDict + 523
36 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
37 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
38 org.python.python 0x000000010d92d8c2 call_function + 738
39 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
40 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
41 org.python.python 0x000000010d92d8c2 call_function + 738
42 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
43 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
44 org.python.python 0x000000010d92d8c2 call_function + 738
45 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
46 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
47 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
48 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
49 org.python.python 0x000000010d8bc926 slot_tp_call + 150
50 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
51 org.python.python 0x000000010d92d784 call_function + 420
52 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
53 org.python.python 0x000000010d87ca5e gen_send_ex + 206
54 org.python.python 0x000000010d92a06f _PyEval_EvalFrameDefault + 23103
55 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
56 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
57 org.python.python 0x000000010d92d8c2 call_function + 738
58 org.python.python 0x000000010d92a92e _PyEval_EvalFrameDefault + 25342
59 org.python.python 0x000000010d92e413 _PyEval_EvalCodeWithName + 2467
60 org.python.python 0x000000010d86e5a1 _PyFunction_FastCallKeywords + 257
61 org.python.python 0x000000010d92d8c2 call_function + 738
62 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
63 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
64 org.python.python 0x000000010d92d8c2 call_function + 738
65 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
66 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
67 org.python.python 0x000000010d92d8c2 call_function + 738
68 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
69 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
70 org.python.python 0x000000010d92d8c2 call_function + 738
71 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
72 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
73 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
74 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
75 org.python.python 0x000000010d8bdd61 slot_tp_init + 145
76 org.python.python 0x000000010d8b9749 type_call + 297
77 org.python.python 0x000000010d86e3f1 _PyObject_FastCallKeywords + 433
78 org.python.python 0x000000010d92d784 call_function + 420
79 org.python.python 0x000000010d92a88d _PyEval_EvalFrameDefault + 25181
80 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
81 org.python.python 0x000000010d92d8c2 call_function + 738
82 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
83 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
84 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
85 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
86 org.python.python 0x000000010d86e707 PyObject_Call + 135
87 org.python.python 0x000000010d92ab9e _PyEval_EvalFrameDefault + 25966
88 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
89 org.python.python 0x000000010d92d8c2 call_function + 738
90 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
91 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
92 org.python.python 0x000000010d92d8c2 call_function + 738
93 org.python.python 0x000000010d92a873 _PyEval_EvalFrameDefault + 25155
94 org.python.python 0x000000010d86ea10 function_code_fastcall + 128
95 org.python.python 0x000000010d86e004 _PyFunction_FastCallDict + 148
96 org.python.python 0x000000010d86f4bf _PyObject_Call_Prepend + 143
97 org.python.python 0x000000010d86e707 PyObject_Call + 135
98 org.python.python 0x000000010d9b5b77 t_bootstrap + 71
99 org.python.python 0x000000010d96c939 pythread_wrapper + 25
100 libsystem_pthread.dylib 0x00007fff733eed76 _pthread_start + 125
101 libsystem_pthread.dylib 0x00007fff733eb5d7 thread_start + 15
Thread 7 crashed with X86 Thread State (64-bit):
rax: 0x00007fc23e50d830 rbx: 0xffffffffffffffff rcx: 0x0000000148926000 rdx: 0x0000000131c66000
rdi: 0x000000000013dc10 rsi: 0xfffffffffffffff8 rbp: 0x0000700006b28490 rsp: 0x0000700006b27df0
r8: 0x0000000000000008 r9: 0x0000000000000008 r10: 0x0000000000000008 r11: 0x0000000000000001
r12: 0x000000013a6dd000 r13: 0x0000700006b282f8 r14: 0x00000001352d0760 r15: 0x000000000000880b
rip: 0x000000011b1596cf rfl: 0x0000000000010282 cr2: 0x000000013a6dcff8
Logical CPU: 4
Error Code: 0x00000006 (no mapping for user data read)
Trap Number: 14
Binary Images:
0x10d844000 - 0x10d844fff +org.python.python (3.7.4 - 3.7.4) <4B030EC4-815E-34B7-90E7-D0720C31E072> /Library/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
0x10d84d000 - 0x10da26fff +org.python.python (3.7.4, [c] 2001-2019 Python Software Foundation. - 3.7.4) <AC1AEBEB-FF5A-32AD-BAE0-C6A0BCA86B84> /Library/Frameworks/Python.framework/Versions/3.7/Python
0x10de68000 - 0x10de69fff +_heapq.cpython-37m-darwin.so (0) <E8B35F18-1B5A-3C9E-B1F4-0BE0432459A2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_heapq.cpython-37m-darwin.so
0x10deed000 - 0x10def0ff7 +zlib.cpython-37m-darwin.so (0) <993EF100-1498-3D6A-91FD-79558CAC8F13> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/zlib.cpython-37m-darwin.so
0x10df42000 - 0x10df43ff7 +_bz2.cpython-37m-darwin.so (0) <F89816AF-0BA9-3228-BAE7-54BA0D68EF67> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_bz2.cpython-37m-darwin.so
0x10df47000 - 0x10df77ff7 +_lzma.cpython-37m-darwin.so (0) <AEA78736-809A-3F3E-A2A3-BDA83B0ECBA8> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_lzma.cpython-37m-darwin.so
0x10df81000 - 0x10df81fff +grp.cpython-37m-darwin.so (0) <CF2821DC-6D7D-36C4-9F67-5D20E43D70B2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/grp.cpython-37m-darwin.so
0x10df93000 - 0x10df96ffb +_comb.cpython-37m-darwin.so (0) <007AB3F6-F95A-3A84-A311-9A47B603490E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_comb.cpython-37m-darwin.so
0x10df9b000 - 0x10df9bff7 +_raw_ecb.cpython-37m-darwin.so (???) <8F4B4796-E875-304A-84A1-1612D5965846> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ecb.cpython-37m-darwin.so
0x10e01d000 - 0x10e021fff +_struct.cpython-37m-darwin.so (0) <2379780F-4AB4-394B-B5AB-55A517D6627E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_struct.cpython-37m-darwin.so
0x10e02a000 - 0x10e02dff7 +binascii.cpython-37m-darwin.so (0) <58A5F4AD-285A-35E3-90C4-08A3D3D14BF2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/binascii.cpython-37m-darwin.so
0x10e0ba000 - 0x10e0bbff7 +_posixsubprocess.cpython-37m-darwin.so (0) <11920A4C-3AD4-3C87-95E5-418D30950610> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_posixsubprocess.cpython-37m-darwin.so
0x10e0bf000 - 0x10e0c2fff +select.cpython-37m-darwin.so (0) <473A1E84-EAC7-30DD-A0C0-111ECA9BC60A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/select.cpython-37m-darwin.so
0x10e0c8000 - 0x10e0ccfff +math.cpython-37m-darwin.so (0) <C780CA87-2A8D-342E-930E-7EDBB84B3896> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/math.cpython-37m-darwin.so
0x10e113000 - 0x10e121ff7 +_datetime.cpython-37m-darwin.so (0) <C1603837-F8C7-3FFF-8C6B-D527535D7535> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_datetime.cpython-37m-darwin.so
0x10e12d000 - 0x10e156ff7 +pyexpat.cpython-37m-darwin.so (0) <DFD21217-38D1-329A-844A-67778791E921> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/pyexpat.cpython-37m-darwin.so
0x10e1e9000 - 0x10e1ebff7 +_hashlib.cpython-37m-darwin.so (0) <A6066959-BCC0-3790-9FB2-8B8A9ECBF097> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_hashlib.cpython-37m-darwin.so
0x10e1f0000 - 0x10e248ff7 +libssl.1.1.dylib (0) <1DF55B16-0F3A-3620-A4C8-6CEDF39B9620> /Library/Frameworks/Python.framework/Versions/3.7/lib/libssl.1.1.dylib
0x10e271000 - 0x10e4871df +libcrypto.1.1.dylib (0) <34708DE8-CBA8-3112-91FA-3333E07F30DB> /Library/Frameworks/Python.framework/Versions/3.7/lib/libcrypto.1.1.dylib
0x10e517000 - 0x10e51cff7 +_blake2.cpython-37m-darwin.so (0) <5D4A9B1B-FE9F-34EA-BD75-7B3CDDBB7CD0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_blake2.cpython-37m-darwin.so
0x10e521000 - 0x10e531ff7 +_sha3.cpython-37m-darwin.so (0) <E32B9196-5FD3-38FF-BF4E-EF74519A0AFA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_sha3.cpython-37m-darwin.so
0x10e537000 - 0x10e537ff7 +_bisect.cpython-37m-darwin.so (0) <A4FCF31A-2AA6-3EAC-AF46-2F2D10EC1AB1> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_bisect.cpython-37m-darwin.so
0x10e53a000 - 0x10e53bff7 +_random.cpython-37m-darwin.so (0) <7E1DAB2E-F4F2-3DDD-BD85-C74BC8983933> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_random.cpython-37m-darwin.so
0x10e53f000 - 0x10e548ff7 +_socket.cpython-37m-darwin.so (0) <7B684803-C0A8-34D7-81CE-7A4EE7DEA614> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_socket.cpython-37m-darwin.so
0x10e615000 - 0x10e615ff7 +_opcode.cpython-37m-darwin.so (0) <11A650B3-FF7B-3DF1-81E2-A906553221C9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_opcode.cpython-37m-darwin.so
0x10e6c3000 - 0x10e6c4ff3 +_zeros.cpython-37m-darwin.so (0) <05F50EFF-5388-3F0E-8034-D9031383D3AA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_zeros.cpython-37m-darwin.so
0x10e6c7000 - 0x10e6c7ff7 +_raw_cbc.cpython-37m-darwin.so (???) <B161CC1C-8823-32C3-A77F-125C1F15F391> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_cbc.cpython-37m-darwin.so
0x10e814000 - 0x10e814ff3 +_api_implementation.cpython-37m-darwin.so (0) <7AD2BE44-57F1-385A-AD04-ECF361EFBF65> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/protobuf/internal/_api_implementation.cpython-37m-darwin.so
0x10e817000 - 0x10e817fff +_raw_cfb.cpython-37m-darwin.so (???) <D1F530FC-1F2F-3868-BF2C-6A3E1CA296E0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_cfb.cpython-37m-darwin.so
0x10e8db000 - 0x10e8e0ff3 +messagestream.cpython-37m-darwin.so (0) <892D9031-1B21-35AE-9E89-8684E88BE576> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/_lib/messagestream.cpython-37m-darwin.so
0x10e8e7000 - 0x10e8e7ff7 +_raw_ofb.cpython-37m-darwin.so (???) <F59104DC-4122-34B0-92E5-5A2989E14249> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ofb.cpython-37m-darwin.so
0x10e8ea000 - 0x10e8eaff7 +_strxor.cpython-37m-darwin.so (???) <3C58F5A3-8D98-33B2-814F-0EBBC5F20333> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Util/_strxor.cpython-37m-darwin.so
0x10e8ec000 - 0x10eb61fff +_multiarray_umath.cpython-37m-darwin.so (0) <671D7C13-F80F-39BB-AAAC-7812A00AF0AD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/core/_multiarray_umath.cpython-37m-darwin.so
0x10ec73000 - 0x112701797 +libopenblasp-r0.3.7.dev.dylib (0) <0E19F9FE-2367-3794-9260-55F4BB058EF2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libopenblasp-r0.3.7.dev.dylib
0x112945000 - 0x112a5cff7 +libgfortran.3.dylib (0) <9ABE5EDE-AD43-391A-9E54-866711FAC32A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libgfortran.3.dylib
0x112ac0000 - 0x112af6fff +libquadmath.0.dylib (0) <7FFA409F-FB04-3B64-BE9A-3E3A494C975E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libquadmath.0.dylib
0x112b05000 - 0x112b1aff7 +libgcc_s.1.dylib (0) <7C6D7CB7-82DB-3290-8181-07646FEA1F80> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/.dylibs/libgcc_s.1.dylib
0x118ba5000 - 0x118bb8ff7 +_pickle.cpython-37m-darwin.so (0) <9C74285E-75A9-33BD-8836-AE129AFA3A86> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_pickle.cpython-37m-darwin.so
0x118cc4000 - 0x118cc4fff +_cpuid_c.cpython-37m-darwin.so (???) <E61506B0-F069-3A2D-847B-4006A2DBD5BF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Util/_cpuid_c.cpython-37m-darwin.so
0x118d06000 - 0x118d13fff +_multiarray_tests.cpython-37m-darwin.so (0) <79FE98ED-E4E1-30CE-8345-D110F170574F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/core/_multiarray_tests.cpython-37m-darwin.so
0x118d23000 - 0x118d33fff +_ctypes.cpython-37m-darwin.so (0) <B0740DFD-2C92-3A81-9E85-B7CAA9F7EF67> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_ctypes.cpython-37m-darwin.so
0x118dc4000 - 0x118dc5ff7 +lapack_lite.cpython-37m-darwin.so (0) <69D4AA05-FED8-3329-97EF-5F1D0B0C7D4D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/linalg/lapack_lite.cpython-37m-darwin.so
0x118dc9000 - 0x118de2fff +_umath_linalg.cpython-37m-darwin.so (0) <F2C3E3AE-7A1D-3981-B31A-DC92F46EAE81> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/linalg/_umath_linalg.cpython-37m-darwin.so
0x118eb0000 - 0x118ef4ff7 +_decimal.cpython-37m-darwin.so (0) <F035ADB0-3946-309B-8C35-E789BD3A7696> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_decimal.cpython-37m-darwin.so
0x118f53000 - 0x118f62ff7 +pocketfft_internal.cpython-37m-darwin.so (0) <190AE76A-4D5F-3035-A1FC-6205292B1543> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/fft/pocketfft_internal.cpython-37m-darwin.so
0x118fa6000 - 0x119018fff +mtrand.cpython-37m-darwin.so (0) <7A0F0AE5-72B5-3D7B-B3B8-475F664F9AFA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/mtrand.cpython-37m-darwin.so
0x119069000 - 0x1190a1fff +common.cpython-37m-darwin.so (0) <7A31D2A9-7507-3A37-B23B-C63CD062B806> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/common.cpython-37m-darwin.so
0x1190b6000 - 0x119114ff7 +bounded_integers.cpython-37m-darwin.so (0) <8A5547BC-C82A-3E41-8320-A788E2DC1801> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/bounded_integers.cpython-37m-darwin.so
0x119136000 - 0x11914aff7 +mt19937.cpython-37m-darwin.so (0) <BC393547-41A0-3F0F-9652-201F8B610385> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/mt19937.cpython-37m-darwin.so
0x119156000 - 0x119175ff7 +bit_generator.cpython-37m-darwin.so (0) <9AF84E7A-4923-34C9-9430-788D70CCB66B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/bit_generator.cpython-37m-darwin.so
0x119190000 - 0x1191b2ff7 +entropy.cpython-37m-darwin.so (0) <A97081D3-BB5C-3BD4-962E-5B2A0C72FD26> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/entropy.cpython-37m-darwin.so
0x1191c8000 - 0x1191d5ff7 +philox.cpython-37m-darwin.so (0) <488F375C-7017-38DF-BA7B-74AF2913019F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/philox.cpython-37m-darwin.so
0x1191e0000 - 0x1191ebfff +pcg64.cpython-37m-darwin.so (0) <BF7967AA-BF0B-3BF9-8EAE-AAF5A73302FB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/pcg64.cpython-37m-darwin.so
0x1191f6000 - 0x1191feff7 +sfc64.cpython-37m-darwin.so (0) <6CB1F36F-C4FC-3CE7-B5BE-0FA005F65E2C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/sfc64.cpython-37m-darwin.so
0x119208000 - 0x11928bfff +generator.cpython-37m-darwin.so (0) <868CE861-C95A-383B-935D-942F28314F69> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy/random/generator.cpython-37m-darwin.so
0x11933d000 - 0x119348ff7 +_flinalg.cpython-37m-darwin.so (0) <9C1F46F8-2DA2-3943-8DBA-D6BF8932E0B7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_flinalg.cpython-37m-darwin.so
0x1194d2000 - 0x119508ffb +conversion.cpython-37m-darwin.so (0) <7E026496-33EB-37FB-B2B1-9E59112C1202> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/conversion.cpython-37m-darwin.so
0x119523000 - 0x119551fff +c_timestamp.cpython-37m-darwin.so (0) <6103BAF4-3AF5-3352-B179-4DE0A932BFF1> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/c_timestamp.cpython-37m-darwin.so
0x1195ac000 - 0x1195cbfff +nattype.cpython-37m-darwin.so (0) <2443C3F9-7228-3839-B38B-B0FDCD9B921B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/nattype.cpython-37m-darwin.so
0x1195e5000 - 0x1195ecff7 +np_datetime.cpython-37m-darwin.so (0) <746367F4-693A-3E0E-B820-2114D5BC93E2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/np_datetime.cpython-37m-darwin.so
0x1195f4000 - 0x11961aff3 +timezones.cpython-37m-darwin.so (0) <286416F5-248C-3583-A79A-37E3E95770D2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/timezones.cpython-37m-darwin.so
0x119671000 - 0x1196b1ff7 +tzconversion.cpython-37m-darwin.so (0) <1CCBA52F-B8E6-32C2-BF26-7D6A4365B415> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/tzconversion.cpython-37m-darwin.so
0x1196cb000 - 0x119722ffb +timedeltas.cpython-37m-darwin.so (0) <9474E801-68EE-3035-87C0-FAB39EFFDC50> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/timedeltas.cpython-37m-darwin.so
0x119750000 - 0x119799ffb +offsets.cpython-37m-darwin.so (0) <F8F7176B-2E00-347F-B839-BF92D81E0CA2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/offsets.cpython-37m-darwin.so
0x1197c8000 - 0x1197d0fff +ccalendar.cpython-37m-darwin.so (0) <39FE416F-D8BE-3B91-8324-09787BBDFFE5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/ccalendar.cpython-37m-darwin.so
0x11981b000 - 0x119868ff3 +strptime.cpython-37m-darwin.so (0) <4F4ED8D8-D2C7-3B18-91AE-6C592082E58A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/strptime.cpython-37m-darwin.so
0x1198d0000 - 0x1198fbff3 +fields.cpython-37m-darwin.so (0) <58EF15EB-CAF3-3AF8-BAFF-F6DD9456E8EE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/fields.cpython-37m-darwin.so
0x119915000 - 0x119963ff3 +parsing.cpython-37m-darwin.so (0) <F1038D43-02A4-350A-93A7-110FBDC5EEAA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/parsing.cpython-37m-darwin.so
0x11998d000 - 0x1199e1fff +period.cpython-37m-darwin.so (0) <509F2D3F-95A5-39CE-A910-B67DE9FC8930> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/period.cpython-37m-darwin.so
0x119a4b000 - 0x119a5efff +frequencies.cpython-37m-darwin.so (0) <E8080AA8-3BEC-3D95-AE57-52DDE03AAC30> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/frequencies.cpython-37m-darwin.so
0x119a6f000 - 0x119aa6ffb +timestamps.cpython-37m-darwin.so (0) <E9B5D2EF-6108-3BD1-9E5F-911D411D13F3> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/timestamps.cpython-37m-darwin.so
0x119ace000 - 0x119af9ff3 +resolution.cpython-37m-darwin.so (0) <616CA3A1-5E97-3E9B-8544-A58C13CA8FB6> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslibs/resolution.cpython-37m-darwin.so
0x119b17000 - 0x119b8cfff +hashtable.cpython-37m-darwin.so (0) <6515F28C-9FA4-3D61-89B6-E329E4C532A9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/hashtable.cpython-37m-darwin.so
0x119bbd000 - 0x119bcaffb +missing.cpython-37m-darwin.so (0) <D129B9BF-A6A5-3FE3-9FEF-867119DCCFD5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/missing.cpython-37m-darwin.so
0x119bd4000 - 0x119c3dff3 +lib.cpython-37m-darwin.so (0) <A1B28D8E-0A89-39F0-A4D3-FDEFAE6464E3> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/lib.cpython-37m-darwin.so
0x119cb6000 - 0x119cf0fff +tslib.cpython-37m-darwin.so (0) <B1872F61-F555-3139-AAE2-06F77C01DA82> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/tslib.cpython-37m-darwin.so
0x119d0c000 - 0x119e90fff +algos.cpython-37m-darwin.so (0) <EC939FA9-67B7-3D9F-8DDD-3E552DEA7F01> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/algos.cpython-37m-darwin.so
0x119f27000 - 0x11a0dcff7 +interval.cpython-37m-darwin.so (0) <60B34032-C2E3-3818-9D5C-D34771EEBC32> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/interval.cpython-37m-darwin.so
0x11a193000 - 0x11a19bfff +properties.cpython-37m-darwin.so (0) <BEE72926-AD95-3673-B647-C9A705E770A4> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/properties.cpython-37m-darwin.so
0x11a1e4000 - 0x11a200ff7 +hashing.cpython-37m-darwin.so (0) <88438F20-9EA1-31DB-BDF0-5C93BB68E102> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/hashing.cpython-37m-darwin.so
0x11a213000 - 0x11a238fff +ops.cpython-37m-darwin.so (0) <04703BD6-9898-308A-AC6F-29C8BE32156F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/ops.cpython-37m-darwin.so
0x11a3cd000 - 0x11a453ff3 +index.cpython-37m-darwin.so (0) <20F36BD4-9F1D-314B-8392-0D63CC742C43> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/index.cpython-37m-darwin.so
0x11a483000 - 0x11a6c4ff3 +join.cpython-37m-darwin.so (0) <344FAA02-156E-3C95-845E-E7619E27314E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/join.cpython-37m-darwin.so
0x11a845000 - 0x11a865fff +_elementpath.cpython-37m-darwin.so (???) <429F29F9-50B3-33CC-9E45-AEDA036695FB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/lxml/_elementpath.cpython-37m-darwin.so
0x11a900000 - 0x11a901fff +_check_build.cpython-37m-darwin.so (0) <B87447C3-40DD-37C4-9E59-11EC294212EE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/__check_build/_check_build.cpython-37m-darwin.so
0x11a906000 - 0x11a9c3ffb +sparse.cpython-37m-darwin.so (0) <526BE788-0E0D-3680-9B30-20D25684ECC5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/sparse.cpython-37m-darwin.so
0x11aa4b000 - 0x11aa4cff7 +_raw_ctr.cpython-37m-darwin.so (???) <212D5173-BF56-324C-BF96-75DA28AD6D41> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ctr.cpython-37m-darwin.so
0x11aa4f000 - 0x11aa4fff7 +_BLAKE2s.cpython-37m-darwin.so (???) <10F5EA93-0EB0-3E8A-A621-E0F5BEB14C86> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_BLAKE2s.cpython-37m-darwin.so
0x11aa52000 - 0x11aa54ff7 +_SHA1.cpython-37m-darwin.so (???) <29C10BF6-3352-3F5E-BB0B-663D9BA3B34E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_SHA1.cpython-37m-darwin.so
0x11aa57000 - 0x11aa5efff +minpack2.cpython-37m-darwin.so (0) <B547A37A-E772-3C2D-A07D-F944F3D89961> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/minpack2.cpython-37m-darwin.so
0x11aae4000 - 0x11aae9fff +_json.cpython-37m-darwin.so (0) <58573D55-4505-383C-89CE-7B16ED7981AD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_json.cpython-37m-darwin.so
0x11abee000 - 0x11abf3ff3 +indexing.cpython-37m-darwin.so (0) <3C9B3910-F390-33CB-B1A6-571B9D9AEF51> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/indexing.cpython-37m-darwin.so
0x11ac3a000 - 0x11ac67ffb +internals.cpython-37m-darwin.so (0) <795FFB1B-2E84-3251-A206-06D464DD4426> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/internals.cpython-37m-darwin.so
0x11acc1000 - 0x11acc2fff +_MD5.cpython-37m-darwin.so (???) <09E8FD08-E36A-35F9-807A-9EC5871408DE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_MD5.cpython-37m-darwin.so
0x11acc5000 - 0x11adc2fff +unicodedata.cpython-37m-darwin.so (0) <B4AE629C-6564-3E2E-9A6E-AE586EE0AD79> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/unicodedata.cpython-37m-darwin.so
0x11adc8000 - 0x11adcbfff +_csv.cpython-37m-darwin.so (0) <F629A3FE-5724-37C1-8940-6E5C172BFD77> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_csv.cpython-37m-darwin.so
0x11ae51000 - 0x11ae5eff7 +_ssl.cpython-37m-darwin.so (0) <D1740549-C698-31F9-95C7-88A38F5385F5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_ssl.cpython-37m-darwin.so
0x11aeaf000 - 0x11aeb1ff7 +mmap.cpython-37m-darwin.so (0) <BA9E74DF-BF4B-34B0-BC25-AF2E4712468A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/mmap.cpython-37m-darwin.so
0x11aef5000 - 0x11aef5ff7 +_scproxy.cpython-37m-darwin.so (0) <1C12C693-374D-3CDA-8235-D20E4F60F2D7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_scproxy.cpython-37m-darwin.so
0x11aef8000 - 0x11af25fff +reshape.cpython-37m-darwin.so (0) <3EFC5C55-6B9E-38BB-9CEA-F1AD738C57E2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/reshape.cpython-37m-darwin.so
0x11afbe000 - 0x11b04eff7 +window.cpython-37m-darwin.so (0) <FD91798B-305C-395E-98AD-DFD6E27FBCFB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/window.cpython-37m-darwin.so
0x11b07c000 - 0x11b08bff7 +skiplist.cpython-37m-darwin.so (0) <1C8A7441-A005-31E9-B1DC-2E12A64BE530> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/skiplist.cpython-37m-darwin.so
0x11b0d5000 - 0x11b1b3ff7 +groupby.cpython-37m-darwin.so (0) <38D47B27-F8F5-3209-A258-40AF5104539B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/groupby.cpython-37m-darwin.so
0x11b223000 - 0x11b266ff3 +reduction.cpython-37m-darwin.so (0) <1FDC291C-62F5-34E4-A080-528C5372305E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/reduction.cpython-37m-darwin.so
0x11b346000 - 0x11b3b1ff3 +parsers.cpython-37m-darwin.so (0) <D436D433-AFB0-30CB-94AC-EE18B760AA2C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/parsers.cpython-37m-darwin.so
0x11b41d000 - 0x11b42bfff +json.cpython-37m-darwin.so (0) <5457E458-ABB1-3B38-9B4D-0DF00198FA6A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/json.cpython-37m-darwin.so
0x11b435000 - 0x11b455ff3 +writers.cpython-37m-darwin.so (0) <63C148BE-23C9-35A1-B4CE-915F2DBAF243> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/writers.cpython-37m-darwin.so
0x11b46c000 - 0x11b46cffb +_move.cpython-37m-darwin.so (0) <9F92A2B0-79E4-3647-901D-37ECBF8387FE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/util/_move.cpython-37m-darwin.so
0x11b4af000 - 0x11b4baffb +_packer.cpython-37m-darwin.so (0) <502E1D7D-FF86-386D-A102-CDEC1D2A4614> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/io/msgpack/_packer.cpython-37m-darwin.so
0x11b4c5000 - 0x11b4d4ff7 +_unpacker.cpython-37m-darwin.so (0) <95ECB8B7-7DD6-3AB7-8C88-7A58EB442678> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/io/msgpack/_unpacker.cpython-37m-darwin.so
0x11b5e2000 - 0x11b5f1ff7 +testing.cpython-37m-darwin.so (0) <CCEF6A15-EB6A-39B1-B296-D271DBEC2F6E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pandas/_libs/testing.cpython-37m-darwin.so
0x11b87a000 - 0x11b87bff3 +cprocessors.cpython-37m-darwin.so (0) <D739212E-3C17-39B1-B068-EE1948A71018> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sqlalchemy/cprocessors.cpython-37m-darwin.so
0x11b97f000 - 0x11b97ffff +cutils.cpython-37m-darwin.so (0) <7FC0CA82-C081-30D2-896C-E8E780682181> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sqlalchemy/cutils.cpython-37m-darwin.so
0x11ba02000 - 0x11ba03ff7 +cresultproxy.cpython-37m-darwin.so (0) <90BBE3F6-E1A5-3E3D-8FEF-7095622F11E2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sqlalchemy/cresultproxy.cpython-37m-darwin.so
0x11bd47000 - 0x11bd48ff7 +_queue.cpython-37m-darwin.so (0) <B9D80A7C-A744-3A24-AA10-1CEF3CFFD022> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_queue.cpython-37m-darwin.so
0x11bdcc000 - 0x11bdccfff +_uuid.cpython-37m-darwin.so (0) <4283C23E-E755-3642-9450-F25DED17AE4D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_uuid.cpython-37m-darwin.so
0x11be6e000 - 0x11befc93f dyld (732.8) <42C11B81-6928-369F-B03E-D57355572700> /usr/lib/dyld
0x11bfef000 - 0x11c2d9ff3 +cygrpc.cpython-37m-darwin.so (0) <60A0DCC9-ACBA-33CA-9005-DB149C9B8520> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/grpc/_cython/cygrpc.cpython-37m-darwin.so
0x11c73a000 - 0x11c898ff7 +_message.cpython-37m-darwin.so (0) <9E0844FB-B4A0-3A08-9583-7EE6C3431BB2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/google/protobuf/pyext/_message.cpython-37m-darwin.so
0x11cda2000 - 0x11cda4ff3 +lgamma.cpython-37m-darwin.so (0) <B68C97ED-5D34-3885-B77F-0799388D8581> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/lgamma.cpython-37m-darwin.so
0x11cdaa000 - 0x11cdafff7 +array.cpython-37m-darwin.so (0) <7934FE3A-C258-3F4F-AD15-47D5BE9FCE15> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/array.cpython-37m-darwin.so
0x11cdb9000 - 0x11cdb9fff +_Salsa20.cpython-37m-darwin.so (???) <2C457652-7378-3C11-9BF6-EEF20A1ECC2D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_Salsa20.cpython-37m-darwin.so
0x11ce7d000 - 0x11ce7ffff +_SHA256.cpython-37m-darwin.so (???) <DADFF09A-A82C-31DB-AA28-58613525E993> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_SHA256.cpython-37m-darwin.so
0x11ce82000 - 0x11ce83ff7 +_scrypt.cpython-37m-darwin.so (???) <CDA31FE5-A642-3D51-A95E-A5274A12CF21> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Protocol/_scrypt.cpython-37m-darwin.so
0x11d106000 - 0x11d112ff7 +_ccallback_c.cpython-37m-darwin.so (0) <56789943-E473-3E97-B057-4D00CD59C800> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/_lib/_ccallback_c.cpython-37m-darwin.so
0x11d11e000 - 0x11d11efff +_ghash_portable.cpython-37m-darwin.so (???) <9308BC75-0900-3678-B259-8A8B8B96CC86> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_ghash_portable.cpython-37m-darwin.so
0x11d1a1000 - 0x11d1a8fff +_elementtree.cpython-37m-darwin.so (0) <BCBD7BDA-D6E4-3986-AE4F-BABD7C9F1B29> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_elementtree.cpython-37m-darwin.so
0x11d1f2000 - 0x11d1fbffb +moduleTNC.cpython-37m-darwin.so (0) <2514C81C-13FD-3A19-8658-0F5A796846E0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/moduleTNC.cpython-37m-darwin.so
0x11e5e5000 - 0x11e5e5fff +_ghash_clmul.cpython-37m-darwin.so (???) <245E6E62-37E7-37E1-8EF5-02F1D1F27BAA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Hash/_ghash_clmul.cpython-37m-darwin.so
0x11e5e8000 - 0x11e5e9ff7 +_raw_ocb.cpython-37m-darwin.so (???) <559E6F1E-78A8-3B46-87AA-B1A16C69643A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_ocb.cpython-37m-darwin.so
0x11e5ec000 - 0x11e5ecfff +_ARC4.cpython-37m-darwin.so (???) <9A105AD4-8C53-3276-8F5E-A60AE4D07299> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_ARC4.cpython-37m-darwin.so
0x11e5ef000 - 0x11e5fbff3 +murmurhash.cpython-37m-darwin.so (0) <8A36719F-1606-3E51-9B60-4A105A38E8F9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/murmurhash.cpython-37m-darwin.so
0x11e61b000 - 0x11e61cff7 +_multiprocessing.cpython-37m-darwin.so (0) <31A2882A-FE2F-3243-BB8A-D24B0B99DD41> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_multiprocessing.cpython-37m-darwin.so
0x11eb23000 - 0x11eb6affb +libomp.dylib (0) <BC7C4D7D-BD45-3672-8D71-70A964A65AC1> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/.dylibs/libomp.dylib
0x11eb97000 - 0x11eb9bffb +mio_utils.cpython-37m-darwin.so (0) <88C81605-0DFE-389B-AD34-67A521FADBD0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/io/matlab/mio_utils.cpython-37m-darwin.so
0x11eba2000 - 0x11eba3fff +_speedups.cpython-37m-darwin.so (???) <E9B73517-643A-3FD1-8B18-600595CA2B65> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/markupsafe/_speedups.cpython-37m-darwin.so
0x11f1e6000 - 0x11f200ff3 +_tools.cpython-37m-darwin.so (0) <9DAA9185-9847-33AB-AEFF-8CC3D71E6E2D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_tools.cpython-37m-darwin.so
0x11f2a5000 - 0x11f2aaff7 +_datadir.cpython-37m-darwin.so (???) <61E3FE1E-4E30-3A18-933B-49A79C824B33> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_datadir.cpython-37m-darwin.so
0x11fb7f000 - 0x11ff80ff7 +_sparsetools.cpython-37m-darwin.so (0) <9B6DC85E-8A9B-38E6-8A0B-45476F0D491C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/_sparsetools.cpython-37m-darwin.so
0x12009e000 - 0x120101ffb +_csparsetools.cpython-37m-darwin.so (0) <4843EB5F-2CF8-3625-A14C-C2BB1C00AB43> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/_csparsetools.cpython-37m-darwin.so
0x12012d000 - 0x12017dff3 +_shortest_path.cpython-37m-darwin.so (0) <DDA3A585-1D73-39C8-9261-B78F2A102F1D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_shortest_path.cpython-37m-darwin.so
0x1201e6000 - 0x1201ffff7 +_traversal.cpython-37m-darwin.so (0) <C10079CB-2153-3AE6-AA41-802DF15E48CA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_traversal.cpython-37m-darwin.so
0x12020f000 - 0x12022cffb +_min_spanning_tree.cpython-37m-darwin.so (0) <DE0724C0-F449-3C3C-8D5E-3FFBBF8964A7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_min_spanning_tree.cpython-37m-darwin.so
0x120241000 - 0x120276fff +_reordering.cpython-37m-darwin.so (0) <7F605572-00DF-31A8-80EF-6FB38BEA5B82> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/csgraph/_reordering.cpython-37m-darwin.so
0x120297000 - 0x12031ffff +ckdtree.cpython-37m-darwin.so (0) <593DFC90-CA73-3A14-9DE3-7AD597879471> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/ckdtree.cpython-37m-darwin.so
0x120362000 - 0x120426ff7 +qhull.cpython-37m-darwin.so (0) <AF26961F-68AA-3833-A1C7-257D526EEA9D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/qhull.cpython-37m-darwin.so
0x12046c000 - 0x123efa797 +libopenblasp-r0.3.7.dev.dylib (0) <0E19F9FE-2367-3794-9260-55F4BB058EF2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libopenblasp-r0.3.7.dev.dylib
0x12413e000 - 0x124255ff7 +libgfortran.3.dylib (0) <9ABE5EDE-AD43-391A-9E54-866711FAC32A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libgfortran.3.dylib
0x1242b9000 - 0x1242effff +libquadmath.0.dylib (0) <7FFA409F-FB04-3B64-BE9A-3E3A494C975E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libquadmath.0.dylib
0x1242fe000 - 0x124313ff7 +libgcc_s.1.dylib (0) <7C6D7CB7-82DB-3290-8181-07646FEA1F80> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/.dylibs/libgcc_s.1.dylib
0x12a31e000 - 0x12a338ff3 +_voronoi.cpython-37m-darwin.so (0) <C3ADEAE2-5485-3E7C-AC5F-C0FE5EF61FC5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/_voronoi.cpython-37m-darwin.so
0x12a34b000 - 0x12a358fff +_distance_wrap.cpython-37m-darwin.so (0) <A1F90DCF-4476-3B3D-9248-959CE54C3521> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/_distance_wrap.cpython-37m-darwin.so
0x12a362000 - 0x12a37dff3 +_hausdorff.cpython-37m-darwin.so (0) <BBF0B71A-E24B-30CC-B0CC-CEBECF2A84B0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/spatial/_hausdorff.cpython-37m-darwin.so
0x12a3d0000 - 0x12a429ff7 +_fblas.cpython-37m-darwin.so (0) <6BA9FD7D-004E-3374-90D4-C959B8F759EE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_fblas.cpython-37m-darwin.so
0x12a461000 - 0x12a532fff +_flapack.cpython-37m-darwin.so (0) <9730DD4E-02D3-37B7-9277-22E7080D1CBE> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_flapack.cpython-37m-darwin.so
0x12a602000 - 0x12a626ff7 +_solve_toeplitz.cpython-37m-darwin.so (0) <D4FF436D-39FD-3B0D-B315-CB0027952B4C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_solve_toeplitz.cpython-37m-darwin.so
0x12a63f000 - 0x12a673ff3 +_decomp_update.cpython-37m-darwin.so (0) <E4FBDC23-F6F0-3743-920A-3AA7A21D8069> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/_decomp_update.cpython-37m-darwin.so
0x12a6c9000 - 0x12a6eeff7 +cython_blas.cpython-37m-darwin.so (0) <7B73EDCD-02F8-3F72-91C8-3D57187F3476> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/cython_blas.cpython-37m-darwin.so
0x12a70b000 - 0x12a777ff7 +cython_lapack.cpython-37m-darwin.so (0) <E8047A89-0FFC-3019-A549-6242E04C5D04> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/linalg/cython_lapack.cpython-37m-darwin.so
0x12a7d5000 - 0x12a944ff7 +_ufuncs.cpython-37m-darwin.so (0) <231123C7-D03E-39E7-A40C-B2A4BDB656EB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_ufuncs.cpython-37m-darwin.so
0x12a9ba000 - 0x12a9ceff3 +_ufuncs_cxx.cpython-37m-darwin.so (0) <54656921-42D2-3BFC-B1A3-D89B37345272> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_ufuncs_cxx.cpython-37m-darwin.so
0x12aa2d000 - 0x12aaeafef +specfun.cpython-37m-darwin.so (0) <B4BF7EF8-2C94-3180-BE22-7D05DA004D35> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/specfun.cpython-37m-darwin.so
0x12aafd000 - 0x12ab0aff3 +_ellip_harm_2.cpython-37m-darwin.so (0) <FA8A764D-D157-3585-B135-EF1296C9142E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/special/_ellip_harm_2.cpython-37m-darwin.so
0x12ab55000 - 0x12ab66ff7 +_vq.cpython-37m-darwin.so (0) <711944FB-2A8B-3F72-9BCB-51CB1DCE33E5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/cluster/_vq.cpython-37m-darwin.so
0x12ab72000 - 0x12abaeff7 +_hierarchy.cpython-37m-darwin.so (0) <39E0A236-8538-3948-B428-360A12A786AA> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/cluster/_hierarchy.cpython-37m-darwin.so
0x12abd4000 - 0x12abffff3 +_optimal_leaf_ordering.cpython-37m-darwin.so (0) <A00A07F0-D477-34F4-870F-E1FC5BDDE84E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/cluster/_optimal_leaf_ordering.cpython-37m-darwin.so
0x12ad1f000 - 0x12ad52fff +_trlib.cpython-37m-darwin.so (0) <EDC680B5-E488-3DDE-AB77-40F03A2A9339> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_trlib/_trlib.cpython-37m-darwin.so
0x12ad6e000 - 0x12ad9bff7 +_iterative.cpython-37m-darwin.so (0) <D87A7BA8-D6D1-3C20-A8E3-6C271354B114> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/linalg/isolve/_iterative.cpython-37m-darwin.so
0x12adf3000 - 0x12ae40fff +_superlu.cpython-37m-darwin.so (0) <7B89A303-CDC4-31AE-BEA4-C7735BE91E78> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/linalg/dsolve/_superlu.cpython-37m-darwin.so
0x12ae57000 - 0x12aedcfff +_arpack.cpython-37m-darwin.so (0) <B00F585E-589F-3E6B-9065-8304F602AD6E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/sparse/linalg/eigen/arpack/_arpack.cpython-37m-darwin.so
0x12af3c000 - 0x12af58ffb +_group_columns.cpython-37m-darwin.so (0) <7B6FB61C-76E1-34E3-A5B4-1E0B7095D645> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_group_columns.cpython-37m-darwin.so
0x12afab000 - 0x12afc7ff7 +_lbfgsb.cpython-37m-darwin.so (0) <62513E45-54E0-3341-AAEF-7E2BF7EA3691> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_lbfgsb.cpython-37m-darwin.so
0x12afcd000 - 0x12afe9fff +_cobyla.cpython-37m-darwin.so (0) <7CDE2C8B-C8AA-3695-8F9F-BA00A0191866> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_cobyla.cpython-37m-darwin.so
0x12afee000 - 0x12b00bff7 +_slsqp.cpython-37m-darwin.so (0) <EDB0BB9F-0CEC-3B8E-A642-A312C2F90FAF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_slsqp.cpython-37m-darwin.so
0x12b010000 - 0x12b02dff7 +_minpack.cpython-37m-darwin.so (0) <6CFDC72D-829F-3920-8C24-AB25E2FF2B58> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_minpack.cpython-37m-darwin.so
0x12b032000 - 0x12b04bfff +givens_elimination.cpython-37m-darwin.so (0) <A884AB38-43EF-31C1-8DA2-E1163ACD577D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_lsq/givens_elimination.cpython-37m-darwin.so
0x12b09c000 - 0x12b0a4ff7 +_nnls.cpython-37m-darwin.so (0) <D61A8ED9-C639-380C-BE82-E4338D8EBAA0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_nnls.cpython-37m-darwin.so
0x12b0e9000 - 0x12b116ffb +_bglu_dense.cpython-37m-darwin.so (0) <B34FDC8A-C379-3C7A-999B-35722F6ADAE6> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_bglu_dense.cpython-37m-darwin.so
0x12b174000 - 0x12b189fc7 +_odepack.cpython-37m-darwin.so (0) <01715935-1352-3AE2-98D3-16CB731655E9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/_odepack.cpython-37m-darwin.so
0x12b18e000 - 0x12b1a5fd7 +_quadpack.cpython-37m-darwin.so (0) <F1217CB4-76B4-3DC8-A382-02790B7D8E22> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/_quadpack.cpython-37m-darwin.so
0x12b1ab000 - 0x12b1e1ff7 +vode.cpython-37m-darwin.so (0) <8264B1DF-E9B8-38B1-B9A8-F3806142ACBF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/vode.cpython-37m-darwin.so
0x12b1e9000 - 0x12b200fff +_dop.cpython-37m-darwin.so (0) <C4BBF863-FA3E-35BF-8092-14C10FEDE39D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/_dop.cpython-37m-darwin.so
0x12b207000 - 0x12b21efdf +lsoda.cpython-37m-darwin.so (0) <E97962D2-0742-3A3A-9395-35500BED7672> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/integrate/lsoda.cpython-37m-darwin.so
0x12b376000 - 0x12b3aaff7 +_fitpack.cpython-37m-darwin.so (0) <1EB60BA3-DFC5-3970-A052-DDAEC52BBC7B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/_fitpack.cpython-37m-darwin.so
0x12b3b1000 - 0x12b40dfff +dfitpack.cpython-37m-darwin.so (0) <6DDDE435-2622-3511-9598-E414C1076614> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/dfitpack.cpython-37m-darwin.so
0x12b41e000 - 0x12b448fff +_bspl.cpython-37m-darwin.so (0) <6301FAAE-E019-3ED9-A7E8-396550ABFD10> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/_bspl.cpython-37m-darwin.so
0x12b463000 - 0x12b49ffff +_ppoly.cpython-37m-darwin.so (0) <D68A7950-D74D-3DCE-A92A-E44427175F87> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/_ppoly.cpython-37m-darwin.so
0x12b4fc000 - 0x12b538ff7 +interpnd.cpython-37m-darwin.so (0) <A202C753-3B48-3192-8172-939D328E6D63> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/interpolate/interpnd.cpython-37m-darwin.so
0x12b55a000 - 0x12b59eff7 +_stats.cpython-37m-darwin.so (0) <364241A1-BF01-370A-88CA-42D37D88FBAB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/stats/_stats.cpython-37m-darwin.so
0x12b6c3000 - 0x12b6c8ff7 +_raw_aes.cpython-37m-darwin.so (???) <1F862189-D7BD-34FC-8218-C98348A3B57D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_aes.cpython-37m-darwin.so
0x12b6cb000 - 0x12b6ccfff +_raw_aesni.cpython-37m-darwin.so (???) <D90003E6-B955-3F3C-ACD5-E3406613E935> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/Crypto/Cipher/_raw_aesni.cpython-37m-darwin.so
0x12b6cf000 - 0x12b6cfff7 +_contextvars.cpython-37m-darwin.so (0) <BFABAB06-4010-3C23-9E3D-BF0705E87D09> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_contextvars.cpython-37m-darwin.so
0x12b6d2000 - 0x12b6d3fff +fcntl.cpython-37m-darwin.so (0) <10868A3A-7663-33DC-B405-8F0BEE4DAA6A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/fcntl.cpython-37m-darwin.so
0x12b758000 - 0x12b764ff7 +statlib.cpython-37m-darwin.so (0) <8F2CFCAC-47D6-372E-9ABF-3D4424DB4C0B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/stats/statlib.cpython-37m-darwin.so
0x12b769000 - 0x12b77bfef +mvn.cpython-37m-darwin.so (0) <26820C7D-BBEC-31F9-8627-1E8D07FF5906> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/stats/mvn.cpython-37m-darwin.so
0x12b8f9000 - 0x12bdeffff +libproj.15.dylib (0) <D3CA33A2-7C0E-32EF-9028-DFD61B4D51B0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/.dylibs/libproj.15.dylib
0x12bf49000 - 0x12c06bff3 +libsqlite3.0.dylib (0) <C34BBD4D-8251-3D89-B334-1D94033A93EB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/.dylibs/libsqlite3.0.dylib
0x12c093000 - 0x12c0a2ff7 +_list.cpython-37m-darwin.so (???) <2DC6661F-21A8-399A-BE13-86683E1129DD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_list.cpython-37m-darwin.so
0x12c0f0000 - 0x12c128fff +_crs.cpython-37m-darwin.so (???) <D9EF9403-F5CB-3F58-A496-EE28064D4C0D> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_crs.cpython-37m-darwin.so
0x12c159000 - 0x12c163fff +_geod.cpython-37m-darwin.so (???) <5FA2FBB9-C364-3142-BC54-57B8FDB815FB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_geod.cpython-37m-darwin.so
0x12c16d000 - 0x12c177ff7 +_proj.cpython-37m-darwin.so (???) <4578E36F-5E47-3AF2-B97B-4ADF5FEC37EF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_proj.cpython-37m-darwin.so
0x12c182000 - 0x12c19aff7 +_transformer.cpython-37m-darwin.so (???) <25B17D5E-2BCB-3F1D-BFAC-8EB5ABFBA9C3> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/_transformer.cpython-37m-darwin.so
0x12c22f000 - 0x12c24eff3 +libgeos_c.1.dylib (0) <7A4B8EDB-A092-3095-B708-D7F261F7C5F5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/shapely/.dylibs/libgeos_c.1.dylib
0x12c260000 - 0x12c333ffb +libgeos-3.6.2.dylib (0) <60DF366F-25E3-30C4-9682-E393C9A21B83> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/shapely/.dylibs/libgeos-3.6.2.dylib
0x12c450000 - 0x12c46aff3 +_speedups.cpython-37m-darwin.so (0) <E549A48D-033E-3251-9EEE-2E00E61423D9> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/shapely/speedups/_speedups.cpython-37m-darwin.so
0x12c4b9000 - 0x12c519ff7 +ogrext.cpython-37m-darwin.so (0) <C16F1087-E476-3F22-A4A3-F42013983D0B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/ogrext.cpython-37m-darwin.so
0x12c550000 - 0x12dac0ffb +libgdal.20.dylib (0) <B4C09B26-10D8-367F-B05E-D132FD5A43D5> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libgdal.20.dylib
0x12df74000 - 0x12dfd5ffb +libproj.12.dylib (0) <6EB36E24-CDD5-358F-AB3B-B6E657CB8935> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libproj.12.dylib
0x12dfea000 - 0x12dff3ffb +libjson-c.2.dylib (0) <310BF741-A36A-3B81-BF65-F686C7E07ED0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libjson-c.2.dylib
0x12dff8000 - 0x12e016fff +libgeos_c.1.dylib (0) <18817ADA-5E51-3124-BC36-77EE8827876B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libgeos_c.1.dylib
0x12e029000 - 0x12e14bff3 +libsqlite3.0.dylib (0) <6C3DF904-B8C3-3A42-A9DE-48D0DBABB703> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libsqlite3.0.dylib
0x12e173000 - 0x12e1baffb +libopenjp2.2.3.0.dylib (0) <BD0F539C-FCA3-3846-A7F5-4C8FB9287E27> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libopenjp2.2.3.0.dylib
0x12e1c3000 - 0x12e290ffb +libnetcdf.11.dylib (0) <4DF67557-E4CC-3338-A523-AC3E0B7CF686> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libnetcdf.11.dylib
0x13130b000 - 0x13135cff3 +libjpeg.9.dylib (0) <564E6966-6C6D-3A9C-8C54-0187163BA378> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libjpeg.9.dylib
0x131364000 - 0x1313eefff +libtiff.5.dylib (0) <035CAD7E-6D4C-3329-9065-C655E068A9B2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libtiff.5.dylib
0x1313ff000 - 0x13143bff7 +libpng16.16.dylib (0) <24554181-3A37-31D7-B77B-F4FE6ADB016C> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libpng16.16.dylib
0x131445000 - 0x1314f4ff7 +libgeos-3.6.2.dylib (0) <9586199A-F1C2-36A5-B94B-58EFA92A1E4E> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libgeos-3.6.2.dylib
0x13159d000 - 0x1315b4ff3 +libhdf5_hl.100.dylib (0) <8585B545-1C2A-3AF7-8E15-3ECB59B00EE7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libhdf5_hl.100.dylib
0x1315bd000 - 0x1318aafff +libhdf5.101.dylib (0) <ABC3515E-5271-37A2-AC1D-30B40BE4D22B> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/.dylibs/libhdf5.101.dylib
0x1318f8000 - 0x13190efff +_geometry.cpython-37m-darwin.so (0) <59ABE045-B3E0-3171-A812-1BD19FE11D00> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_geometry.cpython-37m-darwin.so
0x131922000 - 0x13192cff3 +_shim.cpython-37m-darwin.so (0) <525654A0-F521-34B7-9887-C83414A01EEB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_shim.cpython-37m-darwin.so
0x131934000 - 0x131945ff3 +_err.cpython-37m-darwin.so (0) <17619AC4-42E2-3E39-9773-7DCC419868DB> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_err.cpython-37m-darwin.so
0x131955000 - 0x131979ffb +_env.cpython-37m-darwin.so (0) <7B2D1954-7757-3A75-AD68-9CDCF772BFFD> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/_env.cpython-37m-darwin.so
0x1319d6000 - 0x1319dcfff +schema.cpython-37m-darwin.so (0) <2465A9C4-3C0E-326B-8E94-DC0C8B035C22> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fiona/schema.cpython-37m-darwin.so
0x1323a4000 - 0x1323a5ff3 +_watchdog_fsevents.cpython-37m-darwin.so (0) <9DA9C06A-D0D1-38FD-B51E-D3B2BF39FBD7> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/_watchdog_fsevents.cpython-37m-darwin.so
0x1323d6000 - 0x13283bfff +etree.cpython-37m-darwin.so (???) <3886C02D-DC32-3A51-93EE-1C2E3C6B0347> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/lxml/etree.cpython-37m-darwin.so
0x132bab000 - 0x132bc3ff3 +_logistic_sigmoid.cpython-37m-darwin.so (0) <4A023C13-709C-3196-B62B-F9A100A06722> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/_logistic_sigmoid.cpython-37m-darwin.so
0x132bd5000 - 0x132c55ff7 +sparsefuncs_fast.cpython-37m-darwin.so (0) <3CEB2E51-7B4B-3872-87DD-5DFE245A7AD6> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/sparsefuncs_fast.cpython-37m-darwin.so
0x132cc4000 - 0x132ce4fff +mio5_utils.cpython-37m-darwin.so (0) <DC538C39-B83C-3365-9E70-8136E9D6D0A4> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/io/matlab/mio5_utils.cpython-37m-darwin.so
0x132cfc000 - 0x132d0dfff +streams.cpython-37m-darwin.so (0) <53836439-5533-3289-96F0-98F056370ACC> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/io/matlab/streams.cpython-37m-darwin.so
0x132db6000 - 0x132dddff7 +_csr_polynomial_expansion.cpython-37m-darwin.so (0) <90ADD849-7D5F-393B-90C2-27224E353C75> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/preprocessing/_csr_polynomial_expansion.cpython-37m-darwin.so
0x132e35000 - 0x132e46ff3 +expected_mutual_info_fast.cpython-37m-darwin.so (0) <C2936B26-36C7-3B1A-833C-6857F06E7C0A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/metrics/cluster/expected_mutual_info_fast.cpython-37m-darwin.so
0x132e53000 - 0x132e76ff3 +pairwise_fast.cpython-37m-darwin.so (0) <E567B2C0-115F-3AA2-A8A6-46C3CCD203D4> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/metrics/pairwise_fast.cpython-37m-darwin.so
0x132e8d000 - 0x132ed2ff3 +_cython_blas.cpython-37m-darwin.so (0) <AD7800DE-BFF6-3B0E-84E7-AB6B0D1B7213> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/utils/_cython_blas.cpython-37m-darwin.so
0x1330b3000 - 0x1330b8fff +_asyncio.cpython-37m-darwin.so (0) <53DC9766-AA75-3F19-BABC-F5DDAB748676> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/_asyncio.cpython-37m-darwin.so
0x1332c4000 - 0x1332c7ff3 +greenlet.cpython-37m-darwin.so (0) <88DB7900-3B3E-3C83-A448-4CEB643EDEB0> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/greenlet.cpython-37m-darwin.so
0x1334cc000 - 0x1334d0ffb +pvectorc.cpython-37m-darwin.so (0) <BE1C2C0F-C528-3355-9996-D90F6DCE376A> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pvectorc.cpython-37m-darwin.so
0x133597000 - 0x1335ccff3 +corecext.cpython-37m-darwin.so (0) <85614F8D-070C-3B09-9F12-570186F44021> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/libev/corecext.cpython-37m-darwin.so
0x133635000 - 0x13363dfff +__hub_local.cpython-37m-darwin.so (0) <2992A652-8006-36B6-B758-75F169E473F2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__hub_local.cpython-37m-darwin.so
0x133647000 - 0x13364ffff +__greenlet_primitives.cpython-37m-darwin.so (0) <38D73EFD-3771-3F36-9E66-A26B1AB3286F> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__greenlet_primitives.cpython-37m-darwin.so
0x13365a000 - 0x133666ff3 +__waiter.cpython-37m-darwin.so (0) <069331A5-9282-3E0B-8993-F9B757FE18AF> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__waiter.cpython-37m-darwin.so
0x133674000 - 0x13368cfff +__hub_primitives.cpython-37m-darwin.so (0) <D7131A67-B584-3B79-BA07-0D8E37F5BA35> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__hub_primitives.cpython-37m-darwin.so
0x1336a0000 - 0x1336ceff7 +_greenlet.cpython-37m-darwin.so (0) <E7937A9B-92AE-3F1D-8765-C336B232D995> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/_greenlet.cpython-37m-darwin.so
0x1336f2000 - 0x1336f8ff3 +__ident.cpython-37m-darwin.so (0) <B0D4077A-8DC4-3B95-810D-68A429707AC2> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__ident.cpython-37m-darwin.so
0x133741000 - 0x13374dfff +__abstract_linkable.cpython-37m-darwin.so (0) <245CCB93-5B7C-3CF9-AEEE-5A6FB59977A8> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/__abstract_linkable.cpython-37m-darwin.so
0x13375a000 - 0x13376cff7 +_event.cpython-37m-darwin.so (0) <9D0476FA-89E5-33F1-9B5F-DFA8C1E91456> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gevent/_event.cpython-37m-darwin.so
0x1337bf000 - 0x1337c0ff7 +termios.cpython-37m-darwin.so (0) <C7A91EC6-C4DF-388D-BB4D-C0940A5CD9BC> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload/termios.cpython-37m-darwin.so
0x7fff37c8f000 - 0x7fff37c8ffff com.apple.Accelerate (1.11 - Accelerate 1.11) <956D070C-B522-3A08-891A-CAD6BA4082D1> /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate
0x7fff37ca7000 - 0x7fff38312fdf com.apple.vImage (8.1 - 524.2) <45A48EA9-49AA-33A0-B830-AF754BD01009> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage
0x7fff38313000 - 0x7fff3857bfff libBLAS.dylib (1303) <112B19CC-925A-3E28-8B32-2002D30A37FA> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
0x7fff3857c000 - 0x7fff3886bfdf libBNNS.dylib (144.11.2) <A806AED9-837B-3C6C-AB0B-A41983C1CD07> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBNNS.dylib
0x7fff3886d000 - 0x7fff38c12fff libLAPACK.dylib (1303) <5C248B39-F233-3074-A3A5-AF8F436FBF87> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib
0x7fff38c13000 - 0x7fff38c28ff8 libLinearAlgebra.dylib (1303) <C21931B4-F6BD-324D-A2D9-F13EE8AFB29E> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLinearAlgebra.dylib
0x7fff38c29000 - 0x7fff38c2eff3 libQuadrature.dylib (7) <826897ED-B7AD-33DC-B9CB-F9787784F312> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libQuadrature.dylib
0x7fff38c2f000 - 0x7fff38c9ffff libSparse.dylib (103) <55467C29-2096-36AB-8A6D-5231A342809D> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparse.dylib
0x7fff38ca0000 - 0x7fff38cb2fef libSparseBLAS.dylib (1303) <3244FCAF-A1FE-3248-AF22-BFB3E9D12555> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparseBLAS.dylib
0x7fff38cb3000 - 0x7fff38e8cffb libvDSP.dylib (735) <E849AEB0-2995-38A4-B0C3-4ACEAF434D12> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib
0x7fff38e8d000 - 0x7fff38f48fd3 libvMisc.dylib (735) <D6248EC4-7772-37BB-87F7-9BAB7F5D31A0> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib
0x7fff38f49000 - 0x7fff38f49fff com.apple.Accelerate.vecLib (3.11 - vecLib 3.11) <79C1A1C7-E97A-3B7A-8737-444B402A7AA0> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib
0x7fff3a674000 - 0x7fff3a9eaffe com.apple.CFNetwork (1111 - 1111) <642753C5-5D26-3794-9A4C-4F63F226C01A> /System/Library/Frameworks/CFNetwork.framework/Versions/A/CFNetwork
0x7fff3bea9000 - 0x7fff3c328ff7 com.apple.CoreFoundation (6.9 - 1671.15) <BF8A8279-9C5E-37C6-8426-90C8182EFBDD> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x7fff3d290000 - 0x7fff3d290fff com.apple.CoreServices (1069.2 - 1069.2) <C5F7AABC-BADC-3331-A7D6-9B0A82A23E58> /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices
0x7fff3d291000 - 0x7fff3d316ff7 com.apple.AE (838 - 838) <7295ED82-7087-3602-9DCA-4FE205F6499C> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE
0x7fff3d317000 - 0x7fff3d5f8fff com.apple.CoreServices.CarbonCore (1217 - 1217) <7AA0ECB3-0993-3081-A9EC-0365EF72B24D> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore
0x7fff3d5f9000 - 0x7fff3d646ff1 com.apple.DictionaryServices (1.2 - 321) <3D0FFBDE-E425-37C7-B780-39A3D024462A> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/DictionaryServices.framework/Versions/A/DictionaryServices
0x7fff3d647000 - 0x7fff3d64fff7 com.apple.CoreServices.FSEvents (1268.0.6 - 1268.0.6) <78D2AB1A-9053-3D32-8C18-C1DD31FF9400> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/FSEvents.framework/Versions/A/FSEvents
0x7fff3d650000 - 0x7fff3d888ff1 com.apple.LaunchServices (1069.2 - 1069.2) <68B4C10C-D536-33E9-9719-E7BA5B753F2B> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices
0x7fff3d889000 - 0x7fff3d921ff1 com.apple.Metadata (10.7.0 - 2066.12) <249EA615-8446-3A36-B6B7-ED613C8B8148> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata
0x7fff3d922000 - 0x7fff3d94fff7 com.apple.CoreServices.OSServices (1069.2 - 1069.2) <2FECF3BA-B785-35E2-85E9-2A2267801AA4> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices
0x7fff3d950000 - 0x7fff3d9b7fff com.apple.SearchKit (1.4.1 - 1.4.1) <0068BD72-CF2B-34E4-B461-002D5E56C31C> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit
0x7fff3d9b8000 - 0x7fff3d9dcffd com.apple.coreservices.SharedFileList (131 - 131) <61F62948-4109-38F0-BB91-5EBB6BEEAB10> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SharedFileList.framework/Versions/A/SharedFileList
0x7fff3e231000 - 0x7fff3e237ff7 com.apple.DiskArbitration (2.7 - 2.7) <23104F29-F120-354B-97BE-4514A675BB14> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration
0x7fff3e564000 - 0x7fff3e92bff3 com.apple.Foundation (6.9 - 1671.15) <4BEAB72D-10AA-3009-B0F5-B82B4FE1C441> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
0x7fff3e998000 - 0x7fff3e9c7ff3 com.apple.GSS (4.0 - 2.0) <9520F096-C643-36D7-B8CB-3922B6E6D7EC> /System/Library/Frameworks/GSS.framework/Versions/A/GSS
0x7fff3ec7d000 - 0x7fff3ed20ffb com.apple.framework.IOKit (2.0.2 - 1726.11.1) <9E81E92C-7EC2-330F-B0AF-BBFD9D3E46F6> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
0x7fff406e3000 - 0x7fff406f5ff3 com.apple.Kerberos (3.0 - 1) <91DF5D16-E721-39F0-A77B-87DA6032F870> /System/Library/Frameworks/Kerberos.framework/Versions/A/Kerberos
0x7fff406f6000 - 0x7fff406f6fff libHeimdalProxy.dylib (77) <51DB9CFB-808F-32E8-BB34-39F6702DBDED> /System/Library/Frameworks/Kerberos.framework/Versions/A/Libraries/libHeimdalProxy.dylib
0x7fff406f7000 - 0x7fff4072dfff com.apple.LDAPFramework (2.4.28 - 194.5) <32FAF82F-BA91-366A-83A3-CDFF6CDD1AF9> /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP
0x7fff42571000 - 0x7fff4257dffe com.apple.NetFS (6.0 - 4.0) <10ECF7E4-98A5-3751-B7AC-0AAAF0050777> /System/Library/Frameworks/NetFS.framework/Versions/A/NetFS
0x7fff45141000 - 0x7fff4515dfff com.apple.CFOpenDirectory (10.15 - 220.11.1) <6F4D018B-FA8B-35B2-8120-F8215DDA01CB> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/Frameworks/CFOpenDirectory.framework/Versions/A/CFOpenDirectory
0x7fff4515e000 - 0x7fff45169fff com.apple.OpenDirectory (10.15 - 220.11.1) <170173C2-DF22-3D11-914F-465AA7C4760A> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory
0x7fff484eb000 - 0x7fff4882fff9 com.apple.security (7.0 - 59306.11.20) <1B0AE660-0EC5-3497-ACE8-1AF2BB772BAB> /System/Library/Frameworks/Security.framework/Versions/A/Security
0x7fff48830000 - 0x7fff488b8ff7 com.apple.securityfoundation (6.0 - 55236) <903B8365-1F35-3EB2-9821-9D2C440BE2DD> /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation
0x7fff48911000 - 0x7fff48915ff8 com.apple.xpc.ServiceManagement (1.0 - 1) <EF42F840-AF78-38A4-B6E1-FDF445CA3477> /System/Library/Frameworks/ServiceManagement.framework/Versions/A/ServiceManagement
0x7fff496ff000 - 0x7fff49769ff7 com.apple.SystemConfiguration (1.19 - 1.19) <C0C089C3-FC64-3107-B23E-4073E800C5D2> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration
0x7fff4d4bd000 - 0x7fff4d581ff7 com.apple.APFS (1412.11.7 - 1412.11.7) <71DAB6CE-FF14-373D-A183-F388C5D9FE84> /System/Library/PrivateFrameworks/APFS.framework/Versions/A/APFS
0x7fff4f105000 - 0x7fff4f114fdf com.apple.AppleFSCompression (119 - 1.0) <725908C4-7585-3AB6-8A1A-691B8A9074B8> /System/Library/PrivateFrameworks/AppleFSCompression.framework/Versions/A/AppleFSCompression
0x7fff4f67c000 - 0x7fff4f680ff7 com.apple.AppleSRP (5.0 - 1) <B251E119-3F06-3CDB-9559-8CC8BBAF1529> /System/Library/PrivateFrameworks/AppleSRP.framework/Versions/A/AppleSRP
0x7fff508a0000 - 0x7fff508a9ff3 com.apple.coreservices.BackgroundTaskManagement (1.0 - 104) <156CFAE3-07D0-332C-90BE-BB2E4C893C99> /System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement
0x7fff52547000 - 0x7fff52550ff7 com.apple.CommonAuth (4.0 - 2.0) <BDE39131-6389-3243-9C4A-DBA165B8A2F9> /System/Library/PrivateFrameworks/CommonAuth.framework/Versions/A/CommonAuth
0x7fff53368000 - 0x7fff53373ff7 com.apple.frameworks.CoreDaemon (1.3 - 1.3) <76AC1239-46B1-387E-B053-ED56BDF55DE7> /System/Library/PrivateFrameworks/CoreDaemon.framework/Versions/B/CoreDaemon
0x7fff535ea000 - 0x7fff535faff3 com.apple.CoreEmoji (1.0 - 100) <9AB89183-635C-3859-9DC6-7BCE3A94D15E> /System/Library/PrivateFrameworks/CoreEmoji.framework/Versions/A/CoreEmoji
0x7fff53c46000 - 0x7fff53cb0ff8 com.apple.CoreNLP (1.0 - 211) <8F08AEF6-A380-3811-BAF0-D80E7C84B5AD> /System/Library/PrivateFrameworks/CoreNLP.framework/Versions/A/CoreNLP
0x7fff548c0000 - 0x7fff548eeff7 com.apple.CSStore (1069.2 - 1069.2) <5E3C50AB-9C00-36F3-A986-CE033480CA8B> /System/Library/PrivateFrameworks/CoreServicesStore.framework/Versions/A/CoreServicesStore
0x7fff5ddb3000 - 0x7fff5de26ffc com.apple.Heimdal (4.0 - 2.0) <169702C2-B210-3258-947C-F8EE6B361C26> /System/Library/PrivateFrameworks/Heimdal.framework/Versions/A/Heimdal
0x7fff6089c000 - 0x7fff60969ffd com.apple.LanguageModeling (1.0 - 212) <A9F41C84-E574-3624-8C00-60F228E0FF97> /System/Library/PrivateFrameworks/LanguageModeling.framework/Versions/A/LanguageModeling
0x7fff6096a000 - 0x7fff609b2ff7 com.apple.Lexicon-framework (1.0 - 70) <BEADF30C-37D3-3112-90DA-18A85406DBF3> /System/Library/PrivateFrameworks/Lexicon.framework/Versions/A/Lexicon
0x7fff609b9000 - 0x7fff609bdff6 com.apple.LinguisticData (1.0 - 349) <A392AD13-9AEB-31BB-A9ED-F29437CFBDB4> /System/Library/PrivateFrameworks/LinguisticData.framework/Versions/A/LinguisticData
0x7fff61dee000 - 0x7fff61e3aff7 com.apple.spotlight.metadata.utilities (1.0 - 2066.12) <989018A3-4BD0-3FD1-9A2D-88FD3A521452> /System/Library/PrivateFrameworks/MetadataUtilities.framework/Versions/A/MetadataUtilities
0x7fff6285d000 - 0x7fff62867fff com.apple.NetAuth (6.2 - 6.2) <90F9ADF4-8A9C-3603-8F55-24F8C472430B> /System/Library/PrivateFrameworks/NetAuth.framework/Versions/A/NetAuth
0x7fff6b4f8000 - 0x7fff6b508ff3 com.apple.TCC (1.0 - 1) <DCE1D8C7-7BEB-3201-A0E5-38F012E6B1BC> /System/Library/PrivateFrameworks/TCC.framework/Versions/A/TCC
0x7fff6bc27000 - 0x7fff6bc28fff com.apple.TrustEvaluationAgent (2.0 - 33) <B691985E-2E58-37C3-B336-8882DE4BF19A> /System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent
0x7fff6f748000 - 0x7fff6f74aff3 com.apple.loginsupport (1.0 - 1) <40974390-AFD7-3CEF-8B8D-6219BF916A4E> /System/Library/PrivateFrameworks/login.framework/Versions/A/Frameworks/loginsupport.framework/Versions/A/loginsupport
0x7fff6fab8000 - 0x7fff6faedff7 libCRFSuite.dylib (48) <45ADF347-A43F-3E95-BF26-94DC525DCC81> /usr/lib/libCRFSuite.dylib
0x7fff6faf0000 - 0x7fff6fafaff3 libChineseTokenizer.dylib (34) <94E822B6-F850-33C5-888C-D5C8AE12122C> /usr/lib/libChineseTokenizer.dylib
0x7fff6fb87000 - 0x7fff6fb89fff libDiagnosticMessagesClient.dylib (112) <418D550B-C748-3D55-A6D5-0398E032F9F3> /usr/lib/libDiagnosticMessagesClient.dylib
0x7fff7004e000 - 0x7fff7004fff3 libSystem.B.dylib (1281) <66742D2E-591A-32B2-8E92-4A54BEE843F6> /usr/lib/libSystem.B.dylib
0x7fff700df000 - 0x7fff700e0fff libThaiTokenizer.dylib (3) <D2A89215-5281-310F-8C75-47F1E6A14F62> /usr/lib/libThaiTokenizer.dylib
0x7fff700f8000 - 0x7fff7010efff libapple_nghttp2.dylib (1.39.2) <0A685DAA-9FC6-3C87-83F1-1D11FC87C1F4> /usr/lib/libapple_nghttp2.dylib
0x7fff70143000 - 0x7fff701b5ff7 libarchive.2.dylib (72.11.2) <8636AE5A-0CBB-306C-8A4B-2E612D2D6B13> /usr/lib/libarchive.2.dylib
0x7fff70250000 - 0x7fff70250ff3 libauto.dylib (187) <AACF68BC-9A05-36F8-8F60-78951422E090> /usr/lib/libauto.dylib
0x7fff7030e000 - 0x7fff7031efff libbsm.0.dylib (60) <F03FA480-0B22-3300-833F-03E88F43C504> /usr/lib/libbsm.0.dylib
0x7fff7031f000 - 0x7fff7032bfff libbz2.1.0.dylib (44) <8B522880-BEF8-3668-B785-F2AB4DE8F366> /usr/lib/libbz2.1.0.dylib
0x7fff7032c000 - 0x7fff7037ffff libc++.1.dylib (800.6) <328FB687-2363-38B1-AC11-11736925C775> /usr/lib/libc++.1.dylib
0x7fff70380000 - 0x7fff70394fff libc++abi.dylib (800.7) <02753D3D-91C6-3670-8B5D-EBE040B516FC> /usr/lib/libc++abi.dylib
0x7fff70395000 - 0x7fff70395ffb libcharset.1.dylib (59) <12D52FA5-EBCA-3F3C-A269-1095F268F92F> /usr/lib/libcharset.1.dylib
0x7fff70396000 - 0x7fff703a7ffb libcmph.dylib (8) <7DD1F726-F3E3-341A-AFAC-83E9A470883C> /usr/lib/libcmph.dylib
0x7fff703a8000 - 0x7fff703bffe7 libcompression.dylib (87) <10B82884-BE1A-3A36-9B38-3C92AF566D3E> /usr/lib/libcompression.dylib
0x7fff70681000 - 0x7fff70697fff libcoretls.dylib (167) <1C64EA6F-8E0D-319D-99D4-026150EEB2B2> /usr/lib/libcoretls.dylib
0x7fff70698000 - 0x7fff70699ffb libcoretls_cfhelpers.dylib (167) <724B0181-4D9E-3D2F-B1AB-B4FD6A7BAB2C> /usr/lib/libcoretls_cfhelpers.dylib
0x7fff70b3d000 - 0x7fff70c41fe7 libcrypto.44.dylib (47.11.1) <55C6ABAB-C237-39F1-A78C-4594896CF7E6> /usr/lib/libcrypto.44.dylib
0x7fff70cb8000 - 0x7fff70d1ffff libcurl.4.dylib (118) <2B9763A5-A54D-3F2B-98BD-1F9BAEADE5E0> /usr/lib/libcurl.4.dylib
0x7fff70dc2000 - 0x7fff70dc2ff3 libenergytrace.dylib (21) <E42B4AFF-3FAC-3CE4-A7BF-48621458B356> /usr/lib/libenergytrace.dylib
0x7fff70dc3000 - 0x7fff70ddcff7 libexpat.1.dylib (19) <A0E2F6F3-BFFA-3D59-872F-F093487F0B42> /usr/lib/libexpat.1.dylib
0x7fff70dea000 - 0x7fff70decff7 libfakelink.dylib (149) <FC5712CB-2188-3DAD-8DD4-CC3ECCA3F9A8> /usr/lib/libfakelink.dylib
0x7fff70dfb000 - 0x7fff70e00fff libgermantok.dylib (24) <93E95178-E436-3611-A4A3-CB1D42CF4064> /usr/lib/libgermantok.dylib
0x7fff70e01000 - 0x7fff70e05ff7 libheimdal-asn1.dylib (564.0.3) <51551E63-5AC6-30D3-B178-8BDA782C80EA> /usr/lib/libheimdal-asn1.dylib
0x7fff70e06000 - 0x7fff70ef6ff7 libiconv.2.dylib (59) <1A648E74-25D4-3F9B-94C4-10B58AD091B7> /usr/lib/libiconv.2.dylib
0x7fff70ef7000 - 0x7fff7114fff7 libicucore.A.dylib (64232.0.1) <88E47471-605C-3C86-871B-5D2F4628A936> /usr/lib/libicucore.A.dylib
0x7fff71169000 - 0x7fff7116afff liblangid.dylib (133) <78DF87EE-CDCE-3628-B239-56743169BC93> /usr/lib/liblangid.dylib
0x7fff7116b000 - 0x7fff71183ffb liblzma.5.dylib (16) <D416FC97-76EC-38B5-A134-85DDFEA9D297> /usr/lib/liblzma.5.dylib
0x7fff7119b000 - 0x7fff71242fff libmecab.dylib (879) <DD60E6AA-154E-3294-B2C0-3277B754F3BC> /usr/lib/libmecab.dylib
0x7fff71243000 - 0x7fff714a2fe9 libmecabra.dylib (879) <B5BE574C-DD3A-336D-87A3-202CE7803A45> /usr/lib/libmecabra.dylib
0x7fff71961000 - 0x7fff71dd3ff6 libnetwork.dylib (1880.11.2) <CC02BF51-A056-3656-B735-E8CD0F4B7B15> /usr/lib/libnetwork.dylib
0x7fff71e71000 - 0x7fff71ea2fe6 libobjc.A.dylib (779.1) <722C0959-69B8-3843-B5EA-CDD8FAA91D5E> /usr/lib/libobjc.A.dylib
0x7fff71eb5000 - 0x7fff71eb9fff libpam.2.dylib (25) <86317F48-E926-30AC-AE9F-ABB33543FBC8> /usr/lib/libpam.2.dylib
0x7fff71ebc000 - 0x7fff71eefff7 libpcap.A.dylib (89.11.2) <65A8EBD2-F059-353B-9538-20D1314EFD89> /usr/lib/libpcap.A.dylib
0x7fff71f71000 - 0x7fff71f89ff7 libresolv.9.dylib (67) <06480BFC-6372-3225-B77A-F5AC9969DB64> /usr/lib/libresolv.9.dylib
0x7fff71fcf000 - 0x7fff71fe1fff libsasl2.2.dylib (213) <277129F1-29AE-34EB-BBAB-FF6DF4B43FAB> /usr/lib/libsasl2.2.dylib
0x7fff71fe4000 - 0x7fff721d1ff7 libsqlite3.dylib (308.1) <7033723E-DD65-3AA3-ADCA-8746F7BAD75D> /usr/lib/libsqlite3.dylib
0x7fff722c5000 - 0x7fff722f2ffb libssl.46.dylib (47.11.1) <CA81BD73-E5BF-3F88-A70E-49BA7C6B2781> /usr/lib/libssl.46.dylib
0x7fff723c4000 - 0x7fff7241fff0 libusrtcp.dylib (1880.11.2) <B821F69B-2E28-36C7-8F11-6990F8D4E26B> /usr/lib/libusrtcp.dylib
0x7fff72420000 - 0x7fff72423ffb libutil.dylib (57) <844B7887-09B3-3D12-ACDE-C4EB8F53DC62> /usr/lib/libutil.dylib
0x7fff72424000 - 0x7fff72431fff libxar.1.dylib (420) <E0D7C0A6-76EC-3682-A393-6596D4986269> /usr/lib/libxar.1.dylib
0x7fff72437000 - 0x7fff72519ff7 libxml2.2.dylib (32.12) <C0A87484-D334-3B50-8F8A-A9C63295F49E> /usr/lib/libxml2.2.dylib
0x7fff7251d000 - 0x7fff72545fff libxslt.1.dylib (16.6) <CD9E79B0-159A-3055-B62A-57AB2B445912> /usr/lib/libxslt.1.dylib
0x7fff72546000 - 0x7fff72558fff libz.1.dylib (76) <3FC3FC3E-ABF3-3167-9078-B54C952608B4> /usr/lib/libz.1.dylib
0x7fff72fbd000 - 0x7fff72fc2ff7 libcache.dylib (83) <8EC69760-6DAA-3068-9372-F1D554C548E5> /usr/lib/system/libcache.dylib
0x7fff72fc3000 - 0x7fff72fceff7 libcommonCrypto.dylib (60165) <698BE754-137D-361D-B826-57B8DD969E4A> /usr/lib/system/libcommonCrypto.dylib
0x7fff72fcf000 - 0x7fff72fd6fff libcompiler_rt.dylib (101.2) <0BE7F119-C9C2-3612-A3F4-2336D4444476> /usr/lib/system/libcompiler_rt.dylib
0x7fff72fd7000 - 0x7fff72fe0ff7 libcopyfile.dylib (166) <B5E73B1C-0BCF-33FE-80A1-D9E3BA3033D4> /usr/lib/system/libcopyfile.dylib
0x7fff72fe1000 - 0x7fff73078fc3 libcorecrypto.dylib (866.0.10) <58344B13-CD10-3697-A516-6F5B06DD1EB7> /usr/lib/system/libcorecrypto.dylib
0x7fff7318f000 - 0x7fff731d0ff0 libdispatch.dylib (1173.0.3) <F4260D89-F67D-30CB-AF61-7ED25CB687DB> /usr/lib/system/libdispatch.dylib
0x7fff731d1000 - 0x7fff73206fff libdyld.dylib (732.8) <98960E27-A08B-36DA-A5CB-8538B2D6757E> /usr/lib/system/libdyld.dylib
0x7fff73207000 - 0x7fff73207ffb libkeymgr.dylib (30) <682B41BC-BDC2-38D5-8820-87099606FA12> /usr/lib/system/libkeymgr.dylib
0x7fff73208000 - 0x7fff73214ff7 libkxld.dylib (6153.11.26) <53BE9630-BDC8-3649-8709-7A4F86777B1A> /usr/lib/system/libkxld.dylib
0x7fff73215000 - 0x7fff73215ff7 liblaunch.dylib (1738.11.1) <7FE11F0D-65BC-3726-B1DD-E84F329193E0> /usr/lib/system/liblaunch.dylib
0x7fff73216000 - 0x7fff7321bff7 libmacho.dylib (949.0.1) <163DFE06-2FAD-3CBC-80EF-C38EED6AEA52> /usr/lib/system/libmacho.dylib
0x7fff7321c000 - 0x7fff7321eff3 libquarantine.dylib (110.0.4) <C8F39330-8CB5-30B0-8564-96453DCEFAD7> /usr/lib/system/libquarantine.dylib
0x7fff7321f000 - 0x7fff73220ff7 libremovefile.dylib (48) <FB280185-B5ED-3F08-B08A-A378865C1398> /usr/lib/system/libremovefile.dylib
0x7fff73221000 - 0x7fff73238fff libsystem_asl.dylib (377.0.1) <30CE9DAF-B1FA-3510-832B-F1CE19933ED7> /usr/lib/system/libsystem_asl.dylib
0x7fff73239000 - 0x7fff73239fff libsystem_blocks.dylib (74) <E0B8C825-E62B-357E-A2A0-13776F0A0F8C> /usr/lib/system/libsystem_blocks.dylib
0x7fff7323a000 - 0x7fff732c1ff7 libsystem_c.dylib (1353.11.2) <2A5BFAFE-8214-3B35-AD46-C07A1A8B8941> /usr/lib/system/libsystem_c.dylib
0x7fff732c2000 - 0x7fff732c5fff libsystem_configuration.dylib (1061.0.2) <56174463-22ED-337F-B0D4-94995DCDB9B7> /usr/lib/system/libsystem_configuration.dylib
0x7fff732c6000 - 0x7fff732c9ff7 libsystem_coreservices.dylib (114) <01695913-028E-3AE1-8D4E-2B2769109811> /usr/lib/system/libsystem_coreservices.dylib
0x7fff732ca000 - 0x7fff732d1fff libsystem_darwin.dylib (1353.11.2) <4CE52C63-87AA-3C6D-899F-8C983E5FC061> /usr/lib/system/libsystem_darwin.dylib
0x7fff732d2000 - 0x7fff732d9ffb libsystem_dnssd.dylib (1096.0.2) <DC7381E8-F09F-3441-8267-9B8F50A0EBA9> /usr/lib/system/libsystem_dnssd.dylib
0x7fff732da000 - 0x7fff732dbffb libsystem_featureflags.dylib (17) <DBCA4AA2-CA05-38D5-AB4B-BE0F3E09BB8B> /usr/lib/system/libsystem_featureflags.dylib
0x7fff732dc000 - 0x7fff73329ff7 libsystem_info.dylib (538) <9F9D6368-A11E-32F1-9BB5-C153F42EFED8> /usr/lib/system/libsystem_info.dylib
0x7fff7332a000 - 0x7fff73356fff libsystem_kernel.dylib (6153.11.26) <4CE9D54A-A975-348E-B878-EE74EDFC956B> /usr/lib/system/libsystem_kernel.dylib
0x7fff73357000 - 0x7fff7339eff7 libsystem_m.dylib (3178) <4F516261-0C0E-3105-AF35-EF39D6347B50> /usr/lib/system/libsystem_m.dylib
0x7fff7339f000 - 0x7fff733c6fff libsystem_malloc.dylib (283) <02925539-3CBA-39EB-BA6B-9A936CFA0BF8> /usr/lib/system/libsystem_malloc.dylib
0x7fff733c7000 - 0x7fff733d4ff3 libsystem_networkextension.dylib (1095.11.9) <8B5EE189-E3D1-31FD-878F-50286B6E7047> /usr/lib/system/libsystem_networkextension.dylib
0x7fff733d5000 - 0x7fff733defff libsystem_notify.dylib (241) <89381127-2A07-3F07-B865-358FACCF9102> /usr/lib/system/libsystem_notify.dylib
0x7fff733df000 - 0x7fff733e8fe7 libsystem_platform.dylib (220) <90E508E4-46D8-33FF-8552-DDAA079977A0> /usr/lib/system/libsystem_platform.dylib
0x7fff733e9000 - 0x7fff733f3fff libsystem_pthread.dylib (416.11.1) <2EA6F95F-F264-30B6-8AF2-24197B5AED84> /usr/lib/system/libsystem_pthread.dylib
0x7fff733f4000 - 0x7fff733f8ffb libsystem_sandbox.dylib (1217.11.16) <51129CBB-BC46-37F1-A1B5-ECFA9530704D> /usr/lib/system/libsystem_sandbox.dylib
0x7fff733f9000 - 0x7fff733fbfff libsystem_secinit.dylib (62.0.4) <A48D9AF3-75F2-3331-A0C8-0A23771F4AC7> /usr/lib/system/libsystem_secinit.dylib
0x7fff733fc000 - 0x7fff73403ffb libsystem_symptoms.dylib (1238.0.2) <08E8CF75-5F77-3475-A48E-A01CBDF09173> /usr/lib/system/libsystem_symptoms.dylib
0x7fff73404000 - 0x7fff7341aff2 libsystem_trace.dylib (1147.0.3) <5836645E-9862-326D-AB3B-A19E76BE29B9> /usr/lib/system/libsystem_trace.dylib
0x7fff7341c000 - 0x7fff73421ffb libunwind.dylib (35.4) <F5AE1D43-7C5F-3BA2-94D3-E769F82C0F61> /usr/lib/system/libunwind.dylib
0x7fff73422000 - 0x7fff73456ff6 libxpc.dylib (1738.11.1) <2E9076CD-6C0E-38B6-8403-B2DDCE125FBF> /usr/lib/system/libxpc.dylib
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 32
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 7803078
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=660.2M resident=0K(0%) swapped_out_or_unallocated=660.2M(100%)
Writable regions: Total=1.5G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=1.5G(100%)
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Activity Tracing 256K 1
Dispatch continuations 16.0M 1
Kernel Alloc Once 8K 1
MALLOC 1.1G 724
MALLOC guard page 16K 3
MALLOC_LARGE (reserved) 256K 2 reserved VM address space (unallocated)
STACK GUARD 36K 9
Stack 51.6M 9
VM_ALLOCATE 4K 1
VM_ALLOCATE (reserved) 192.0M 2 reserved VM address space (unallocated)
__DATA 62.5M 464
__DATA_CONST 20K 1
__LINKEDIT 380.1M 248
__OBJC_RO 31.8M 1
__OBJC_RW 1764K 2
__TEXT 280.2M 382
__UNICODE 564K 1
mapped file 28K 1
shared memory 44K 5
=========== ======= =======
TOTAL 2.1G 1858
TOTAL, minus reserved VM space 2.0G 1858
Model: MacBookPro13,3, BootROM 262.0.0.0.0, 4 processors, Quad-Core Intel Core i7, 2.9 GHz, 16 GB, SMC 2.38f8
Graphics: kHW_IntelHDGraphics530Item, Intel HD Graphics 530, spdisplays_builtin
Graphics: kHW_AMDRadeonPro460Item, AMD Radeon Pro 460, spdisplays_pcie_device, 4 GB
Memory Module: BANK 0/DIMM0, 8 GB, LPDDR3, 2133 MHz, 0x802C, 0x4D5435324C31473332443450472D30393320
Memory Module: BANK 1/DIMM0, 8 GB, LPDDR3, 2133 MHz, 0x802C, 0x4D5435324C31473332443450472D30393320
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x15A), Broadcom BCM43xx 1.0 (7.77.105.1 AirPortDriverBrcmNIC-1429)
Bluetooth: Version 7.0.0f8, 3 services, 27 devices, 1 incoming serial ports
Network Service: Wi-Fi, AirPort, en0
USB Device: USB 3.0 Bus
USB Device: Apple T1 Controller
Thunderbolt Bus: MacBook Pro, Apple Inc., 41.2
Thunderbolt Bus: MacBook Pro, Apple Inc., 41.2
</details>
#### Expected Output
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Darwin
OS-release : 19.0.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.17.2
pytz : 2019.2
dateutil : 2.8.0
pip : 19.2.3
setuptools : 40.8.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.1
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.1
IPython : None
pandas_datareader: 0.8.1
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.1
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : 1.3.9
tables : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
</details>
|
Can you try on master? There were a few issues with quantile in 0.25.0 and 0.25.1 that will be fixed in the next minor release
Can you provide a reproducible example? Here is a good reference for making bug reports: https://blog.dask.org/2018/02/28/minimal-bug-reports
@WillAyd, when we switched from quantile to mean, it worked much better, so we suspect a **quantile** issue. We will test with 0.25.2 when it is available, switch back from mean to quantile, and let you know if there is still an issue. Thanks.
@dsaxton it is difficult to write a "good", stable example reproducing the MALLOC_LARGE issue :) it is not 100% reproducible even with the same datasets. The setup that catches the most malloc issues has more than 4M rows and a groupby over ~30K intervals.
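For anyone trying to reproduce, here is a hedged sketch of a synthetic setup in that size range (the column names and the uniform random data are made up for illustration; as noted above, the crash is intermittent, so this is not guaranteed to trigger it):
```python
import numpy as np
import pandas as pd

n = 4_000_000
df = pd.DataFrame({
    'x': np.random.rand(n),       # grouping variable
    'value': np.random.rand(n),   # aggregated variable
})

rng = np.linspace(0.0, 1.0, 30_001)  # ~30K intervals
grouped = df.groupby(pd.cut(df['x'], rng, duplicates='drop', precision=4))
result = grouped.quantile(0.5)       # the call that intermittently crashes
```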
Ran into the same issue today. The problem is with `None` keys. Here is some C&P'able code:
```python
import pandas
print(pandas.__version__)
df = pandas.DataFrame(data=[['A', 0], [None, 123], ['A', 0]], columns=['key', 'value'])
result = df.groupby('key').quantile()
print(result)
```
This works fine with pandas 0.24.2, but with 0.25.1 there are several issues:
1. The result is wrong! It says 'A' -> 61.5. The correct result would be 'A' -> 0, ignoring the None value (see the snippet after this list). Having `None` as an additional category would also be acceptable. But mixing it into one of the other groups is just wrong. (If you have multiple different groups, it will get mixed into whatever group comes first in the result).
2. This crashes frequently. The code above will crash when `result` is garbage collected. Other, slightly different variations may crash immediately. For example, adding `as_index=False` to the `groupby` will crash immediately.
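A hedged illustration of the expected behaviour, computed here by simply dropping the `None` key first (a workaround for comparison, not the proposed fix):
```python
import pandas

df = pandas.DataFrame(data=[['A', 0], [None, 123], ['A', 0]], columns=['key', 'value'])

# expected: the None-keyed row is ignored and 'A' keeps its own values
expected = df.dropna(subset=['key']).groupby('key').quantile()
print(expected)   # value for 'A' -> 0.0
```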
Unfortunately, I cannot test this with the latest master branch, as I am on my business laptop where I cannot easily install the required C++ build tools. I will test this on another machine of mine but it may be several hours before I get around to that.
I got it installed. Here are results for pandas 0.26.0.dev0+583.g86e187f:
1. The code above no longer crashes, but it still produces the wrong result of A->61.5
2. The code starts crashing again when subject to slight modifications: Using `as_index=False` will crash immediately. Using a list of quantiles (e.g. `df.groupby('key').quantile([0, 0.5, 1])`) crashes immediately. Using the [similar example](https://github.com/pandas-dev/pandas/issues/28194#issuecomment-527058970) from the other thread on this topic crashes immediately.
Confirmed, the same crash with the official 0.25.2 release
```Python
dtf = dtf.groupby(cut(dtf[col[0]], rng, duplicates='drop', precision=4)).quantile(.5) # Crash
dtf = dtf.groupby(cut(dtf[col[0]], rng, duplicates='drop', precision=4)).mean() # OK
```
This segfaults for me ~10% of the time on master
```python
import pandas
print(pandas.__version__)
df = pandas.DataFrame(data=[['A', 0], [None, 123], ['A', 0]], columns=['key', 'value'])
result = df.groupby('key').quantile([0.25, 0.75])
```
If this was working on 0.24 but not 0.25 probably comes back to #20405
I'll see what I can find
Are you able to reproduce locally Will? If not, I can spend some time
digging into it.
Yea - the last sample you posted was great and I see the same behaviour
Believe I have figured it out. The problem is on this line:
https://github.com/pandas-dev/pandas/blob/bef9baedafc83df47963f2c4f4e7e465ece44342/pandas/_libs/groupby.pyx#L766
When NA values appear in the groupby they are indicated with a -1 label. However, via decorators we disabled `Cython.wraparound` for negative indexing AND also `Cython.boundscheck` to detect index errors. The combination of all of those factors leads to a segfault.
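To make the failure modes concrete, here is a hypothetical pure-Python/NumPy sketch of the bookkeeping loop (not the actual Cython kernel). In plain Python a `-1` label silently wraps around and pollutes a real group's statistics (the wrong results reported above), whereas in the compiled kernel, with `wraparound` and `boundscheck` both off, the same `-1` becomes an out-of-bounds write and can segfault. Skipping `-1` labels avoids both:
```python
import numpy as np

labels = np.array([0, -1, 0])           # 'A', NaN key, 'A' as produced by the grouper
counts = np.zeros(1, dtype=np.int64)    # one real group: 'A'

# naive loop mirroring the kernel's counting
for lab in labels:
    counts[lab] += 1                    # lab == -1 wraps here and corrupts a real group
print(counts)                           # [3] although 'A' only has 2 rows

# the fix sketched in the patch below: ignore the NA group label
counts[:] = 0
for lab in labels:
    if lab == -1:
        continue
    counts[lab] += 1
print(counts)                           # [2]
```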
I'll push up a PR soon. Just thinking through test case(s)
|
2019-10-22T22:39:06Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -410,6 +410,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrame.groupby` not offering selection by column name when ``axis=1`` (:issue:`27614`)
- Bug in :meth:`DataFrameGroupby.agg` not able to use lambda function with named aggregation (:issue:`27519`)
- Bug in :meth:`DataFrame.groupby` losing column name information when grouping by a categorical column (:issue:`28787`)
+- Bug in :meth:`DataFrameGroupBy.quantile` where NA values in the grouping could cause segfaults or incorrect results (:issue:`28882`)
Reshaping
^^^^^^^^^
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -763,6 +763,9 @@ def group_quantile(ndarray[float64_t] out,
with nogil:
for i in range(N):
lab = labels[i]
+ if lab == -1: # NA group label
+ continue
+
counts[lab] += 1
if not mask[i]:
non_na_counts[lab] += 1
</patch>
|
[]
|
[]
| |||
Lightning-AI__lightning-919
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Relax hparams in model saving/loading
I've managed to train a model using trainer.fit(model) and have the .ckpt file. Now, I'm trying to load the .ckpt file so that I can do inference on a single image:
```
model = CoolSystem()
to_infer = torch.load('checkpoints/try_ckpt_epoch_1_v0.ckpt')
model.load_from_checkpoint(to_infer) # ------------- error is thrown at this line
```
However, upon loading the .ckpt file, the following error is thrown:
```
AttributeError: 'dict' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
```
**Am I doing something wrong when using PyTorch Lightning for inference?**
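For what it's worth, the traceback above suggests `load_from_checkpoint` hands its argument straight to `torch.load`, i.e. it expects a checkpoint *path* rather than an already-loaded dict. Below is a minimal, hedged sketch of the intended inference flow (the path is reused from the snippet above, the dummy input tensor is made up, and this is exactly the pattern the requested hparams relaxation needs to support for models like the `CoolSystem` defined below whose `__init__` takes no hyperparameters):
```python
import torch

# hedged sketch: pass the checkpoint file path, not a pre-loaded dict
model = CoolSystem.load_from_checkpoint('checkpoints/try_ckpt_epoch_1_v0.ckpt')
model.eval()

# single-image inference with a dummy 224x224 RGB tensor standing in for a real image
image = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    logits = model(image)
print(logits.shape)  # torch.Size([1, 2]) for the 2-class head
```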
For reference, this is my system:
```
import pytorch_lightning as pl
import os
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn.functional as F
class CoolSystem(pl.LightningModule):
def __init__(self):
super(CoolSystem, self).__init__()
# self.hparams = hparams
self.data_dir = '/content/hymenoptera_data'
self.model = torchvision.models.resnet18(pretrained=True) # final layer is of size [bs, 1000]
num_ftrs = self.model.fc.in_features
self.model.fc = torch.nn.Linear(num_ftrs, 2) # change final layer to be of size [bs, 2]
def forward(self, x):
x = self.model(x)
return x
def configure_optimizers(self):
# Observe that all parameters are being optimized
optimizer = torch.optim.SGD(self.model.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
return [optimizer], [exp_lr_scheduler]
def training_step(self, batch, batch_idx):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
@pl.data_loader
def train_dataloader(self):
# REQUIRED
transform = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
train_set = torchvision.datasets.ImageFolder(os.path.join(self.data_dir, 'train'), transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
return train_loader
@pl.data_loader
def val_dataloader(self):
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
val_set = torchvision.datasets.ImageFolder(os.path.join(self.data_dir, 'val'), transform)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32, shuffle=True, num_workers=4)
return val_loader
```
And I'm training it this way:
```
model = CoolSystem()
import os
checkpoint_callback = pl.callbacks.ModelCheckpoint(
filepath=os.path.join(os.getcwd(), 'checkpoints'),
verbose=True,
monitor='val_loss',
mode='min',
prefix='try',
save_top_k=-1,
period=1 # check val_loss every n periods, and saves the checkpoint if it is better than the val_loss at the previous period
)
trainer = pl.Trainer(
max_epochs=2,
checkpoint_callback=checkpoint_callback)
trainer.fit(model)
```
</issue>
<code>
[start of README.md]
1 <div align="center">
2
3 
4
5 # PyTorch Lightning
6
7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.**
8
9
10 [](https://badge.fury.io/py/pytorch-lightning)
11 [](https://pepy.tech/project/pytorch-lightning)
12 [](https://github.com/PytorchLightning/pytorch-lightning/tree/master/tests#running-coverage)
13 [](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
14
15 [](https://pytorch-lightning.readthedocs.io/en/latest/)
16 [](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ)
17 [](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
18 [](https://shields.io/)
19
20 <!--
21 removed until codecov badge isn't empy. likely a config error showing nothing on master.
22 [](https://codecov.io/gh/Borda/pytorch-lightning)
23 -->
24 </div>
25
26 ---
27 ## Continuous Integration
28 <center>
29
30 | System / PyTorch Version | 1.1 | 1.2 | 1.3 | 1.4 |
31 | :---: | :---: | :---: | :---: | :---: |
32 | Linux py3.6 | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) |
33 | Linux py3.7 |  | <center>—</center> | <center>—</center> |  |
34 | OSX py3.6 |  | <center>—</center> | <center>—</center> |  |
35 | OSX py3.7 |  | <center>—</center> | <center>—</center> |  |
36 | Windows py3.6 |  | <center>—</center> | <center>—</center> |  |
37 | Windows py3.7 |  | <center>—</center> | <center>—</center> |  |
38
39 </center>
40
41 Simple installation from PyPI
42 ```bash
43 pip install pytorch-lightning
44 ```
45
46 ## Docs
47 - [master](https://pytorch-lightning.readthedocs.io/en/latest)
48 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/)
49 - [0.5.3.2](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/)
50
51
52 ## Demo
53 [Copy and run this COLAB!](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg)
54
55 ## What is it?
56 Lightning is a very lightweight wrapper on PyTorch that decouples the science code from the engineering code. It's more of a style-guide than a framework. By refactoring your code, we can automate most of the non-research code.
57
58 To use Lightning, simply refactor your research code into the [LightningModule](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it) format (the science) and Lightning will automate the rest (the engineering). Lightning guarantees tested, correct, modern best practices for the automated parts.
59
60 - If you are a researcher, Lightning is infinitely flexible, you can modify everything down to the way .backward is called or distributed is set up.
61 - If you are a scientist or production team, lightning is very simple to use with best practice defaults.
62
63 ## What does lightning control for me?
64
65 Everything in Blue!
66 This is how lightning separates the science (red) from the engineering (blue).
67
68 
69
70 ## How much effort is it to convert?
71 You're probably tired of switching frameworks at this point. But it is a very quick process to refactor into the Lightning format (ie: hours). [Check out this tutorial](https://towardsdatascience.com/how-to-refactor-your-pytorch-code-to-get-these-42-benefits-of-pytorch-lighting-6fdd0dc97538)
72
73 ## Starting a new project?
74 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed)
75
76 ## Why do I want to use lightning?
77 Every research project starts the same, a model, a training loop, validation loop, etc. As your research advances, you're likely to need distributed training, 16-bit precision, checkpointing, gradient accumulation, etc.
78
79 Lightning sets up all the boilerplate state-of-the-art training for you so you can focus on the research.
80
81 ---
82
83 ## README Table of Contents
84 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it)
85 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me)
86 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard)
87 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable)
88 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples)
89 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials)
90 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help)
91 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md)
92 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge)
93 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles)
94 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team)
95 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq)
96
97 ---
98
99 ## How do I do use it?
100 Think about Lightning as refactoring your research code instead of using a new framework. The research code goes into a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) which you fit using a Trainer.
101
102 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... It can ALSO define a simple classifier such as the example below.
103
104 To use lightning do 2 things:
105 1. [Define a LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html)
106 **WARNING:** This syntax is for version 0.5.0+ where abbreviations were removed.
107 ```python
108 import os
109
110 import torch
111 from torch.nn import functional as F
112 from torch.utils.data import DataLoader
113 from torchvision.datasets import MNIST
114 from torchvision import transforms
115
116 import pytorch_lightning as pl
117
118 class CoolSystem(pl.LightningModule):
119
120 def __init__(self):
121 super(CoolSystem, self).__init__()
122 # not the best model...
123 self.l1 = torch.nn.Linear(28 * 28, 10)
124
125 def forward(self, x):
126 return torch.relu(self.l1(x.view(x.size(0), -1)))
127
128 def training_step(self, batch, batch_idx):
129 # REQUIRED
130 x, y = batch
131 y_hat = self.forward(x)
132 loss = F.cross_entropy(y_hat, y)
133 tensorboard_logs = {'train_loss': loss}
134 return {'loss': loss, 'log': tensorboard_logs}
135
136 def validation_step(self, batch, batch_idx):
137 # OPTIONAL
138 x, y = batch
139 y_hat = self.forward(x)
140 return {'val_loss': F.cross_entropy(y_hat, y)}
141
142 def validation_end(self, outputs):
143 # OPTIONAL
144 avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
145 tensorboard_logs = {'val_loss': avg_loss}
146 return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
147
148 def test_step(self, batch, batch_idx):
149 # OPTIONAL
150 x, y = batch
151 y_hat = self.forward(x)
152 return {'test_loss': F.cross_entropy(y_hat, y)}
153
154 def test_end(self, outputs):
155 # OPTIONAL
156 avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
157 tensorboard_logs = {'test_loss': avg_loss}
158 return {'avg_test_loss': avg_loss, 'log': tensorboard_logs}
159
160 def configure_optimizers(self):
161 # REQUIRED
162 # can return multiple optimizers and learning_rate schedulers
163 # (LBFGS it is automatically supported, no need for closure function)
164 return torch.optim.Adam(self.parameters(), lr=0.02)
165
166 @pl.data_loader
167 def train_dataloader(self):
168 # REQUIRED
169 return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
170
171 @pl.data_loader
172 def val_dataloader(self):
173 # OPTIONAL
174 return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
175
176 @pl.data_loader
177 def test_dataloader(self):
178 # OPTIONAL
179 return DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=32)
180 ```
181 2. Fit with a [trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)
182 ```python
183 from pytorch_lightning import Trainer
184
185 model = CoolSystem()
186
187 # most basic trainer, uses good defaults
188 trainer = Trainer()
189 trainer.fit(model)
190 ```
191
192 Trainer sets up a tensorboard logger, early stopping and checkpointing by default (you can modify all of them or
193 use something other than tensorboard).
194
195 Here are more advanced examples
196 ```python
197 # train on cpu using only 10% of the data (for demo purposes)
198 trainer = Trainer(max_epochs=1, train_percent_check=0.1)
199
200 # train on 4 gpus (lightning chooses GPUs for you)
201 # trainer = Trainer(max_epochs=1, gpus=4, distributed_backend='ddp')
202
203 # train on 4 gpus (you choose GPUs)
204 # trainer = Trainer(max_epochs=1, gpus=[0, 1, 3, 7], distributed_backend='ddp')
205
206 # train on 32 gpus across 4 nodes (make sure to submit appropriate SLURM job)
207 # trainer = Trainer(max_epochs=1, gpus=8, num_gpu_nodes=4, distributed_backend='ddp')
208
209 # train (1 epoch only here for demo)
210 trainer.fit(model)
211
212 # view tensorboard logs
213 logging.info(f'View tensorboard logs by running\ntensorboard --logdir {os.getcwd()}')
214 logging.info('and going to http://localhost:6006 on your browser')
215 ```
216
217 When you're all done you can even run the test set separately.
218 ```python
219 trainer.test()
220 ```
221
222 **Could be as complex as seq-2-seq + attention**
223
224 ```python
225 # define what happens for training here
226 def training_step(self, batch, batch_idx):
227 x, y = batch
228
229 # define your own forward and loss calculation
230 hidden_states = self.encoder(x)
231
232 # even as complex as a seq-2-seq + attn model
233 # (this is just a toy, non-working example to illustrate)
234 start_token = '<SOS>'
235 last_hidden = torch.zeros(...)
236 loss = 0
237 for step in range(max_seq_len):
238 attn_context = self.attention_nn(hidden_states, start_token)
239 pred = self.decoder(start_token, attn_context, last_hidden)
240 last_hidden = pred
241 pred = self.predict_nn(pred)
242 loss += self.loss(last_hidden, y[step])
243
244 #toy example as well
245 loss = loss / max_seq_len
246 return {'loss': loss}
247 ```
248
249 **Or as basic as CNN image classification**
250
251 ```python
252 # define what happens for validation here
253 def validation_step(self, batch, batch_idx):
254 x, y = batch
255
256 # or as basic as a CNN classification
257 out = self.forward(x)
258 loss = my_loss(out, y)
259 return {'loss': loss}
260 ```
261
262 **And you also decide how to collate the output of all validation steps**
263
264 ```python
265 def validation_end(self, outputs):
266 """
267 Called at the end of validation to aggregate outputs
268 :param outputs: list of individual outputs of each validation step
269 :return:
270 """
271 val_loss_mean = 0
272 val_acc_mean = 0
273 for output in outputs:
274 val_loss_mean += output['val_loss']
275 val_acc_mean += output['val_acc']
276
277 val_loss_mean /= len(outputs)
278 val_acc_mean /= len(outputs)
279 logs = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
280 result = {'log': logs}
281 return result
282 ```
283
284 ## Tensorboard
285 Lightning is fully integrated with tensorboard, MLFlow and supports any logging module.
286
287 
288
289 Lightning also adds a text column with all the hyperparameters for this experiment.
290
291 
292
293 ## Lightning automates all of the following ([each is also configurable](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)):
294
295
296 - [Running grid search on a cluster](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.distrib_data_parallel.html)
297 - [Fast dev run](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.utilities.debugging.html)
298 - [Logging](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.loggers.html)
299 - [Implement Your Own Distributed (DDP) training](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_ddp)
300 - [Multi-GPU & Multi-node](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.distrib_parts.html)
301 - [Training loop](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.training_loop.html)
302 - [Hooks](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.hooks.html)
303 - [Configure optimizers](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_optimizers)
304 - [Validations](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.evaluation_loop.html)
305 - [Model saving & Restoring training session](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.training_io.html)
306
307
308 ## Examples
309 - [GAN](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/domain_templates/gan.py)
310 - [MNIST](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/basic_examples)
311 - [Other projects using Lightning](https://github.com/PytorchLightning/pytorch-lightning/network/dependents?package_id=UGFja2FnZS0zNzE3NDU4OTM%3D)
312 - [Multi-node](https://github.com/PytorchLightning/pytorch-lightning/tree/master/pl_examples/multi_node_examples)
313
314 ## Tutorials
315 - [Basic Lightning use](https://towardsdatascience.com/supercharge-your-ai-research-with-pytorch-lightning-337948a99eec)
316 - [9 key speed features in Pytorch-Lightning](https://towardsdatascience.com/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565)
317 - [SLURM, multi-node training with Lightning](https://towardsdatascience.com/trivial-multi-node-training-with-pytorch-lightning-ff75dfb809bd)
318
319 ---
320
321 ## Asking for help
322 Welcome to the Lightning community!
323
324 If you have any questions, feel free to:
325 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/).
326 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question).
327 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning.
328
329 If no one replies to you quickly enough, feel free to post the stackoverflow link to our Gitter chat!
330
331 To chat with the rest of us visit our [gitter channel](https://gitter.im/PyTorch-Lightning/community)!
332
333 ---
334 ## FAQ
335 **How do I use Lightning for rapid research?**
336 [Here's a walk-through](https://pytorch-lightning.rtfd.io/en/latest/)
337
338 **Why was Lightning created?**
339 Lightning has 3 goals in mind:
340 1. Maximal flexibility while abstracting out the common boilerplate across research projects.
341 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format.
342 3. Democratizing PyTorch power user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning.
343
344 **How does Lightning compare with Ignite and fast.ai?**
345 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a).
346
347 **Is this another library I have to learn?**
348 Nope! We use pure PyTorch everywhere and don't add unnecessary abstractions!
349
350 **Are there plans to support Python 2?**
351 Nope.
352
353 **Are there plans to support virtualenv?**
354 Nope. Please use anaconda or miniconda.
355
356 **Which PyTorch versions do you support?**
357 - **PyTorch 1.1.0**
358 ```bash
359 # install pytorch 1.1.0 using the official instructions
360
361 # install test-tube 0.6.7.6 which supports 1.1.0
362 pip install test-tube==0.6.7.6
363
364 # install latest Lightning version without upgrading deps
365 pip install -U --no-deps pytorch-lightning
366 ```
367 - **PyTorch 1.2.0, 1.3.0,**
368 Install via pip as normal
369
370 ## Custom installation
371
372 ### Bleeding edge
373
374 If you can't wait for the next release, install the most up to date code with:
375 * using GIT (locally clone whole repo with full history)
376 ```bash
377 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
378 ```
379 * using instant zip (last state of the repo without git history)
380 ```bash
381 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade
382 ```
383
384 ### Any release installation
385
386 You can also install any past release `0.X.Y` from this repository:
387 ```bash
388 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade
389 ```
390
391 ### Lightning team
392
393 #### Leads
394 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder)
395 - Jirka Borovec [(Borda)](https://github.com/Borda)
396 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder)
397 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder)
398
399 #### Core Maintainers
400
401 - Nick Eggert [(neggert)](https://github.com/neggert)
402 - Jeff Ling [(jeffling)](https://github.com/jeffling)
403 - Tullie Murrell [(tullie)](https://github.com/tullie)
404
405 ## Bibtex
406 If you want to cite the framework feel free to use this (but only if you loved it 😊):
407 ```
408 @misc{Falcon2019,
409 author = {Falcon, W.A. et al.},
410 title = {PyTorch Lightning},
411 year = {2019},
412 publisher = {GitHub},
413 journal = {GitHub repository},
414 howpublished = {\url{https://github.com/PytorchLightning/pytorch-lightning}}
415 }
416 ```
417
[end of README.md]
[start of pl_examples/basic_examples/lightning_module_template.py]
1 """
2 Example template for defining a system
3 """
4 import logging as log
5 import os
6 from argparse import ArgumentParser
7 from collections import OrderedDict
8
9 import torch
10 import torch.nn as nn
11 import torch.nn.functional as F
12 import torchvision.transforms as transforms
13 from torch import optim
14 from torch.utils.data import DataLoader
15 from torch.utils.data.distributed import DistributedSampler
16 from torchvision.datasets import MNIST
17
18 import pytorch_lightning as pl
19
20
21 class LightningTemplateModel(pl.LightningModule):
22 """
23 Sample model to show how to define a template
24 """
25
26 def __init__(self, hparams):
27 """
28 Pass in parsed HyperOptArgumentParser to the model
29 :param hparams:
30 """
31 # init superclass
32 super(LightningTemplateModel, self).__init__()
33 self.hparams = hparams
34
35 self.batch_size = hparams.batch_size
36
37 # if you specify an example input, the summary will show input/output for each layer
38 self.example_input_array = torch.rand(5, 28 * 28)
39
40 # build model
41 self.__build_model()
42
43 # ---------------------
44 # MODEL SETUP
45 # ---------------------
46 def __build_model(self):
47 """
48 Layout model
49 :return:
50 """
51 self.c_d1 = nn.Linear(in_features=self.hparams.in_features,
52 out_features=self.hparams.hidden_dim)
53 self.c_d1_bn = nn.BatchNorm1d(self.hparams.hidden_dim)
54 self.c_d1_drop = nn.Dropout(self.hparams.drop_prob)
55
56 self.c_d2 = nn.Linear(in_features=self.hparams.hidden_dim,
57 out_features=self.hparams.out_features)
58
59 # ---------------------
60 # TRAINING
61 # ---------------------
62 def forward(self, x):
63 """
64 No special modification required for lightning, define as you normally would
65 :param x:
66 :return:
67 """
68
69 x = self.c_d1(x)
70 x = torch.tanh(x)
71 x = self.c_d1_bn(x)
72 x = self.c_d1_drop(x)
73
74 x = self.c_d2(x)
75 logits = F.log_softmax(x, dim=1)
76
77 return logits
78
79 def loss(self, labels, logits):
80 nll = F.nll_loss(logits, labels)
81 return nll
82
83 def training_step(self, batch, batch_idx):
84 """
85 Lightning calls this inside the training loop
86 :param batch:
87 :return:
88 """
89 # forward pass
90 x, y = batch
91 x = x.view(x.size(0), -1)
92
93 y_hat = self.forward(x)
94
95 # calculate loss
96 loss_val = self.loss(y, y_hat)
97
98 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
99 if self.trainer.use_dp or self.trainer.use_ddp2:
100 loss_val = loss_val.unsqueeze(0)
101
102 tqdm_dict = {'train_loss': loss_val}
103 output = OrderedDict({
104 'loss': loss_val,
105 'progress_bar': tqdm_dict,
106 'log': tqdm_dict
107 })
108
109 # can also return just a scalar instead of a dict (return loss_val)
110 return output
111
112 def validation_step(self, batch, batch_idx):
113 """
114 Lightning calls this inside the validation loop
115 :param batch:
116 :return:
117 """
118 x, y = batch
119 x = x.view(x.size(0), -1)
120 y_hat = self.forward(x)
121
122 loss_val = self.loss(y, y_hat)
123
124 # acc
125 labels_hat = torch.argmax(y_hat, dim=1)
126 val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)
127 val_acc = torch.tensor(val_acc)
128
129 if self.on_gpu:
130 val_acc = val_acc.cuda(loss_val.device.index)
131
132 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
133 if self.trainer.use_dp or self.trainer.use_ddp2:
134 loss_val = loss_val.unsqueeze(0)
135 val_acc = val_acc.unsqueeze(0)
136
137 output = OrderedDict({
138 'val_loss': loss_val,
139 'val_acc': val_acc,
140 })
141
142 # can also return just a scalar instead of a dict (return loss_val)
143 return output
144
145 def validation_end(self, outputs):
146 """
147 Called at the end of validation to aggregate outputs
148 :param outputs: list of individual outputs of each validation step
149 :return:
150 """
151 # if returned a scalar from validation_step, outputs is a list of tensor scalars
152 # we return just the average in this case (if we want)
153 # return torch.stack(outputs).mean()
154
155 val_loss_mean = 0
156 val_acc_mean = 0
157 for output in outputs:
158 val_loss = output['val_loss']
159
160 # reduce manually when using dp
161 if self.trainer.use_dp or self.trainer.use_ddp2:
162 val_loss = torch.mean(val_loss)
163 val_loss_mean += val_loss
164
165 # reduce manually when using dp
166 val_acc = output['val_acc']
167 if self.trainer.use_dp or self.trainer.use_ddp2:
168 val_acc = torch.mean(val_acc)
169
170 val_acc_mean += val_acc
171
172 val_loss_mean /= len(outputs)
173 val_acc_mean /= len(outputs)
174 tqdm_dict = {'val_loss': val_loss_mean, 'val_acc': val_acc_mean}
175 result = {'progress_bar': tqdm_dict, 'log': tqdm_dict, 'val_loss': val_loss_mean}
176 return result
177
178 # ---------------------
179 # TRAINING SETUP
180 # ---------------------
181 def configure_optimizers(self):
182 """
183 return whatever optimizers we want here
184 :return: list of optimizers
185 """
186 optimizer = optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
187 scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
188 return [optimizer], [scheduler]
189
190 def __dataloader(self, train):
191 # init data generators
192 transform = transforms.Compose([transforms.ToTensor(),
193 transforms.Normalize((0.5,), (1.0,))])
194 dataset = MNIST(root=self.hparams.data_root, train=train,
195 transform=transform, download=False)
196
197 # when using multi-node (ddp) we need to add the datasampler
198 batch_size = self.hparams.batch_size
199
200 loader = DataLoader(
201 dataset=dataset,
202 batch_size=batch_size,
203 num_workers=0
204 )
205
206 return loader
207
208 def prepare_data(self):
209 transform = transforms.Compose([transforms.ToTensor(),
210 transforms.Normalize((0.5,), (1.0,))])
211 dataset = MNIST(root=self.hparams.data_root, train=True,
212 transform=transform, download=True)
213 dataset = MNIST(root=self.hparams.data_root, train=False,
214 transform=transform, download=True)
215
216 def train_dataloader(self):
217 log.info('Training data loader called.')
218 return self.__dataloader(train=True)
219
220 def val_dataloader(self):
221 log.info('Validation data loader called.')
222 return self.__dataloader(train=False)
223
224 def test_dataloader(self):
225 log.info('Test data loader called.')
226 return self.__dataloader(train=False)
227
228 @staticmethod
229 def add_model_specific_args(parent_parser, root_dir): # pragma: no cover
230 """
231 Parameters you define here will be available to your model through self.hparams
232 :param parent_parser:
233 :param root_dir:
234 :return:
235 """
236 parser = ArgumentParser(parents=[parent_parser])
237
238 # param overwrites
239 # parser.set_defaults(gradient_clip_val=5.0)
240
241 # network params
242 parser.add_argument('--in_features', default=28 * 28, type=int)
243 parser.add_argument('--out_features', default=10, type=int)
244 # use 500 for CPU, 50000 for GPU to see speed difference
245 parser.add_argument('--hidden_dim', default=50000, type=int)
246 parser.add_argument('--drop_prob', default=0.2, type=float)
247 parser.add_argument('--learning_rate', default=0.001, type=float)
248
249 # data
250 parser.add_argument('--data_root', default=os.path.join(root_dir, 'mnist'), type=str)
251
252 # training params (opt)
253 parser.add_argument('--optimizer_name', default='adam', type=str)
254 parser.add_argument('--batch_size', default=64, type=int)
255 return parser
256
[end of pl_examples/basic_examples/lightning_module_template.py]
[start of pl_examples/full_examples/imagenet/imagenet_example.py]
1 """
2 This example is largely adapted from https://github.com/pytorch/examples/blob/master/imagenet/main.py
3 """
4 import argparse
5 import os
6 import random
7 from collections import OrderedDict
8
9 import torch
10 import torch.backends.cudnn as cudnn
11 import torch.nn.functional as F
12 import torch.nn.parallel
13 import torch.optim as optim
14 import torch.optim.lr_scheduler as lr_scheduler
15 import torch.utils.data
16 import torch.utils.data.distributed
17 import torchvision.datasets as datasets
18 import torchvision.models as models
19 import torchvision.transforms as transforms
20
21 import pytorch_lightning as pl
22
23 # pull out resnet names from torchvision models
24 MODEL_NAMES = sorted(
25 name for name in models.__dict__
26 if name.islower() and not name.startswith("__") and callable(models.__dict__[name])
27 )
28
29
30 class ImageNetLightningModel(pl.LightningModule):
31
32 def __init__(self, hparams):
33 super(ImageNetLightningModel, self).__init__()
34 self.hparams = hparams
35 self.model = models.__dict__[self.hparams.arch](pretrained=self.hparams.pretrained)
36
37 def forward(self, x):
38 return self.model(x)
39
40 def training_step(self, batch, batch_idx):
41 images, target = batch
42 output = self.forward(images)
43 loss_val = F.cross_entropy(output, target)
44 acc1, acc5 = self.__accuracy(output, target, topk=(1, 5))
45
46 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
47 if self.trainer.use_dp or self.trainer.use_ddp2:
48 loss_val = loss_val.unsqueeze(0)
49 acc1 = acc1.unsqueeze(0)
50 acc5 = acc5.unsqueeze(0)
51
52 tqdm_dict = {'train_loss': loss_val}
53 output = OrderedDict({
54 'loss': loss_val,
55 'acc1': acc1,
56 'acc5': acc5,
57 'progress_bar': tqdm_dict,
58 'log': tqdm_dict
59 })
60
61 return output
62
63 def validation_step(self, batch, batch_idx):
64 images, target = batch
65 output = self.forward(images)
66 loss_val = F.cross_entropy(output, target)
67 acc1, acc5 = self.__accuracy(output, target, topk=(1, 5))
68
69 # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
70 if self.trainer.use_dp or self.trainer.use_ddp2:
71 loss_val = loss_val.unsqueeze(0)
72 acc1 = acc1.unsqueeze(0)
73 acc5 = acc5.unsqueeze(0)
74
75 output = OrderedDict({
76 'val_loss': loss_val,
77 'val_acc1': acc1,
78 'val_acc5': acc5,
79 })
80
81 return output
82
83 def validation_end(self, outputs):
84
85 tqdm_dict = {}
86
87 for metric_name in ["val_loss", "val_acc1", "val_acc5"]:
88 metric_total = 0
89
90 for output in outputs:
91 metric_value = output[metric_name]
92
93 # reduce manually when using dp
94 if self.trainer.use_dp or self.trainer.use_ddp2:
95 metric_value = torch.mean(metric_value)
96
97 metric_total += metric_value
98
99 tqdm_dict[metric_name] = metric_total / len(outputs)
100
101 result = {'progress_bar': tqdm_dict, 'log': tqdm_dict, 'val_loss': tqdm_dict["val_loss"]}
102 return result
103
104 @classmethod
105 def __accuracy(cls, output, target, topk=(1,)):
106 """Computes the accuracy over the k top predictions for the specified values of k"""
107 with torch.no_grad():
108 maxk = max(topk)
109 batch_size = target.size(0)
110
111 _, pred = output.topk(maxk, 1, True, True)
112 pred = pred.t()
113 correct = pred.eq(target.view(1, -1).expand_as(pred))
114
115 res = []
116 for k in topk:
117 correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
118 res.append(correct_k.mul_(100.0 / batch_size))
119 return res
120
121 def configure_optimizers(self):
122 optimizer = optim.SGD(
123 self.parameters(),
124 lr=self.hparams.lr,
125 momentum=self.hparams.momentum,
126 weight_decay=self.hparams.weight_decay
127 )
128 scheduler = lr_scheduler.ExponentialLR(optimizer, gamma=0.1)
129 return [optimizer], [scheduler]
130
131 @pl.data_loader
132 def train_dataloader(self):
133 normalize = transforms.Normalize(
134 mean=[0.485, 0.456, 0.406],
135 std=[0.229, 0.224, 0.225],
136 )
137
138 train_dir = os.path.join(self.hparams.data_path, 'train')
139 train_dataset = datasets.ImageFolder(
140 train_dir,
141 transforms.Compose([
142 transforms.RandomResizedCrop(224),
143 transforms.RandomHorizontalFlip(),
144 transforms.ToTensor(),
145 normalize,
146 ]))
147
148 if self.use_ddp:
149 train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
150 else:
151 train_sampler = None
152
153 train_loader = torch.utils.data.DataLoader(
154 dataset=train_dataset,
155 batch_size=self.hparams.batch_size,
156 shuffle=(train_sampler is None),
157 num_workers=0,
158 sampler=train_sampler
159 )
160 return train_loader
161
162 @pl.data_loader
163 def val_dataloader(self):
164 normalize = transforms.Normalize(
165 mean=[0.485, 0.456, 0.406],
166 std=[0.229, 0.224, 0.225],
167 )
168 val_dir = os.path.join(self.hparams.data_path, 'val')
169 val_loader = torch.utils.data.DataLoader(
170 datasets.ImageFolder(val_dir, transforms.Compose([
171 transforms.Resize(256),
172 transforms.CenterCrop(224),
173 transforms.ToTensor(),
174 normalize,
175 ])),
176 batch_size=self.hparams.batch_size,
177 shuffle=False,
178 num_workers=0,
179 )
180 return val_loader
181
182 @staticmethod
183 def add_model_specific_args(parent_parser): # pragma: no cover
184 parser = argparse.ArgumentParser(parents=[parent_parser])
185 parser.add_argument('-a', '--arch', metavar='ARCH', default='resnet18', choices=MODEL_NAMES,
186 help='model architecture: ' +
187 ' | '.join(MODEL_NAMES) +
188 ' (default: resnet18)')
189 parser.add_argument('--epochs', default=90, type=int, metavar='N',
190 help='number of total epochs to run')
191 parser.add_argument('--seed', type=int, default=42,
192 help='seed for initializing training. ')
193 parser.add_argument('-b', '--batch-size', default=256, type=int,
194 metavar='N',
195 help='mini-batch size (default: 256), this is the total '
196 'batch size of all GPUs on the current node when '
197 'using Data Parallel or Distributed Data Parallel')
198 parser.add_argument('--lr', '--learning-rate', default=0.1, type=float,
199 metavar='LR', help='initial learning rate', dest='lr')
200 parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
201 help='momentum')
202 parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,
203 metavar='W', help='weight decay (default: 1e-4)',
204 dest='weight_decay')
205 parser.add_argument('--pretrained', dest='pretrained', action='store_true',
206 help='use pre-trained model')
207 return parser
208
209
210 def get_args():
211 parent_parser = argparse.ArgumentParser(add_help=False)
212 parent_parser.add_argument('--data-path', metavar='DIR', type=str,
213 help='path to dataset')
214 parent_parser.add_argument('--save-path', metavar='DIR', default=".", type=str,
215 help='path to save output')
216 parent_parser.add_argument('--gpus', type=int, default=1,
217 help='how many gpus')
218 parent_parser.add_argument('--distributed-backend', type=str, default='dp', choices=('dp', 'ddp', 'ddp2'),
219 help='supports three options dp, ddp, ddp2')
220 parent_parser.add_argument('--use-16bit', dest='use_16bit', action='store_true',
221 help='if true uses 16 bit precision')
222 parent_parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
223 help='evaluate model on validation set')
224
225 parser = ImageNetLightningModel.add_model_specific_args(parent_parser)
226 return parser.parse_args()
227
228
229 def main(hparams):
230 model = ImageNetLightningModel(hparams)
231 if hparams.seed is not None:
232 random.seed(hparams.seed)
233 torch.manual_seed(hparams.seed)
234 cudnn.deterministic = True
235 trainer = pl.Trainer(
236 default_save_path=hparams.save_path,
237 gpus=hparams.gpus,
238 max_epochs=hparams.epochs,
239 distributed_backend=hparams.distributed_backend,
240 use_amp=hparams.use_16bit
241 )
242 if hparams.evaluate:
243 trainer.run_evaluation()
244 else:
245 trainer.fit(model)
246
247
248 if __name__ == '__main__':
249 main(get_args())
250
[end of pl_examples/full_examples/imagenet/imagenet_example.py]
[start of pytorch_lightning/core/__init__.py]
1 """
2 A LightningModule is a strict superclass of torch.nn.Module but provides an interface to standardize
3 the "ingredients" for a research or production system.
4
5 - The model/system definition (__init__)
6 - The model/system computations (forward)
7 - What happens in the training loop (training_step, training_end)
8 - What happens in the validation loop (validation_step, validation_end)
9 - What happens in the test loop (test_step, test_end)
10 - What optimizers to use (configure_optimizers)
11 - What data to use (train_dataloader, val_dataloader, test_dataloader)
12
13 Most methods are optional. Here's a minimal example.
14
15 .. code-block:: python
16
17 import os
18 import torch
19 from torch.nn import functional as F
20 from torch.utils.data import DataLoader
21 from torchvision.datasets import MNIST
22 import torchvision.transforms as transforms
23
24 import pytorch_lightning as pl
25
26 class CoolModel(pl.LightningModule):
27
28 def __init__(self):
29 super(CoolModel, self).__init__()
30 self.l1 = torch.nn.Linear(28 * 28, 10)
31
32 def forward(self, x):
33 return torch.relu(self.l1(x.view(x.size(0), -1)))
34
35 def training_step(self, batch, batch_idx):
36 x, y = batch
37 y_hat = self.forward(x)
38 return {'loss': F.cross_entropy(y_hat, y)}
39
40 def validation_step(self, batch, batch_idx):
41 # OPTIONAL
42 x, y = batch
43 y_hat = self.forward(x)
44 return {'val_loss': F.cross_entropy(y_hat, y)}
45
46 def validation_end(self, outputs):
47 # OPTIONAL
48 val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean()
49 return {'val_loss': val_loss_mean}
50
51 def test_step(self, batch, batch_idx):
52 # OPTIONAL
53 x, y = batch
54 y_hat = self.forward(x)
55 return {'test_loss': F.cross_entropy(y_hat, y)}
56
57 def test_end(self, outputs):
58 # OPTIONAL
59 test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean()
60 return {'test_loss': test_loss_mean}
61
62 def configure_optimizers(self):
63 # REQUIRED
64 return torch.optim.Adam(self.parameters(), lr=0.02)
65
66 @pl.data_loader
67 def train_dataloader(self):
68 return DataLoader(MNIST(os.getcwd(), train=True, download=True,
69 transform=transforms.ToTensor()), batch_size=32)
70
71 @pl.data_loader
72 def val_dataloader(self):
73 # OPTIONAL
74 # can also return a list of val dataloaders
75 return DataLoader(MNIST(os.getcwd(), train=True, download=True,
76 transform=transforms.ToTensor()), batch_size=32)
77
78 @pl.data_loader
79 def test_dataloader(self):
80 # OPTIONAL
81 # can also return a list of test dataloaders
82 return DataLoader(MNIST(os.getcwd(), train=False, download=True,
83 transform=transforms.ToTensor()), batch_size=32)
84
85 Once you've defined the LightningModule, fit it using a trainer.
86
87 .. code-block:: python
88
89 trainer = pl.Trainer()
90 model = CoolModel()
91
92 trainer.fit(model)
93
94 Check out this
95 `COLAB <https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg>`_
96 for a live demo.
97
98 """
99
100 from .decorators import data_loader
101 from .lightning import LightningModule
102
103 __all__ = ['LightningModule', 'data_loader']
104 # __call__ = __all__
105
[end of pytorch_lightning/core/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Lightning-AI/lightning
|
985408413643a602e0e4443862c8b64cf9d0d38e
|
Relax hparams in model saving/loading
I've managed to train a model using pl.fit(model) and have the .ckpt file. Now, I'm trying to load the .ckpt file so that I can do inference on a single image:
```
model = CoolSystem()
to_infer = torch.load('checkpoints/try_ckpt_epoch_1_v0.ckpt')
model.load_from_checkpoint(to_infer) # ------------- error is thrown at this line
```
However, upon loading the .ckpt file, the following error is thrown:
```
AttributeError: 'dict' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
```
**Am I doing something wrong when using PyTorch Lightning for inference?**
For reference, this is my system:
```
import pytorch_lightning as pl
import os
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn.functional as F
class CoolSystem(pl.LightningModule):

    def __init__(self):
        super(CoolSystem, self).__init__()
        # self.hparams = hparams
        self.data_dir = '/content/hymenoptera_data'
        self.model = torchvision.models.resnet18(pretrained=True)  # final layer is of size [bs, 1000]
        num_ftrs = self.model.fc.in_features
        self.model.fc = torch.nn.Linear(num_ftrs, 2)  # change final layer to be of size [bs, 2]

    def forward(self, x):
        x = self.model(x)
        return x

    def configure_optimizers(self):
        # Observe that all parameters are being optimized
        optimizer = torch.optim.SGD(self.model.parameters(), lr=0.001, momentum=0.9)
        # Decay LR by a factor of 0.1 every 7 epochs
        exp_lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
        return [optimizer], [exp_lr_scheduler]

    def training_step(self, batch, batch_idx):
        # REQUIRED
        x, y = batch
        y_hat = self.forward(x)
        loss = F.cross_entropy(y_hat, y)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def validation_step(self, batch, batch_idx):
        # OPTIONAL
        x, y = batch
        y_hat = self.forward(x)
        return {'val_loss': F.cross_entropy(y_hat, y)}

    def validation_end(self, outputs):
        # OPTIONAL
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        tensorboard_logs = {'val_loss': avg_loss}
        return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}

    @pl.data_loader
    def train_dataloader(self):
        # REQUIRED
        transform = transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        train_set = torchvision.datasets.ImageFolder(os.path.join(self.data_dir, 'train'), transform)
        train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
        return train_loader

    @pl.data_loader
    def val_dataloader(self):
        transform = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        val_set = torchvision.datasets.ImageFolder(os.path.join(self.data_dir, 'val'), transform)
        val_loader = torch.utils.data.DataLoader(val_set, batch_size=32, shuffle=True, num_workers=4)
        return val_loader
```
And I'm training it this way:
```
model = CoolSystem()
import os
checkpoint_callback = pl.callbacks.ModelCheckpoint(
    filepath=os.path.join(os.getcwd(), 'checkpoints'),
    verbose=True,
    monitor='val_loss',
    mode='min',
    prefix='try',
    save_top_k=-1,
    period=1  # check val_loss every n periods, and saves the checkpoint if it is better than the val_loss at the previous period
)
trainer = pl.Trainer(
    max_epochs=2,
    checkpoint_callback=checkpoint_callback)
trainer.fit(model)
```
|
Hey, thanks for your contribution! Great first issue!
Have not tested it, but I think it should be
`model.load_from_checkpoint('checkpoints/try_ckpt_epoch_1_v0.ckpt')`
(the method takes a string).
See docs:
https://pytorch-lightning.readthedocs.io/en/0.6.0/pytorch_lightning.core.html#pytorch_lightning.core.LightningModule.load_from_checkpoint
After trying `model.load_from_checkpoint('checkpoints/try_ckpt_epoch_1_v0.ckpt')`, the following error is now thrown:
```
OSError: Checkpoint does not contain hyperparameters. Are your model hyperparameters storedin self.hparams?
```
I built `CoolSystem()` without self.hparams, as per the example Colab notebook (https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg)
Any advice on this?
I guess it should be added to that example. The GAN example has it.
Add the hparams to the `__init__`, train it, and then try to load again.
Looks like it is always needed, even if you don't pass any hparams in.
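A minimal sketch of that workaround against the 0.6.x-era API (the hyperparameter names below are placeholders, not values from this thread):
```
from argparse import Namespace

import pytorch_lightning as pl


class CoolSystem(pl.LightningModule):
    def __init__(self, hparams):
        super(CoolSystem, self).__init__()
        self.hparams = hparams  # stored hparams are written into every checkpoint
        # ... rest of the model definition stays the same ...


# placeholder hyperparameters, just so something gets stored in the checkpoint
hparams = Namespace(batch_size=32, lr=0.001)
model = CoolSystem(hparams)
# ... trainer.fit(model) ...

# reload from the checkpoint *path* (a string), not from a pre-loaded dict
model = CoolSystem.load_from_checkpoint('checkpoints/try_ckpt_epoch_1_v0.ckpt')
model.eval()
```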
Got it! Will take note to always add hparams to `__init__` then
@awaelchli mind submitting a PR to fix?
i think the point was for hparams to be optional? or should we make it more flexible? @neggert
I can look at it.
To make it optional, I guess we could simply change the loading behaviour depending on whether the user has defined hparams or not.
I will hold back until #849 is finalized because it affects ModelCheckpoint callback.
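A minimal sketch of that idea, i.e. branching on whether the user's `__init__` accepts `hparams` at all (illustrative names only, not the final implementation):
```
import inspect
from argparse import Namespace


def restore_model(cls, checkpoint):
    """Build `cls` from a checkpoint dict, passing hparams only if __init__ accepts them."""
    takes_hparams = 'hparams' in inspect.signature(cls.__init__).parameters
    ckpt_hparams = checkpoint.get('hparams')

    if takes_hparams:
        model = cls(Namespace(**(ckpt_hparams or {})))
    else:
        model = cls()

    model.load_state_dict(checkpoint['state_dict'])
    return model
```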
|
2020-02-23T15:20:50Z
|
<patch>
diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py
--- a/pytorch_lightning/core/lightning.py
+++ b/pytorch_lightning/core/lightning.py
@@ -1,4 +1,5 @@
import collections
+import inspect
import logging as log
import csv
import os
@@ -15,6 +16,7 @@
from pytorch_lightning.core.saving import ModelIO
from pytorch_lightning.core.memory import ModelSummary
from pytorch_lightning.overrides.data_parallel import LightningDistributedDataParallel
+from pytorch_lightning.utilities.debugging import MisconfigurationException
try:
import torch_xla.core.xla_model as xm
@@ -1111,13 +1113,10 @@ def load_from_metrics(cls, weights_path, tags_csv, map_location=None):
else:
checkpoint = torch.load(weights_path, map_location=lambda storage, loc: storage)
- # load the state_dict on the model automatically
- model = cls(hparams)
- model.load_state_dict(checkpoint['state_dict'])
-
- # give model a chance to load something
- model.on_load_checkpoint(checkpoint)
+ # add the hparams from csv file to checkpoint
+ checkpoint['hparams'] = vars(hparams)
+ model = cls._load_model_state(checkpoint)
return model
@classmethod
@@ -1182,17 +1181,36 @@ def __init__(self, hparams):
else:
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
- try:
- ckpt_hparams = checkpoint['hparams']
- except KeyError:
- raise IOError(
- "Checkpoint does not contain hyperparameters. Are your model hyperparameters stored"
- "in self.hparams?"
- )
- hparams = Namespace(**ckpt_hparams)
+ model = cls._load_model_state(checkpoint)
+ return model
+
+ @classmethod
+ def _load_model_state(cls, checkpoint):
+ cls_takes_hparams = 'hparams' in inspect.signature(cls.__init__).parameters
+ ckpt_hparams = checkpoint.get('hparams')
+
+ if cls_takes_hparams:
+ if ckpt_hparams is not None:
+ hparams = Namespace(**ckpt_hparams)
+ else:
+ warnings.warn(
+ f"Checkpoint does not contain hyperparameters but {cls.__name__}'s __init__ contains"
+ " argument 'hparams'. Will pass in an empty Namespace instead."
+ " Did you forget to store your model hyperparameters in self.hparams?"
+ )
+ hparams = Namespace()
+ else: # The user's LightningModule does not define a hparams argument
+ if ckpt_hparams is None:
+ hparams = None
+ else:
+ raise MisconfigurationException(
+ f"Checkpoint contains hyperparameters but {cls.__name__}'s __init__ is missing the"
+ " argument 'hparams'. Are you loading the correct checkpoint?"
+ )
# load the state_dict on the model automatically
- model = cls(hparams)
+ model_args = [hparams] if hparams else []
+ model = cls(*model_args)
model.load_state_dict(checkpoint['state_dict'])
# give model a chance to load something
</patch>
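A hedged sketch of what the patched loading path allows, assuming the diff above is applied (illustrative class and checkpoint path):
```
import pytorch_lightning as pl


class NoHparamsModel(pl.LightningModule):
    def __init__(self):  # note: no `hparams` argument
        super(NoHparamsModel, self).__init__()
        # ... layers ...


# With the patch, this no longer raises the "Checkpoint does not contain
# hyperparameters" IOError; a MisconfigurationException is raised only when the
# checkpoint *does* carry hparams that this class cannot accept.
model = NoHparamsModel.load_from_checkpoint('checkpoints/try_ckpt_epoch_1_v0.ckpt')
```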
|
[]
|
[]
| |||
ytdl-org__youtube-dl-14716
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot download certain Cartoon Network videos?
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.10.29*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.10.29**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['http://www.cartoonnetwork.com/video/regularshow/free-cake-episode.html', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.10.29
[debug] Python version 3.4.4 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-62162-gec8789a, ffprobe 3.2.4, rtmpdump 2.4
[debug] Proxy map: {}
[CartoonNetwork] free-cake: Downloading webpage
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading f4m manifest
WARNING: Unable to download f4m manifest: HTTP Error 403: Forbidden
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML
ERROR: UNKNOWN
Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\build\youtube_dl\YoutubeDL.py", line 784, in extract_info
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\build\youtube_dl\extractor\common.py", line 434, in extract
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\build\youtube_dl\extractor\cartoonnetwork.py", line 41, in _real_extract
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\build\youtube_dl\extractor\turner.py", line 84, in _extract_cvp_info
youtube_dl.utils.ExtractorError: UNKNOWN
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.cartoonnetwork.com/video/regularshow/free-cake-episode.html
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
I'm trying to download a particular video and for some reason I can't. Other Cartoon Network videos work just fine, but this series(?) doesn't seem to work. I'm not sure why some work, but some don't. I'm probably missing something... Help please?
</issue>
<code>
[start of README.md]
1 [](https://travis-ci.org/rg3/youtube-dl)
2
3 youtube-dl - download videos from youtube.com or other video platforms
4
5 - [INSTALLATION](#installation)
6 - [DESCRIPTION](#description)
7 - [OPTIONS](#options)
8 - [CONFIGURATION](#configuration)
9 - [OUTPUT TEMPLATE](#output-template)
10 - [FORMAT SELECTION](#format-selection)
11 - [VIDEO SELECTION](#video-selection)
12 - [FAQ](#faq)
13 - [DEVELOPER INSTRUCTIONS](#developer-instructions)
14 - [EMBEDDING YOUTUBE-DL](#embedding-youtube-dl)
15 - [BUGS](#bugs)
16 - [COPYRIGHT](#copyright)
17
18 # INSTALLATION
19
20 To install it right away for all UNIX users (Linux, OS X, etc.), type:
21
22 sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
23 sudo chmod a+rx /usr/local/bin/youtube-dl
24
25 If you do not have curl, you can alternatively use a recent wget:
26
27 sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
28 sudo chmod a+rx /usr/local/bin/youtube-dl
29
30 Windows users can [download an .exe file](https://yt-dl.org/latest/youtube-dl.exe) and place it in any location on their [PATH](https://en.wikipedia.org/wiki/PATH_%28variable%29) except for `%SYSTEMROOT%\System32` (e.g. **do not** put in `C:\Windows\System32`).
31
32 You can also use pip:
33
34 sudo -H pip install --upgrade youtube-dl
35
36 This command will update youtube-dl if you have already installed it. See the [pypi page](https://pypi.python.org/pypi/youtube_dl) for more information.
37
38 OS X users can install youtube-dl with [Homebrew](https://brew.sh/):
39
40 brew install youtube-dl
41
42 Or with [MacPorts](https://www.macports.org/):
43
44 sudo port install youtube-dl
45
46 Alternatively, refer to the [developer instructions](#developer-instructions) for how to check out and work with the git repository. For further options, including PGP signatures, see the [youtube-dl Download Page](https://rg3.github.io/youtube-dl/download.html).
47
48 # DESCRIPTION
49 **youtube-dl** is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on Mac OS X. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.
50
51 youtube-dl [OPTIONS] URL [URL...]
52
53 # OPTIONS
54 -h, --help Print this help text and exit
55 --version Print program version and exit
56 -U, --update Update this program to latest version. Make
57 sure that you have sufficient permissions
58 (run with sudo if needed)
59 -i, --ignore-errors Continue on download errors, for example to
60 skip unavailable videos in a playlist
61 --abort-on-error Abort downloading of further videos (in the
62 playlist or the command line) if an error
63 occurs
64 --dump-user-agent Display the current browser identification
65 --list-extractors List all supported extractors
66 --extractor-descriptions Output descriptions of all supported
67 extractors
68 --force-generic-extractor Force extraction to use the generic
69 extractor
70 --default-search PREFIX Use this prefix for unqualified URLs. For
71 example "gvsearch2:" downloads two videos
72 from google videos for youtube-dl "large
73 apple". Use the value "auto" to let
74 youtube-dl guess ("auto_warning" to emit a
75 warning when guessing). "error" just throws
76 an error. The default value "fixup_error"
77 repairs broken URLs, but emits an error if
78 this is not possible instead of searching.
79 --ignore-config Do not read configuration files. When given
80 in the global configuration file
81 /etc/youtube-dl.conf: Do not read the user
82 configuration in ~/.config/youtube-
83 dl/config (%APPDATA%/youtube-dl/config.txt
84 on Windows)
85 --config-location PATH Location of the configuration file; either
86 the path to the config or its containing
87 directory.
88 --flat-playlist Do not extract the videos of a playlist,
89 only list them.
90 --mark-watched Mark videos watched (YouTube only)
91 --no-mark-watched Do not mark videos watched (YouTube only)
92 --no-color Do not emit color codes in output
93
94 ## Network Options:
95 --proxy URL Use the specified HTTP/HTTPS/SOCKS proxy.
96 To enable experimental SOCKS proxy, specify
97 a proper scheme. For example
98 socks5://127.0.0.1:1080/. Pass in an empty
99 string (--proxy "") for direct connection
100 --socket-timeout SECONDS Time to wait before giving up, in seconds
101 --source-address IP Client-side IP address to bind to
102 -4, --force-ipv4 Make all connections via IPv4
103 -6, --force-ipv6 Make all connections via IPv6
104
105 ## Geo Restriction:
106 --geo-verification-proxy URL Use this proxy to verify the IP address for
107 some geo-restricted sites. The default
108 proxy specified by --proxy (or none, if the
109 options is not present) is used for the
110 actual downloading.
111 --geo-bypass Bypass geographic restriction via faking
112 X-Forwarded-For HTTP header (experimental)
113 --no-geo-bypass Do not bypass geographic restriction via
114 faking X-Forwarded-For HTTP header
115 (experimental)
116 --geo-bypass-country CODE Force bypass geographic restriction with
117 explicitly provided two-letter ISO 3166-2
118 country code (experimental)
119
120 ## Video Selection:
121 --playlist-start NUMBER Playlist video to start at (default is 1)
122 --playlist-end NUMBER Playlist video to end at (default is last)
123 --playlist-items ITEM_SPEC Playlist video items to download. Specify
124 indices of the videos in the playlist
125 separated by commas like: "--playlist-items
126 1,2,5,8" if you want to download videos
127 indexed 1, 2, 5, 8 in the playlist. You can
128 specify range: "--playlist-items
129 1-3,7,10-13", it will download the videos
130 at index 1, 2, 3, 7, 10, 11, 12 and 13.
131 --match-title REGEX Download only matching titles (regex or
132 caseless sub-string)
133 --reject-title REGEX Skip download for matching titles (regex or
134 caseless sub-string)
135 --max-downloads NUMBER Abort after downloading NUMBER files
136 --min-filesize SIZE Do not download any videos smaller than
137 SIZE (e.g. 50k or 44.6m)
138 --max-filesize SIZE Do not download any videos larger than SIZE
139 (e.g. 50k or 44.6m)
140 --date DATE Download only videos uploaded in this date
141 --datebefore DATE Download only videos uploaded on or before
142 this date (i.e. inclusive)
143 --dateafter DATE Download only videos uploaded on or after
144 this date (i.e. inclusive)
145 --min-views COUNT Do not download any videos with less than
146 COUNT views
147 --max-views COUNT Do not download any videos with more than
148 COUNT views
149 --match-filter FILTER Generic video filter. Specify any key (see
150 the "OUTPUT TEMPLATE" for a list of
151 available keys) to match if the key is
152 present, !key to check if the key is not
153 present, key > NUMBER (like "comment_count
154 > 12", also works with >=, <, <=, !=, =) to
155 compare against a number, key = 'LITERAL'
156 (like "uploader = 'Mike Smith'", also works
157 with !=) to match against a string literal
158 and & to require multiple matches. Values
159 which are not known are excluded unless you
160 put a question mark (?) after the operator.
161 For example, to only match videos that have
162 been liked more than 100 times and disliked
163 less than 50 times (or the dislike
164 functionality is not available at the given
165 service), but who also have a description,
166 use --match-filter "like_count > 100 &
167 dislike_count <? 50 & description" .
168 --no-playlist Download only the video, if the URL refers
169 to a video and a playlist.
170 --yes-playlist Download the playlist, if the URL refers to
171 a video and a playlist.
172 --age-limit YEARS Download only videos suitable for the given
173 age
174 --download-archive FILE Download only videos not listed in the
175 archive file. Record the IDs of all
176 downloaded videos in it.
177 --include-ads Download advertisements as well
178 (experimental)
179
180 ## Download Options:
181 -r, --limit-rate RATE Maximum download rate in bytes per second
182 (e.g. 50K or 4.2M)
183 -R, --retries RETRIES Number of retries (default is 10), or
184 "infinite".
185 --fragment-retries RETRIES Number of retries for a fragment (default
186 is 10), or "infinite" (DASH, hlsnative and
187 ISM)
188 --skip-unavailable-fragments Skip unavailable fragments (DASH, hlsnative
189 and ISM)
190 --abort-on-unavailable-fragment Abort downloading when some fragment is not
191 available
192 --keep-fragments Keep downloaded fragments on disk after
193 downloading is finished; fragments are
194 erased by default
195 --buffer-size SIZE Size of download buffer (e.g. 1024 or 16K)
196 (default is 1024)
197 --no-resize-buffer Do not automatically adjust the buffer
198 size. By default, the buffer size is
199 automatically resized from an initial value
200 of SIZE.
201 --playlist-reverse Download playlist videos in reverse order
202 --playlist-random Download playlist videos in random order
203 --xattr-set-filesize Set file xattribute ytdl.filesize with
204 expected file size (experimental)
205 --hls-prefer-native Use the native HLS downloader instead of
206 ffmpeg
207 --hls-prefer-ffmpeg Use ffmpeg instead of the native HLS
208 downloader
209 --hls-use-mpegts Use the mpegts container for HLS videos,
210 allowing to play the video while
211 downloading (some players may not be able
212 to play it)
213 --external-downloader COMMAND Use the specified external downloader.
214 Currently supports
215 aria2c,avconv,axel,curl,ffmpeg,httpie,wget
216 --external-downloader-args ARGS Give these arguments to the external
217 downloader
218
219 ## Filesystem Options:
220 -a, --batch-file FILE File containing URLs to download ('-' for
221 stdin)
222 --id Use only video ID in file name
223 -o, --output TEMPLATE Output filename template, see the "OUTPUT
224 TEMPLATE" for all the info
225 --autonumber-start NUMBER Specify the start value for %(autonumber)s
226 (default is 1)
227 --restrict-filenames Restrict filenames to only ASCII
228 characters, and avoid "&" and spaces in
229 filenames
230 -w, --no-overwrites Do not overwrite files
231 -c, --continue Force resume of partially downloaded files.
232 By default, youtube-dl will resume
233 downloads if possible.
234 --no-continue Do not resume partially downloaded files
235 (restart from beginning)
236 --no-part Do not use .part files - write directly
237 into output file
238 --no-mtime Do not use the Last-modified header to set
239 the file modification time
240 --write-description Write video description to a .description
241 file
242 --write-info-json Write video metadata to a .info.json file
243 --write-annotations Write video annotations to a
244 .annotations.xml file
245 --load-info-json FILE JSON file containing the video information
246 (created with the "--write-info-json"
247 option)
248 --cookies FILE File to read cookies from and dump cookie
249 jar in
250 --cache-dir DIR Location in the filesystem where youtube-dl
251 can store some downloaded information
252 permanently. By default
253 $XDG_CACHE_HOME/youtube-dl or
254 ~/.cache/youtube-dl . At the moment, only
255 YouTube player files (for videos with
256 obfuscated signatures) are cached, but that
257 may change.
258 --no-cache-dir Disable filesystem caching
259 --rm-cache-dir Delete all filesystem cache files
260
261 ## Thumbnail images:
262 --write-thumbnail Write thumbnail image to disk
263 --write-all-thumbnails Write all thumbnail image formats to disk
264 --list-thumbnails Simulate and list all available thumbnail
265 formats
266
267 ## Verbosity / Simulation Options:
268 -q, --quiet Activate quiet mode
269 --no-warnings Ignore warnings
270 -s, --simulate Do not download the video and do not write
271 anything to disk
272 --skip-download Do not download the video
273 -g, --get-url Simulate, quiet but print URL
274 -e, --get-title Simulate, quiet but print title
275 --get-id Simulate, quiet but print id
276 --get-thumbnail Simulate, quiet but print thumbnail URL
277 --get-description Simulate, quiet but print video description
278 --get-duration Simulate, quiet but print video length
279 --get-filename Simulate, quiet but print output filename
280 --get-format Simulate, quiet but print output format
281 -j, --dump-json Simulate, quiet but print JSON information.
282 See the "OUTPUT TEMPLATE" for a description
283 of available keys.
284 -J, --dump-single-json Simulate, quiet but print JSON information
285 for each command-line argument. If the URL
286 refers to a playlist, dump the whole
287 playlist information in a single line.
288 --print-json Be quiet and print the video information as
289 JSON (video is still being downloaded).
290 --newline Output progress bar as new lines
291 --no-progress Do not print progress bar
292 --console-title Display progress in console titlebar
293 -v, --verbose Print various debugging information
294 --dump-pages Print downloaded pages encoded using base64
295 to debug problems (very verbose)
296 --write-pages Write downloaded intermediary pages to
297 files in the current directory to debug
298 problems
299 --print-traffic Display sent and read HTTP traffic
300 -C, --call-home Contact the youtube-dl server for debugging
301 --no-call-home Do NOT contact the youtube-dl server for
302 debugging
303
304 ## Workarounds:
305 --encoding ENCODING Force the specified encoding (experimental)
306 --no-check-certificate Suppress HTTPS certificate validation
307 --prefer-insecure Use an unencrypted connection to retrieve
308 information about the video. (Currently
309 supported only for YouTube)
310 --user-agent UA Specify a custom user agent
311 --referer URL Specify a custom referer, use if the video
312 access is restricted to one domain
313 --add-header FIELD:VALUE Specify a custom HTTP header and its value,
314 separated by a colon ':'. You can use this
315 option multiple times
316 --bidi-workaround Work around terminals that lack
317 bidirectional text support. Requires bidiv
318 or fribidi executable in PATH
319 --sleep-interval SECONDS Number of seconds to sleep before each
320 download when used alone or a lower bound
321 of a range for randomized sleep before each
322 download (minimum possible number of
323 seconds to sleep) when used along with
324 --max-sleep-interval.
325 --max-sleep-interval SECONDS Upper bound of a range for randomized sleep
326 before each download (maximum possible
327 number of seconds to sleep). Must only be
328 used along with --min-sleep-interval.
329
330 ## Video Format Options:
331 -f, --format FORMAT Video format code, see the "FORMAT
332 SELECTION" for all the info
333 --all-formats Download all available video formats
334 --prefer-free-formats Prefer free video formats unless a specific
335 one is requested
336 -F, --list-formats List all available formats of requested
337 videos
338 --youtube-skip-dash-manifest Do not download the DASH manifests and
339 related data on YouTube videos
340 --merge-output-format FORMAT If a merge is required (e.g.
341 bestvideo+bestaudio), output to given
342 container format. One of mkv, mp4, ogg,
343 webm, flv. Ignored if no merge is required
344
345 ## Subtitle Options:
346 --write-sub Write subtitle file
347 --write-auto-sub Write automatically generated subtitle file
348 (YouTube only)
349 --all-subs Download all the available subtitles of the
350 video
351 --list-subs List all available subtitles for the video
352 --sub-format FORMAT Subtitle format, accepts formats
353 preference, for example: "srt" or
354 "ass/srt/best"
355 --sub-lang LANGS Languages of the subtitles to download
356 (optional) separated by commas, use --list-
357 subs for available language tags
358
359 ## Authentication Options:
360 -u, --username USERNAME Login with this account ID
361 -p, --password PASSWORD Account password. If this option is left
362 out, youtube-dl will ask interactively.
363 -2, --twofactor TWOFACTOR Two-factor authentication code
364 -n, --netrc Use .netrc authentication data
365 --video-password PASSWORD Video password (vimeo, smotri, youku)
366
367 ## Adobe Pass Options:
368 --ap-mso MSO Adobe Pass multiple-system operator (TV
369 provider) identifier, use --ap-list-mso for
370 a list of available MSOs
371 --ap-username USERNAME Multiple-system operator account login
372 --ap-password PASSWORD Multiple-system operator account password.
373 If this option is left out, youtube-dl will
374 ask interactively.
375 --ap-list-mso List all supported multiple-system
376 operators
377
378 ## Post-processing Options:
379 -x, --extract-audio Convert video files to audio-only files
380 (requires ffmpeg or avconv and ffprobe or
381 avprobe)
382 --audio-format FORMAT Specify audio format: "best", "aac",
383 "flac", "mp3", "m4a", "opus", "vorbis", or
384 "wav"; "best" by default; No effect without
385 -x
386 --audio-quality QUALITY Specify ffmpeg/avconv audio quality, insert
387 a value between 0 (better) and 9 (worse)
388 for VBR or a specific bitrate like 128K
389 (default 5)
390 --recode-video FORMAT Encode the video to another format if
391 necessary (currently supported:
392 mp4|flv|ogg|webm|mkv|avi)
393 --postprocessor-args ARGS Give these arguments to the postprocessor
394 -k, --keep-video Keep the video file on disk after the post-
395 processing; the video is erased by default
396 --no-post-overwrites Do not overwrite post-processed files; the
397 post-processed files are overwritten by
398 default
399 --embed-subs Embed subtitles in the video (only for mp4,
400 webm and mkv videos)
401 --embed-thumbnail Embed thumbnail in the audio as cover art
402 --add-metadata Write metadata to the video file
403 --metadata-from-title FORMAT Parse additional metadata like song title /
404 artist from the video title. The format
405 syntax is the same as --output. Regular
406 expression with named capture groups may
407 also be used. The parsed parameters replace
408 existing values. Example: --metadata-from-
409 title "%(artist)s - %(title)s" matches a
410 title like "Coldplay - Paradise". Example
411 (regex): --metadata-from-title
412 "(?P<artist>.+?) - (?P<title>.+)"
413 --xattrs Write metadata to the video file's xattrs
414 (using dublin core and xdg standards)
415 --fixup POLICY Automatically correct known faults of the
416 file. One of never (do nothing), warn (only
417 emit a warning), detect_or_warn (the
418 default; fix file if we can, warn
419 otherwise)
420 --prefer-avconv Prefer avconv over ffmpeg for running the
421 postprocessors (default)
422 --prefer-ffmpeg Prefer ffmpeg over avconv for running the
423 postprocessors
424 --ffmpeg-location PATH Location of the ffmpeg/avconv binary;
425 either the path to the binary or its
426 containing directory.
427 --exec CMD Execute a command on the file after
428 downloading, similar to find's -exec
429 syntax. Example: --exec 'adb push {}
430 /sdcard/Music/ && rm {}'
431 --convert-subs FORMAT Convert the subtitles to other format
432 (currently supported: srt|ass|vtt|lrc)
433
434 # CONFIGURATION
435
436 You can configure youtube-dl by placing any supported command line option to a configuration file. On Linux and OS X, the system wide configuration file is located at `/etc/youtube-dl.conf` and the user wide configuration file at `~/.config/youtube-dl/config`. On Windows, the user wide configuration file locations are `%APPDATA%\youtube-dl\config.txt` or `C:\Users\<user name>\youtube-dl.conf`. Note that by default configuration file may not exist so you may need to create it yourself.
437
438 For example, with the following configuration file youtube-dl will always extract the audio, not copy the mtime, use a proxy and save all videos under `Movies` directory in your home directory:
439 ```
440 # Lines starting with # are comments
441
442 # Always extract audio
443 -x
444
445 # Do not copy the mtime
446 --no-mtime
447
448 # Use this proxy
449 --proxy 127.0.0.1:3128
450
451 # Save all videos under Movies directory in your home directory
452 -o ~/Movies/%(title)s.%(ext)s
453 ```
454
455 Note that options in configuration file are just the same options aka switches used in regular command line calls thus there **must be no whitespace** after `-` or `--`, e.g. `-o` or `--proxy` but not `- o` or `-- proxy`.
456
457 You can use `--ignore-config` if you want to disable the configuration file for a particular youtube-dl run.
458
459 You can also use `--config-location` if you want to use custom configuration file for a particular youtube-dl run.
460
461 ### Authentication with `.netrc` file
462
463 You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with `--username` and `--password`) in order not to pass credentials as command line arguments on every youtube-dl execution and prevent tracking plain text passwords in the shell command history. You can achieve this using a [`.netrc` file](https://stackoverflow.com/tags/.netrc/info) on a per extractor basis. For that you will need to create a `.netrc` file in your `$HOME` and restrict permissions to read/write by only you:
464 ```
465 touch $HOME/.netrc
466 chmod a-rwx,u+rw $HOME/.netrc
467 ```
468 After that you can add credentials for an extractor in the following format, where *extractor* is the name of the extractor in lowercase:
469 ```
470 machine <extractor> login <login> password <password>
471 ```
472 For example:
473 ```
474 machine youtube login [email protected] password my_youtube_password
475 machine twitch login my_twitch_account_name password my_twitch_password
476 ```
477 To activate authentication with the `.netrc` file you should pass `--netrc` to youtube-dl or place it in the [configuration file](#configuration).
478
479 On Windows you may also need to setup the `%HOME%` environment variable manually. For example:
480 ```
481 set HOME=%USERPROFILE%
482 ```
483
484 # OUTPUT TEMPLATE
485
486 The `-o` option allows users to indicate a template for the output file names.
487
488 **tl;dr:** [navigate me to examples](#output-template-examples).
489
490 The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "https://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to [python string formatting operations](https://docs.python.org/2/library/stdtypes.html#string-formatting). For example, `%(NAME)s` or `%(NAME)05d`. To clarify, that is a percent symbol followed by a name in parentheses, followed by a formatting operations. Allowed names along with sequence type are:
491
492 - `id` (string): Video identifier
493 - `title` (string): Video title
494 - `url` (string): Video URL
495 - `ext` (string): Video filename extension
496 - `alt_title` (string): A secondary title of the video
497 - `display_id` (string): An alternative identifier for the video
498 - `uploader` (string): Full name of the video uploader
499 - `license` (string): License name the video is licensed under
500 - `creator` (string): The creator of the video
501 - `release_date` (string): The date (YYYYMMDD) when the video was released
502 - `timestamp` (numeric): UNIX timestamp of the moment the video became available
503 - `upload_date` (string): Video upload date (YYYYMMDD)
504 - `uploader_id` (string): Nickname or id of the video uploader
505 - `location` (string): Physical location where the video was filmed
506 - `duration` (numeric): Length of the video in seconds
507 - `view_count` (numeric): How many users have watched the video on the platform
508 - `like_count` (numeric): Number of positive ratings of the video
509 - `dislike_count` (numeric): Number of negative ratings of the video
510 - `repost_count` (numeric): Number of reposts of the video
511 - `average_rating` (numeric): Average rating given by users, the scale used depends on the webpage
512 - `comment_count` (numeric): Number of comments on the video
513 - `age_limit` (numeric): Age restriction for the video (years)
514 - `format` (string): A human-readable description of the format
515 - `format_id` (string): Format code specified by `--format`
516 - `format_note` (string): Additional info about the format
517 - `width` (numeric): Width of the video
518 - `height` (numeric): Height of the video
519 - `resolution` (string): Textual description of width and height
520 - `tbr` (numeric): Average bitrate of audio and video in KBit/s
521 - `abr` (numeric): Average audio bitrate in KBit/s
522 - `acodec` (string): Name of the audio codec in use
523 - `asr` (numeric): Audio sampling rate in Hertz
524 - `vbr` (numeric): Average video bitrate in KBit/s
525 - `fps` (numeric): Frame rate
526 - `vcodec` (string): Name of the video codec in use
527 - `container` (string): Name of the container format
528 - `filesize` (numeric): The number of bytes, if known in advance
529 - `filesize_approx` (numeric): An estimate for the number of bytes
530 - `protocol` (string): The protocol that will be used for the actual download
531 - `extractor` (string): Name of the extractor
532 - `extractor_key` (string): Key name of the extractor
533 - `epoch` (numeric): Unix epoch when creating the file
534 - `autonumber` (numeric): Five-digit number that will be increased with each download, starting at zero
535 - `playlist` (string): Name or id of the playlist that contains the video
536 - `playlist_index` (numeric): Index of the video in the playlist padded with leading zeros according to the total length of the playlist
537 - `playlist_id` (string): Playlist identifier
538 - `playlist_title` (string): Playlist title
539
540 Available for the video that belongs to some logical chapter or section:
541
542 - `chapter` (string): Name or title of the chapter the video belongs to
543 - `chapter_number` (numeric): Number of the chapter the video belongs to
544 - `chapter_id` (string): Id of the chapter the video belongs to
545
546 Available for the video that is an episode of some series or programme:
547
548 - `series` (string): Title of the series or programme the video episode belongs to
549 - `season` (string): Title of the season the video episode belongs to
550 - `season_number` (numeric): Number of the season the video episode belongs to
551 - `season_id` (string): Id of the season the video episode belongs to
552 - `episode` (string): Title of the video episode
553 - `episode_number` (numeric): Number of the video episode within a season
554 - `episode_id` (string): Id of the video episode
555
556 Available for the media that is a track or a part of a music album:
557
558 - `track` (string): Title of the track
559 - `track_number` (numeric): Number of the track within an album or a disc
560 - `track_id` (string): Id of the track
561 - `artist` (string): Artist(s) of the track
562 - `genre` (string): Genre(s) of the track
563 - `album` (string): Title of the album the track belongs to
564 - `album_type` (string): Type of the album
565 - `album_artist` (string): List of all artists appeared on the album
566 - `disc_number` (numeric): Number of the disc or other physical medium the track belongs to
567 - `release_year` (numeric): Year (YYYY) when the album was released
568
569 Each aforementioned sequence when referenced in an output template will be replaced by the actual value corresponding to the sequence name. Note that some of the sequences are not guaranteed to be present since they depend on the metadata obtained by a particular extractor. Such sequences will be replaced with `NA`.
570
571 For example for `-o %(title)s-%(id)s.%(ext)s` and an mp4 video with title `youtube-dl test video` and id `BaW_jenozKcj`, this will result in a `youtube-dl test video-BaW_jenozKcj.mp4` file created in the current directory.
572
573 For numeric sequences you can use numeric related formatting, for example, `%(view_count)05d` will result in a string with view count padded with zeros up to 5 characters, like in `00042`.
574
575 Output templates can also contain arbitrary hierarchical path, e.g. `-o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s'` which will result in downloading each video in a directory corresponding to this path template. Any missing directory will be automatically created for you.
576
577 To use percent literals in an output template use `%%`. To output to stdout use `-o -`.
578
579 The current default template is `%(title)s-%(id)s.%(ext)s`.
580
581 In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title:
582
583 #### Output template and Windows batch files
584
585 If you are using an output template inside a Windows batch file then you must escape plain percent characters (`%`) by doubling, so that `-o "%(title)s-%(id)s.%(ext)s"` should become `-o "%%(title)s-%%(id)s.%%(ext)s"`. However you should not touch `%`'s that are not plain characters, e.g. environment variables for expansion should stay intact: `-o "C:\%HOMEPATH%\Desktop\%%(title)s.%%(ext)s"`.
586
587 #### Output template examples
588
589 Note that on Windows you may need to use double quotes instead of single.
590
591 ```bash
592 $ youtube-dl --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc
593 youtube-dl test video ''_ä↭𝕐.mp4 # All kinds of weird characters
594
595 $ youtube-dl --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc --restrict-filenames
596 youtube-dl_test_video_.mp4 # A simple file name
597
598 # Download YouTube playlist videos in separate directory indexed by video order in a playlist
599 $ youtube-dl -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re
600
601 # Download all playlists of YouTube channel/user keeping each playlist in separate directory:
602 $ youtube-dl -o '%(uploader)s/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/user/TheLinuxFoundation/playlists
603
604 # Download Udemy course keeping each chapter in separate directory under MyVideos directory in your home
605 $ youtube-dl -u user -p password -o '~/MyVideos/%(playlist)s/%(chapter_number)s - %(chapter)s/%(title)s.%(ext)s' https://www.udemy.com/java-tutorial/
606
607 # Download entire series season keeping each series and each season in separate directory under C:/MyVideos
608 $ youtube-dl -o "C:/MyVideos/%(series)s/%(season_number)s - %(season)s/%(episode_number)s - %(episode)s.%(ext)s" https://videomore.ru/kino_v_detalayah/5_sezon/367617
609
610 # Stream the video being downloaded to stdout
611 $ youtube-dl -o - BaW_jenozKc
612 ```
613
614 # FORMAT SELECTION
615
616 By default youtube-dl tries to download the best available quality, i.e. if you want the best quality you **don't need** to pass any special options, youtube-dl will guess it for you by **default**.
617
618 But sometimes you may want to download in a different format, for example when you are on a slow or intermittent connection. The key mechanism for achieving this is so-called *format selection* based on which you can explicitly specify desired format, select formats based on some criterion or criteria, setup precedence and much more.
619
620 The general syntax for format selection is `--format FORMAT` or shorter `-f FORMAT` where `FORMAT` is a *selector expression*, i.e. an expression that describes format or formats you would like to download.
621
622 **tl;dr:** [navigate me to examples](#format-selection-examples).
623
624 The simplest case is requesting a specific format, for example with `-f 22` you can download the format with format code equal to 22. You can get the list of available format codes for particular video using `--list-formats` or `-F`. Note that these format codes are extractor specific.
625
626 You can also use a file extension (currently `3gp`, `aac`, `flv`, `m4a`, `mp3`, `mp4`, `ogg`, `wav`, `webm` are supported) to download the best quality format of a particular file extension served as a single file, e.g. `-f webm` will download the best quality format with the `webm` extension served as a single file.
627
628 You can also use special names to select particular edge case formats:
629 - `best`: Select the best quality format represented by a single file with video and audio.
630 - `worst`: Select the worst quality format represented by a single file with video and audio.
631 - `bestvideo`: Select the best quality video-only format (e.g. DASH video). May not be available.
632 - `worstvideo`: Select the worst quality video-only format. May not be available.
633 - `bestaudio`: Select the best quality audio only-format. May not be available.
634 - `worstaudio`: Select the worst quality audio only-format. May not be available.
635
636 For example, to download the worst quality video-only format you can use `-f worstvideo`.
637
638 If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes. Note that slash is left-associative, i.e. formats on the left hand side are preferred, for example `-f 22/17/18` will download format 22 if it's available, otherwise it will download format 17 if it's available, otherwise it will download format 18 if it's available, otherwise it will complain that no suitable formats are available for download.
639
640 If you want to download several formats of the same video use a comma as a separator, e.g. `-f 22,17,18` will download all these three formats, of course if they are available. Or a more sophisticated example combined with the precedence feature: `-f 136/137/mp4/bestvideo,140/m4a/bestaudio`.
641
642 You can also filter the video formats by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"`).
643
644 The following numeric meta fields can be used with comparisons `<`, `<=`, `>`, `>=`, `=` (equals), `!=` (not equals):
645 - `filesize`: The number of bytes, if known in advance
646 - `width`: Width of the video, if known
647 - `height`: Height of the video, if known
648 - `tbr`: Average bitrate of audio and video in KBit/s
649 - `abr`: Average audio bitrate in KBit/s
650 - `vbr`: Average video bitrate in KBit/s
651 - `asr`: Audio sampling rate in Hertz
652 - `fps`: Frame rate
653
654 Also filtering work for comparisons `=` (equals), `!=` (not equals), `^=` (begins with), `$=` (ends with), `*=` (contains) and following string meta fields:
655 - `ext`: File extension
656 - `acodec`: Name of the audio codec in use
657 - `vcodec`: Name of the video codec in use
658 - `container`: Name of the container format
659 - `protocol`: The protocol that will be used for the actual download, lower-case (`http`, `https`, `rtsp`, `rtmp`, `rtmpe`, `mms`, `f4m`, `ism`, `http_dash_segments`, `m3u8`, or `m3u8_native`)
660 - `format_id`: A short description of the format
661
662 Note that none of the aforementioned meta fields are guaranteed to be present since this solely depends on the metadata obtained by particular extractor, i.e. the metadata offered by the video hoster.
663
664 Formats for which the value is not known are excluded unless you put a question mark (`?`) after the operator. You can combine format filters, so `-f "[height <=? 720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s.
665
666 You can merge the video and audio of two formats into a single file using `-f <video-format>+<audio-format>` (requires ffmpeg or avconv installed), for example `-f bestvideo+bestaudio` will download the best video-only format, the best audio-only format and mux them together with ffmpeg/avconv.
667
668 Format selectors can also be grouped using parentheses, for example if you want to download the best mp4 and webm formats with a height lower than 480 you can use `-f '(mp4,webm)[height<480]'`.
669
670 Since the end of April 2015 and version 2015.04.26, youtube-dl uses `-f bestvideo+bestaudio/best` as the default format selection (see [#5447](https://github.com/rg3/youtube-dl/issues/5447), [#5456](https://github.com/rg3/youtube-dl/issues/5456)). If ffmpeg or avconv are installed this results in downloading `bestvideo` and `bestaudio` separately and muxing them together into a single file giving the best overall quality available. Otherwise it falls back to `best` and results in downloading the best available quality served as a single file. `best` is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you want to only download some DASH formats (for example if you are not interested in getting videos with a resolution higher than 1080p), you can add `-f bestvideo[height<=?1080]+bestaudio/best` to your configuration file. Note that if you use youtube-dl to stream to `stdout` (and most likely to pipe it to your media player then), i.e. you explicitly specify output template as `-o -`, youtube-dl still uses `-f best` format selection in order to start content delivery immediately to your player and not to wait until `bestvideo` and `bestaudio` are downloaded and muxed.
671
672 If you want to preserve the old format selection behavior (prior to youtube-dl 2015.04.26), i.e. you want to download the best available quality media served as a single file, you should explicitly specify your choice with `-f best`. You may want to add it to the [configuration file](#configuration) in order not to type it every time you run youtube-dl.
673
674 #### Format selection examples
675
676 Note that on Windows you may need to use double quotes instead of single.
677
678 ```bash
679 # Download best mp4 format available or any other best if no mp4 available
680 $ youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best'
681
682 # Download best format available but not better than 480p
683 $ youtube-dl -f 'bestvideo[height<=480]+bestaudio/best[height<=480]'
684
685 # Download best video only format but no bigger than 50 MB
686 $ youtube-dl -f 'best[filesize<50M]'
687
688 # Download best format available via direct link over HTTP/HTTPS protocol
689 $ youtube-dl -f '(bestvideo+bestaudio/best)[protocol^=http]'
690
691 # Download the best video format and the best audio format without merging them
692 $ youtube-dl -f 'bestvideo,bestaudio' -o '%(title)s.f%(format_id)s.%(ext)s'
693 ```
694 Note that in the last example, an output template is recommended as bestvideo and bestaudio may have the same file name.
695
696
697 # VIDEO SELECTION
698
699 Videos can be filtered by their upload date using the options `--date`, `--datebefore` or `--dateafter`. They accept dates in two formats:
700
701 - Absolute dates: Dates in the format `YYYYMMDD`.
702 - Relative dates: Dates in the format `(now|today)[+-][0-9](day|week|month|year)(s)?`
703
704 Examples:
705
706 ```bash
707 # Download only the videos uploaded in the last 6 months
708 $ youtube-dl --dateafter now-6months
709
710 # Download only the videos uploaded on January 1, 1970
711 $ youtube-dl --date 19700101
712
713 $ # Download only the videos uploaded in the 200x decade
714 $ youtube-dl --dateafter 20000101 --datebefore 20091231
715 ```
716
717 # FAQ
718
719 ### How do I update youtube-dl?
720
721 If you've followed [our manual installation instructions](https://rg3.github.io/youtube-dl/download.html), you can simply run `youtube-dl -U` (or, on Linux, `sudo youtube-dl -U`).
722
723 If you have used pip, a simple `sudo pip install -U youtube-dl` is sufficient to update.
724
725 If you have installed youtube-dl using a package manager like *apt-get* or *yum*, use the standard system update mechanism to update. Note that distribution packages are often outdated. As a rule of thumb, youtube-dl releases at least once a month, and often weekly or even daily. Simply go to https://yt-dl.org to find out the current version. Unfortunately, there is nothing we youtube-dl developers can do if your distribution serves a really outdated version. You can (and should) complain to your distribution in their bugtracker or support forum.
726
727 As a last resort, you can also uninstall the version installed by your package manager and follow our manual installation instructions. For that, remove the distribution's package, with a line like
728
729 sudo apt-get remove -y youtube-dl
730
731 Afterwards, simply follow [our manual installation instructions](https://rg3.github.io/youtube-dl/download.html):
732
733 ```
734 sudo wget https://yt-dl.org/latest/youtube-dl -O /usr/local/bin/youtube-dl
735 sudo chmod a+x /usr/local/bin/youtube-dl
736 hash -r
737 ```
738
739 Again, from then on you'll be able to update with `sudo youtube-dl -U`.
740
741 ### youtube-dl is extremely slow to start on Windows
742
743 Add a file exclusion for `youtube-dl.exe` in Windows Defender settings.
744
745 ### I'm getting an error `Unable to extract OpenGraph title` on YouTube playlists
746
747 YouTube changed their playlist format in March 2014 and again later on, so you'll need at least youtube-dl 2014.07.25 to download all YouTube videos.
748
749 If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to [report bugs](https://bugs.launchpad.net/ubuntu/+source/youtube-dl/+filebug) to the [Ubuntu packaging people](mailto:[email protected]?subject=outdated%20version%20of%20youtube-dl) - all they have to do is update the package to a somewhat recent version. See above for a way to update.
750
751 ### I'm getting an error when trying to use output template: `error: using output template conflicts with using title, video ID or auto number`
752
753 Make sure you are not using `-o` with any of these options `-t`, `--title`, `--id`, `-A` or `--auto-number` set in command line or in a configuration file. Remove the latter if any.
754
755 ### Do I always have to pass `-citw`?
756
757 By default, youtube-dl intends to have the best options (incidentally, if you have a convincing case that these should be different, [please file an issue where you explain that](https://yt-dl.org/bug)). Therefore, it is unnecessary and sometimes harmful to copy long option strings from webpages. In particular, the only option out of `-citw` that is regularly useful is `-i`.
758
759 ### Can you please put the `-b` option back?
760
761 Most people asking this question are not aware that youtube-dl now defaults to downloading the highest available quality as reported by YouTube, which will be 1080p or 720p in some cases, so you no longer need the `-b` option. For some specific videos, YouTube may not report them as available in the particular high quality format you're interested in. In that case, simply request it with the `-f` option and youtube-dl will try to download it.
762
763 ### I get HTTP error 402 when trying to download a video. What's this?
764
765 Apparently YouTube requires you to pass a CAPTCHA test if you download too much. We're [considering providing a way to let you solve the CAPTCHA](https://github.com/rg3/youtube-dl/issues/154), but at the moment, your best course of action is pointing a web browser to the YouTube URL, solving the CAPTCHA, and restarting youtube-dl.
766
767 ### Do I need any other programs?
768
769 youtube-dl works fine on its own on most sites. However, if you want to convert video/audio, you'll need [avconv](https://libav.org/) or [ffmpeg](https://www.ffmpeg.org/). On some sites - most notably YouTube - videos can be retrieved in a higher quality format without sound. youtube-dl will detect whether avconv/ffmpeg is present and automatically pick the best option.
770
771 Videos or video formats streamed via RTMP protocol can only be downloaded when [rtmpdump](https://rtmpdump.mplayerhq.hu/) is installed. Downloading MMS and RTSP videos requires either [mplayer](https://mplayerhq.hu/) or [mpv](https://mpv.io/) to be installed.
772
773 ### I have downloaded a video but how can I play it?
774
775 Once the video is fully downloaded, use any video player, such as [mpv](https://mpv.io/), [vlc](https://www.videolan.org/) or [mplayer](https://www.mplayerhq.hu/).
776
777 ### I extracted a video URL with `-g`, but it does not play on another machine / in my web browser.
778
779 It depends a lot on the service. In many cases, requests for the video (to download/play it) must come from the same IP address and with the same cookies and/or HTTP headers. Use the `--cookies` option to write the required cookies into a file, and advise your downloader to read cookies from that file. Some sites also require a common user agent to be used, use `--dump-user-agent` to see the one in use by youtube-dl. You can also get necessary cookies and HTTP headers from JSON output obtained with `--dump-json`.
780
781 It may be beneficial to use IPv6; in some cases, the restrictions are only applied to IPv4. Some services (sometimes only for a subset of videos) do not restrict the video URL by IP address, cookie, or user-agent, but these are the exception rather than the rule.
782
783 Please bear in mind that some URL protocols are **not** supported by browsers out of the box, including RTMP. If you are using `-g`, your own downloader must support these as well.
784
785 If you want to play the video on a machine that is not running youtube-dl, you can relay the video content from the machine that runs youtube-dl. You can use `-o -` to let youtube-dl stream a video to stdout, or simply allow the player to download the files written by youtube-dl in turn.
786
787 ### ERROR: no fmt_url_map or conn information found in video info
788
789 YouTube has switched to a new video info format in July 2011 which is not supported by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
790
791 ### ERROR: unable to download video
792
793 YouTube requires an additional signature since September 2012 which is not supported by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
794
795 ### Video URL contains an ampersand and I'm getting some strange output `[1] 2839` or `'v' is not recognized as an internal or external command`
796
797 That's actually the output from your shell. Since the ampersand is one of the special shell characters it's interpreted by the shell, preventing you from passing the whole URL to youtube-dl. To prevent your shell from interpreting the ampersands (or any other special characters) you have to either put the whole URL in quotes or escape them with a backslash (which approach will work depends on your shell).
798
799 For example if your URL is https://www.youtube.com/watch?t=4&v=BaW_jenozKc you should end up with the following command:
800
801 ```youtube-dl 'https://www.youtube.com/watch?t=4&v=BaW_jenozKc'```
802
803 or
804
805 ```youtube-dl https://www.youtube.com/watch?t=4\&v=BaW_jenozKc```
806
807 For Windows you have to use the double quotes:
808
809 ```youtube-dl "https://www.youtube.com/watch?t=4&v=BaW_jenozKc"```
810
811 ### ExtractorError: Could not find JS function u'OF'
812
813 In February 2015, the new YouTube player contained a character sequence in a string that was misinterpreted by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
814
815 ### HTTP Error 429: Too Many Requests or 402: Payment Required
816
817 These two error codes indicate that the service is blocking your IP address because of overuse. Contact the service and ask them to unblock your IP address, or - if you have acquired a whitelisted IP address already - use the [`--proxy` or `--source-address` options](#network-options) to select another IP address.
818
819 ### SyntaxError: Non-ASCII character
820
821 The error
822
823 File "youtube-dl", line 2
824 SyntaxError: Non-ASCII character '\x93' ...
825
826 means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.
827
828 ### What is this binary file? Where has the code gone?
829
830 Since June 2012 ([#342](https://github.com/rg3/youtube-dl/issues/342)) youtube-dl is packed as an executable zipfile, simply unzip it (might need renaming to `youtube-dl.zip` first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make youtube-dl`.
831
832 ### The exe throws an error due to missing `MSVCR100.dll`
833
834 To run the exe you need to install first the [Microsoft Visual C++ 2010 Redistributable Package (x86)](https://www.microsoft.com/en-US/download/details.aspx?id=5555).
835
836 ### On Windows, how should I set up ffmpeg and youtube-dl? Where should I put the exe files?
837
838 If you put youtube-dl and ffmpeg in the same directory that you're running the command from, it will work, but that's rather cumbersome.
839
840 To make a different directory work - either for ffmpeg, or for youtube-dl, or for both - simply create the directory (say, `C:\bin`, or `C:\Users\<User name>\bin`), put all the executables directly in there, and then [set your PATH environment variable](https://www.java.com/en/download/help/path.xml) to include that directory.
841
842 From then on, after restarting your shell, you will be able to access both youtube-dl and ffmpeg (and youtube-dl will be able to find ffmpeg) by simply typing `youtube-dl` or `ffmpeg`, no matter what directory you're in.
843
844 ### How do I put downloads into a specific folder?
845
846 Use the `-o` option to specify an [output template](#output-template), for example `-o "/home/user/videos/%(title)s-%(id)s.%(ext)s"`. If you want this for all of your downloads, put the option into your [configuration file](#configuration).
847
848 ### How do I download a video starting with a `-`?
849
850 Either prepend `https://www.youtube.com/watch?v=` or separate the ID from the options with `--`:
851
852 youtube-dl -- -wNyEUrxzFU
853 youtube-dl "https://www.youtube.com/watch?v=-wNyEUrxzFU"
854
855 ### How do I pass cookies to youtube-dl?
856
857 Use the `--cookies` option, for example `--cookies /path/to/cookies/file.txt`.
858
859 In order to extract cookies from your browser, use any conforming browser extension for exporting cookies. For example, [cookies.txt](https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg) (for Chrome) or [Export Cookies](https://addons.mozilla.org/en-US/firefox/addon/export-cookies/) (for Firefox).
860
861 Note that the cookies file must be in Mozilla/Netscape format and the first line of the cookies file must be either `# HTTP Cookie File` or `# Netscape HTTP Cookie File`. Make sure you have correct [newline format](https://en.wikipedia.org/wiki/Newline) in the cookies file and convert newlines if necessary to correspond with your OS, namely `CRLF` (`\r\n`) for Windows and `LF` (`\n`) for Unix and Unix-like systems (Linux, Mac OS, etc.). `HTTP Error 400: Bad Request` when using `--cookies` is a good sign of invalid newline format.
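If you need to fix the newlines, a tiny sketch along these lines (the `cookies.txt` file name is just a placeholder) rewrites the file with your OS's own line separator:

```python
import os

with open('cookies.txt', 'rb') as f:
    data = f.read()

# Normalize everything to LF first, then re-encode with the OS's own separator
# (CRLF on Windows, LF on Unix and Unix-like systems).
data = data.replace(b'\r\n', b'\n').replace(b'\n', os.linesep.encode())

with open('cookies.txt', 'wb') as f:
    f.write(data)
```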
862
863 Passing cookies to youtube-dl is a good way to work around login when a particular extractor does not implement it explicitly. Another use case is working around [CAPTCHA](https://en.wikipedia.org/wiki/CAPTCHA) some websites require you to solve in particular cases in order to get access (e.g. YouTube, CloudFlare).
864
865 ### How do I stream directly to media player?
866
867 You will first need to tell youtube-dl to stream media to stdout with `-o -`, and also tell your media player to read from stdin (it must be capable of this for streaming), and then pipe the former to the latter. For example, streaming to [vlc](https://www.videolan.org/) can be achieved with:
868
869 youtube-dl -o - "https://www.youtube.com/watch?v=BaW_jenozKcj" | vlc -
870
871 ### How do I download only new videos from a playlist?
872
873 Use download-archive feature. With this feature you should initially download the complete playlist with `--download-archive /path/to/download/archive/file.txt` that will record identifiers of all the videos in a special file. Each subsequent run with the same `--download-archive` will download only new videos and skip all videos that have been downloaded before. Note that only successful downloads are recorded in the file.
874
875 For example, at first,
876
877 youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"
878
879 will download the complete `PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re` playlist and create a file `archive.txt`. Each subsequent run will only download new videos if any:
880
881 youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"
882
883 ### Should I add `--hls-prefer-native` into my config?
884
885 When youtube-dl detects an HLS video, it can download it either with the built-in downloader or ffmpeg. Since many HLS streams are slightly invalid and ffmpeg/youtube-dl each handle some invalid cases better than the other, there is an option to switch the downloader if needed.
886
887 When youtube-dl knows that one particular downloader works better for a given website, that downloader will be picked. Otherwise, youtube-dl will pick the best downloader for general compatibility, which at the moment happens to be ffmpeg. This choice may change in future versions of youtube-dl, with improvements of the built-in downloader and/or ffmpeg.
888
889 In particular, the generic extractor (used when your website is not in the [list of supported sites by youtube-dl](https://rg3.github.io/youtube-dl/supportedsites.html)) cannot mandate one specific downloader.
890
891 If you put either `--hls-prefer-native` or `--hls-prefer-ffmpeg` into your configuration, a different subset of videos will fail to download correctly. Instead, it is much better to [file an issue](https://yt-dl.org/bug) or a pull request which details why the native or the ffmpeg HLS downloader is a better choice for your use case.
892
893 ### Can you add support for this anime video site, or site which shows current movies for free?
894
895 As a matter of policy (as well as legality), youtube-dl does not include support for services that specialize in infringing copyright. As a rule of thumb, if you cannot easily find a video that the service is quite obviously allowed to distribute (i.e. that has been uploaded by the creator, the creator's distributor, or is published under a free license), the service is probably unfit for inclusion to youtube-dl.
896
897 A note on the service that they don't host the infringing content, but just link to those who do, is evidence that the service should **not** be included in youtube-dl. The same goes for any DMCA note when the whole front page of the service is filled with videos they are not allowed to distribute. A "fair use" note is equally unconvincing if the service shows copyright-protected videos in full without authorization.
898
899 Support requests for services that **do** purchase the rights to distribute their content are perfectly fine though. If in doubt, you can simply include a source that mentions the legitimate purchase of content.
900
901 ### How can I speed up work on my issue?
902
903 (Also known as: Help, my important issue not being solved!) The youtube-dl core developer team is quite small. While we do our best to solve as many issues as possible, sometimes that can take quite a while. To speed up your issue, here's what you can do:
904
905 First of all, please do report the issue [at our issue tracker](https://yt-dl.org/bugs). That allows us to coordinate all efforts by users and developers, and serves as a unified point. Unfortunately, the youtube-dl project has grown too large to use personal email as an effective communication channel.
906
907 Please read the [bug reporting instructions](#bugs) below. A lot of bugs lack all the necessary information. If you can, offer proxy, VPN, or shell access to the youtube-dl developers. If you are able to, test the issue from multiple computers in multiple countries to exclude local censorship or misconfiguration issues.
908
909 If nobody is interested in solving your issue, you are welcome to take matters into your own hands and submit a pull request (or coerce/pay somebody else to do so).
910
911 Feel free to bump the issue from time to time by writing a small comment ("Issue is still present in youtube-dl version ...from France, but fixed from Belgium"), but please not more than once a month. Please do not declare your issue as `important` or `urgent`.
912
913 ### How can I detect whether a given URL is supported by youtube-dl?
914
915 For one, have a look at the [list of supported sites](docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from https://example.com/video/1234567 to https://example.com/v/1234567 ) and youtube-dl reports a URL of a service in that list as unsupported. In that case, simply report a bug.
916
917 It is *not* possible to detect whether a URL is supported or not. That's because youtube-dl contains a generic extractor which matches **all** URLs. You may be tempted to disable, exclude, or remove the generic extractor, but the generic extractor not only allows users to extract videos from lots of websites that embed a video from another service, but may also be used to extract video from a service that it's hosting itself. Therefore, we neither recommend nor support disabling, excluding, or removing the generic extractor.
918
919 If you want to find out whether a given URL is supported, simply call youtube-dl with it. If you get no videos back, chances are the URL is either not referring to a video or unsupported. You can find out which by examining the output (if you run youtube-dl on the console) or catching an `UnsupportedError` exception if you run it from a Python program.
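As a rough sketch of that programmatic approach (it reuses the embedding API described later in this README; note that, depending on your options, the extractor error may also surface wrapped in a `DownloadError`, and network failures will then look the same as unsupported URLs):

```python
import youtube_dl
from youtube_dl.utils import DownloadError, UnsupportedError


def is_probably_supported(url):
    # Ask youtube-dl to extract metadata only; nothing is downloaded.
    with youtube_dl.YoutubeDL({'quiet': True}) as ydl:
        try:
            ydl.extract_info(url, download=False)
        except (UnsupportedError, DownloadError):
            # Either the generic extractor gave up or extraction failed outright.
            return False
    return True
```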
920
921 # Why do I need to go through that much red tape when filing bugs?
922
923 Before we had the issue template, despite our extensive [bug reporting instructions](#bugs), about 80% of the issue reports we got were useless, for instance because people used ancient versions hundreds of releases old, because of simple syntactic errors (not in youtube-dl but in general shell usage), because the problem was already reported multiple times before, because people did not actually read an error message, even if it said "please install ffmpeg", because people did not mention the URL they were trying to download and many more simple, easy-to-avoid problems, many of which were totally unrelated to youtube-dl.
924
925 youtube-dl is an open-source project manned by too few volunteers, so we'd rather spend time fixing bugs where we are certain none of those simple problems apply, and where we can be reasonably confident to be able to reproduce the issue without asking the reporter repeatedly. As such, the output of `youtube-dl -v YOUR_URL_HERE` is really all that's required to file an issue. The issue template also guides you through some basic steps you can do, such as checking that your version of youtube-dl is current.
926
927 # DEVELOPER INSTRUCTIONS
928
929 Most users do not need to build youtube-dl and can [download the builds](https://rg3.github.io/youtube-dl/download.html) or get them from their distribution.
930
931 To run youtube-dl as a developer, you don't need to build anything either. Simply execute
932
933 python -m youtube_dl
934
935 To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
936
937 python -m unittest discover
938 python test/test_download.py
939 nosetests
940
941 See item 6 of the [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor-specific test cases.
942
943 If you want to create a build of youtube-dl yourself, you'll need
944
945 * python
946 * make (only GNU make is supported)
947 * pandoc
948 * zip
949 * nosetests
950
951 ### Adding support for a new site
952
953 If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. youtube-dl does **not support** such sites thus pull requests adding support for them **will be rejected**.
954
955 After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):
956
957 1. [Fork this repository](https://github.com/rg3/youtube-dl/fork)
958 2. Check out the source code with:
959
960 git clone [email protected]:YOUR_GITHUB_USERNAME/youtube-dl.git
961
962 3. Start a new git branch with
963
964 cd youtube-dl
965 git checkout -b yourextractor
966
967 4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:
968
969 ```python
970 # coding: utf-8
971 from __future__ import unicode_literals
972
973 from .common import InfoExtractor
974
975
976 class YourExtractorIE(InfoExtractor):
977 _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
978 _TEST = {
979 'url': 'https://yourextractor.com/watch/42',
980 'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
981 'info_dict': {
982 'id': '42',
983 'ext': 'mp4',
984 'title': 'Video title goes here',
985 'thumbnail': r're:^https?://.*\.jpg$',
986 # TODO more properties, either as:
987 # * A value
988 # * MD5 checksum; start the string with md5:
989 # * A regular expression; start the string with re:
990 # * Any Python type (for example int or float)
991 }
992 }
993
994 def _real_extract(self, url):
995 video_id = self._match_id(url)
996 webpage = self._download_webpage(url, video_id)
997
998 # TODO more code goes here, for example ...
999 title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')
1000
1001 return {
1002 'id': video_id,
1003 'title': title,
1004 'description': self._og_search_description(webpage),
1005 'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
1006 # TODO more properties (see youtube_dl/extractor/common.py)
1007 }
1008 ```
1009 5. Add an import in [`youtube_dl/extractor/extractors.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
1010 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries (see the short sketch after this list). The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not counted.
1011 7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L74-L252). Add tests and code for as many as you want.
1012 8. Make sure your code follows [youtube-dl coding conventions](#youtube-dl-coding-conventions) and check the code with [flake8](https://pypi.python.org/pypi/flake8). Also make sure your code works under all [Python](https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.
1013 9. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files and [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
1014
1015 $ git add youtube_dl/extractor/extractors.py
1016 $ git add youtube_dl/extractor/yourextractor.py
1017 $ git commit -m '[yourextractor] Add new extractor'
1018 $ git push origin yourextractor
1019
1020 10. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
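For reference, here is a short sketch (with hypothetical URLs, reusing the names from the class template above) of what step 6 means by turning `_TEST` into a `_TESTS` list of dictionaries:

```python
    _TESTS = [{
        'url': 'https://yourextractor.com/watch/42',
        'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
        'info_dict': {
            'id': '42',
            'ext': 'mp4',
            'title': 'Video title goes here',
        },
    }, {
        # Matched against _VALID_URL only, never downloaded, and not counted
        # towards the numbered test names mentioned in step 6.
        'url': 'https://yourextractor.com/embed/42',
        'only_matching': True,
    }]
```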
1021
1022 In any case, thank you very much for your contributions!
1023
1024 ## youtube-dl coding conventions
1025
1026 This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.
1027
1028 Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters, which is out of your control and tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly, but also to minimize dependency on the source's layout and even to anticipate potential future changes. This is important because it will allow the extractor not to break on minor layout changes, thus keeping old youtube-dl versions working. Even though this breakage issue is easily fixed by shipping a new version of youtube-dl with a fix incorporated, all the previous versions become broken in all repositories and distros' packages that may not be so prompt in fetching the update from us. Needless to say, some non-rolling-release distros may never receive an update at all.
1029
1030 ### Mandatory and optional metafields
1031
1032 For extraction to work youtube-dl relies on the metadata your extractor extracts and provides, expressed as an [information dictionary](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L75-L257) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by youtube-dl:
1033
1034 - `id` (media identifier)
1035 - `title` (media title)
1036 - `url` (media download URL) or `formats`
1037
1038 In fact only the last option is technically mandatory (i.e. if you can't figure out the download location of the media the extraction does not make any sense). But by convention youtube-dl also treats `id` and `title` as mandatory. Thus the aforementioned metafields are the critical data without which extraction does not make sense; if any of them fails to be extracted, the extractor is considered completely broken.
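As a minimal illustration (with purely hypothetical values), an info dict that carries only the mandatory fields above could look like:

```python
# The smallest info dict an extractor can return and still be considered working:
info_dict = {
    'id': '42',                                  # media identifier
    'title': 'Some video title',                 # media title
    'url': 'https://example.com/media/42.mp4',   # or a 'formats' list instead of 'url'
}
```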
1039
1040 [Any field](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L149-L257) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** of situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields.
1041
1042 #### Example
1043
1044 Say you have some source dictionary `meta` that you've fetched as JSON with HTTP request and it has a key `summary`:
1045
1046 ```python
1047 meta = self._download_json(url, video_id)
1048 ```
1049
1050 Assume at this point `meta`'s layout is:
1051
1052 ```python
1053 {
1054 ...
1055 "summary": "some fancy summary text",
1056 ...
1057 }
1058 ```
1059
1060 Assume you want to extract `summary` and put it into the resulting info dict as `description`. Since `description` is an optional meta field you should be prepared for this key to be missing from the `meta` dict, so you should extract it like:
1061
1062 ```python
1063 description = meta.get('summary') # correct
1064 ```
1065
1066 and not like:
1067
1068 ```python
1069 description = meta['summary'] # incorrect
1070 ```
1071
1072 The latter will break the extraction process with `KeyError` if `summary` disappears from `meta` at some later time, but with the former approach extraction will just go ahead with `description` set to `None`, which is perfectly fine (remember `None` is equivalent to the absence of data).
1073
1074 Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:
1075
1076 ```python
1077 description = self._search_regex(
1078 r'<span[^>]+id="title"[^>]*>([^<]+)<',
1079 webpage, 'description', fatal=False)
1080 ```
1081
1082 With `fatal` set to `False` if `_search_regex` fails to extract `description` it will emit a warning and continue extraction.
1083
1084 You can also pass `default=<some fallback value>`, for example:
1085
1086 ```python
1087 description = self._search_regex(
1088 r'<span[^>]+id="title"[^>]*>([^<]+)<',
1089 webpage, 'description', default=None)
1090 ```
1091
1092 On failure this code will silently continue the extraction with `description` set to `None`. That is useful for metafields that may or may not be present.
1093
1094 ### Provide fallbacks
1095
1096 When extracting metadata try to do so from multiple sources. For example if `title` is present in several places, try extracting from at least some of them. This makes it more future-proof in case some of the sources become unavailable.
1097
1098 #### Example
1099
1100 Say `meta` from the previous example has a `title` and you are about to extract it. Since `title` is a mandatory meta field you should end up with something like:
1101
1102 ```python
1103 title = meta['title']
1104 ```
1105
1106 If `title` disappears from `meta` in future due to some changes on the hoster's side the extraction would fail since `title` is mandatory. That's expected.
1107
1108 Assume that you have another source you can extract `title` from, for example the `og:title` HTML meta tag of the `webpage`. In this case you can provide a fallback scenario:
1109
1110 ```python
1111 title = meta.get('title') or self._og_search_title(webpage)
1112 ```
1113
1114 This code will try to extract from `meta` first and if it fails it will try extracting `og:title` from a `webpage`.
1115
1116 ### Make regular expressions flexible
1117
1118 When using regular expressions, try to make them fuzzy and flexible.
1119
1120 #### Example
1121
1122 Say you need to extract `title` from the following HTML code:
1123
1124 ```html
1125 <span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
1126 ```
1127
1128 The code for that task should look similar to:
1129
1130 ```python
1131 title = self._search_regex(
1132 r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
1133 ```
1134
1135 Or even better:
1136
1137 ```python
1138 title = self._search_regex(
1139 r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
1140 webpage, 'title', group='title')
1141 ```
1142
1143 Note how you tolerate potential changes in the `style` attribute's value or a switch from using double quotes to single quotes for the `class` attribute.
1144
1145 The code definitely should not look like:
1146
1147 ```python
1148 title = self._search_regex(
1149 r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
1150 webpage, 'title', group='title')
1151 ```
1152
1153 ### Use safe conversion functions
1154
1155 Wrap all extracted numeric data into safe functions from `utils`: `int_or_none`, `float_or_none`. Use them for string to number conversions as well.
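A minimal sketch of what that looks like in practice (the `meta` dictionary and its keys are hypothetical):

```python
from youtube_dl.utils import float_or_none, int_or_none

meta = {'duration': '137.5', 'views': '1024'}    # hypothetical source data

duration = float_or_none(meta.get('duration'))   # 137.5
view_count = int_or_none(meta.get('views'))      # 1024
like_count = int_or_none(meta.get('likes'))      # None when the key is missing
```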
1156
1157 # EMBEDDING YOUTUBE-DL
1158
1159 youtube-dl makes the best effort to be a good command-line program, and thus should be callable from any programming language. If you encounter any problems parsing its output, feel free to [create a report](https://github.com/rg3/youtube-dl/issues/new).
1160
1161 From a Python program, you can embed youtube-dl in a more powerful fashion, like this:
1162
1163 ```python
1164 from __future__ import unicode_literals
1165 import youtube_dl
1166
1167 ydl_opts = {}
1168 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
1169 ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
1170 ```
1171
1172 Most likely, you'll want to use various options. For a list of options available, have a look at [`youtube_dl/YoutubeDL.py`](https://github.com/rg3/youtube-dl/blob/3e4cedf9e8cd3157df2457df7274d0c842421945/youtube_dl/YoutubeDL.py#L137-L312). For a start, if you want to intercept youtube-dl's output, set a `logger` object.
1173
1174 Here's a more complete example of a program that outputs only errors (and a short message after the download is finished), and downloads/converts the video to an mp3 file:
1175
1176 ```python
1177 from __future__ import unicode_literals
1178 import youtube_dl
1179
1180
1181 class MyLogger(object):
1182 def debug(self, msg):
1183 pass
1184
1185 def warning(self, msg):
1186 pass
1187
1188 def error(self, msg):
1189 print(msg)
1190
1191
1192 def my_hook(d):
1193 if d['status'] == 'finished':
1194 print('Done downloading, now converting ...')
1195
1196
1197 ydl_opts = {
1198 'format': 'bestaudio/best',
1199 'postprocessors': [{
1200 'key': 'FFmpegExtractAudio',
1201 'preferredcodec': 'mp3',
1202 'preferredquality': '192',
1203 }],
1204 'logger': MyLogger(),
1205 'progress_hooks': [my_hook],
1206 }
1207 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
1208 ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
1209 ```
1210
1211 # BUGS
1212
1213 Bugs and suggestions should be reported at: <https://github.com/rg3/youtube-dl/issues>. Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the IRC channel [#youtube-dl](irc://chat.freenode.net/#youtube-dl) on freenode ([webchat](https://webchat.freenode.net/?randomnick=1&channels=youtube-dl)).
1214
1215 **Please include the full output of youtube-dl when run with `-v`**, i.e. **add** `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
1216 ```
1217 $ youtube-dl -v <your command line>
1218 [debug] System config: []
1219 [debug] User config: []
1220 [debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj']
1221 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
1222 [debug] youtube-dl version 2015.12.06
1223 [debug] Git HEAD: 135392e
1224 [debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
1225 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
1226 [debug] Proxy map: {}
1227 ...
1228 ```
1229 **Do not post screenshots of verbose logs; only plain text is acceptable.**
1230
1231 The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
1232
1233 Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):
1234
1235 ### Is the description of the issue itself sufficient?
1236
1237 We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.
1238
1239 So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious
1240
1241 - What the problem is
1242 - How it could be fixed
1243 - What your proposed solution would look like
1244
1245 If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
1246
1247 For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
1248
1249 If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).
1250
1251 **Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.
1252
1253 ### Are you using the latest version?
1254
1255 Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive are already fixed, but people are using outdated versions. This goes for feature requests as well.
1256
1257 ### Is the issue already documented?
1258
1259 Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/rg3/youtube-dl/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
1260
1261 ### Why are existing options not enough?
1262
1263 Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
1264
1265 ### Is there enough context in your bug report?
1266
1267 People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) into a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated).
1268
1269 We are then presented with a very complicated request when the original problem could be solved far easier, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.
1270
1271 ### Does the issue involve one problem, and one problem only?
1272
1273 Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.
1274
1275 In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White house podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
1276
1277 ### Is anyone going to need the feature?
1278
1279 Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.
1280
1281 ### Is your question about youtube-dl?
1282
1283 It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different, or even the reporter's own, application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.
1284
1285 # COPYRIGHT
1286
1287 youtube-dl is released into the public domain by the copyright holders.
1288
1289 This README file was originally written by [Daniel Bolton](https://github.com/dbbolton) and is likewise released into the public domain.
1290
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ytdl-org/youtube-dl
|
a9543e37c8e460e69a8556c8e5004ebd8e9b4da4
|
Cannot download certain Cartoon Network videos?
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.10.29*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.10.29**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['http://www.cartoonnetwork.com/video/regularshow/fre
e-cake-episode.html', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.10.29
[debug] Python version 3.4.4 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-62162-gec8789a, ffprobe 3.2.4, rtmpdump 2.4
[debug] Proxy map: {}
[CartoonNetwork] free-cake: Downloading webpage
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading f4m manif
est
WARNING: Unable to download f4m manifest: HTTP Error 403: Forbidden
[CartoonNetwork] 42de6efafe3f038ba941f061981bb5b287521da0: Downloading XML
ERROR: UNKNOWN
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\YoutubeDL.py", line 784, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\extractor\common.py", line 434, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\extractor\cartoonnetwork.py", line 41, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpk63cqkyt\bu
ild\youtube_dl\extractor\turner.py", line 84, in _extract_cvp_info
youtube_dl.utils.ExtractorError: UNKNOWN
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.cartoonnetwork.com/video/regularshow/free-cake-episode.html
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
I'm trying to download a particular video and for some reason I can't. Other Cartoon Network videos work just fine, but this series(?) doesn't seem to work. I'm not sure why some work, but some don't. I'm probably missing something... Help please?
|
2017-11-10T21:34:27Z
|
<patch>
diff --git a/youtube_dl/extractor/cartoonnetwork.py b/youtube_dl/extractor/cartoonnetwork.py
--- a/youtube_dl/extractor/cartoonnetwork.py
+++ b/youtube_dl/extractor/cartoonnetwork.py
@@ -31,7 +31,7 @@ def _real_extract(self, url):
'http://www.cartoonnetwork.com/video-seo-svc/episodeservices/getCvpPlaylist?networkName=CN2&' + query, video_id, {
'secure': {
'media_src': 'http://androidhls-secure.cdn.turner.com/toon/big',
- 'tokenizer_src': 'http://www.cartoonnetwork.com/cntv/mvpd/processors/services/token_ipadAdobe.do',
+ 'tokenizer_src': 'https://token.vgtf.net/token/token_mobile',
},
}, {
'url': url,
</patch>
|
[]
|
[]
| ||||
huggingface__transformers-18648
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
XLA generation error with repetition_penalty
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce error (adapted code from https://huggingface.co/blog/tf-xla-generate):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
generation_kwargs = {
"max_new_tokens": 64,
'eos_token_id': 198,
'do_sample': True,
'temperature': 0.72,
'top_k': 0,
'top_p': 0.725,
'repetition_penalty': 1.13,
}
tokenizer = AutoTokenizer.from_pretrained(
"gpt2", padding_side="left", pad_token="</s>"
)
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
model.config.pad_token_id = model.config.eos_token_id
input_text = "repetition_penalty error"
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_text, return_tensors="tf")
print("model.generate")
model.generate(**tokenized_input, **generation_kwargs)
print("xla_generate")
xla_generate(**tokenized_input, **generation_kwargs) # error here
```
Error:
```
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 604, in generate *
seed=model_kwargs.pop("seed", None),
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 1651, in _generate *
input_ids,
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 2475, in sample_body_fn *
next_tokens_scores = logits_processor(generated, next_token_logits, cur_len)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 94, in __call__ *
scores = processor(input_ids, scores, cur_len)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 278, in __call__ *
score_penalties = self._create_score_penalties(input_ids[:, :cur_len], scores)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 265, in _create_score_penalties *
indexable_prev_input_ids = tf.concat(
ValueError: None values not supported.
```
By setting `repetition_penalty` to 1.0 or by removing this parameter everything works fine.
### Expected behavior
The expected result is the work of text generation using `repetition_penalty` without any errors, taking into account the use of XLA.
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <b>English</b> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
59
60 These models can be applied on:
61
62 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages.
63 * 🖼️ Images, for tasks like image classification, object detection, and segmentation.
64 * 🗣️ Audio, for tasks like speech recognition and audio classification.
65
66 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
67
68 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
69
70 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
71
72 ## Online demos
73
74 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
75
76 Here are a few examples:
77
78 In Natural Language Processing:
79 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
80 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
81 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
82 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
83 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
84 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
85 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
86
87 In Computer Vision:
88 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224)
89 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50)
90 - [Image Segmentation with DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic)
91
92 In Audio:
93 - [Automatic Speech Recognition with Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
94 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
95
96 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
97
98 ## If you are looking for custom support from the Hugging Face team
99
100 <a target="_blank" href="https://huggingface.co/support">
101 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
102 </a><br>
103
104 ## Quick tour
105
106 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
107
108 ```python
109 >>> from transformers import pipeline
110
111 # Allocate a pipeline for sentiment-analysis
112 >>> classifier = pipeline('sentiment-analysis')
113 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
114 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
115 ```
116
117 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
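
Pipelines also accept a list of inputs and return one prediction per item. Below is a minimal sketch of this batched usage; the example sentences are only illustrative placeholders.

```python
>>> from transformers import pipeline

>>> classifier = pipeline('sentiment-analysis')

# Passing a list of texts returns a list with one {'label': ..., 'score': ...} dict per input
>>> classifier(['We are very happy about this.', 'We are not happy about this at all.'])
```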
118
119 Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
120
121 ```python
122 >>> import requests
123 >>> from PIL import Image
124 >>> from transformers import pipeline
125
126 # Download an image with cute cats
127 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
128 >>> image_data = requests.get(url, stream=True).raw
129 >>> image = Image.open(image_data)
130
131 # Allocate a pipeline for object detection
132 >>> object_detector = pipeline('object-detection')
133 >>> object_detector(image)
134 [{'score': 0.9982201457023621,
135 'label': 'remote',
136 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
137 {'score': 0.9960021376609802,
138 'label': 'remote',
139 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
140 {'score': 0.9954745173454285,
141 'label': 'couch',
142 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
143 {'score': 0.9988006353378296,
144 'label': 'cat',
145 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
146 {'score': 0.9986783862113953,
147 'label': 'cat',
148 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
149 ```
150
151 Here we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the left, with the predictions displayed on the right:
152
153 <h3 align="center">
154 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
155 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
156 </h3>
157
158 You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
159
160 In addition to `pipeline`, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
161 ```python
162 >>> from transformers import AutoTokenizer, AutoModel
163
164 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
165 >>> model = AutoModel.from_pretrained("bert-base-uncased")
166
167 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
168 >>> outputs = model(**inputs)
169 ```
170
171 And here is the equivalent code for TensorFlow:
172 ```python
173 >>> from transformers import AutoTokenizer, TFAutoModel
174
175 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
176 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
177
178 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
179 >>> outputs = model(**inputs)
180 ```
181
182 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply pass directly to your model using the `**` argument unpacking operator.
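
For example, here is a minimal sketch of tokenizing a small batch with padding and passing the resulting dictionary straight to the model; the sentences are illustrative placeholders.

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# Tokenize a list of sentences; padding makes them the same length so they can be batched
>>> batch = tokenizer(["Hello world!", "Transformers make NLP easy."], padding=True, return_tensors="pt")
>>> outputs = model(**batch)  # the dictionary is unpacked into the model's keyword arguments
```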
183
184 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
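
As a rough sketch of what one step of such a classic PyTorch loop can look like (the texts, labels and hyperparameters below are illustrative placeholders, not part of the library):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One illustrative training step on a tiny hand-made batch
batch = tokenizer(["I loved it.", "I hated it."], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(**batch, labels=labels)  # classification models return a loss when labels are given
outputs.loss.backward()
optimizer.step()
```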
185
186 ## Why should I use transformers?
187
188 1. Easy-to-use state-of-the-art models:
189 - High performance on natural language understanding & generation, computer vision, and audio tasks.
190 - Low barrier to entry for educators and practitioners.
191 - Few user-facing abstractions with just three classes to learn.
192 - A unified API for using all our pretrained models.
193
194 1. Lower compute costs, smaller carbon footprint:
195 - Researchers can share trained models instead of always retraining.
196 - Practitioners can reduce compute time and production costs.
197 - Dozens of architectures with over 60,000 pretrained models across all modalities.
198
199 1. Choose the right framework for every part of a model's lifetime:
200 - Train state-of-the-art models in 3 lines of code.
201 - Move a single model between TF2.0/PyTorch/JAX frameworks at will.
202 - Seamlessly pick the right framework for training, evaluation and production.
203
204 1. Easily customize a model or an example to your needs:
205 - We provide examples for each architecture to reproduce the results published by its original authors.
206 - Model internals are exposed as consistently as possible.
207 - Model files can be used independently of the library for quick experiments.
208
209 ## Why shouldn't I use transformers?
210
211 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
212 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly, [Accelerate](https://huggingface.co/docs/accelerate)).
213 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will need to change a few lines of code to adapt them to your needs.
214
215 ## Installation
216
217 ### With pip
218
219 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
220
221 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
222
223 First, create a virtual environment with the version of Python you're going to use and activate it.
224
225 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
226 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific install command for your platform.
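
For example, on many platforms a CPU-only build of PyTorch can typically be installed as shown below; check the installation pages above for the exact command matching your operating system and accelerator.

```bash
pip install torch
```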
227
228 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
229
230 ```bash
231 pip install transformers
232 ```
233
234 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
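
For instance, installing the current main branch directly from GitHub with pip usually looks like the following; see the linked guide for the authoritative instructions.

```bash
pip install git+https://github.com/huggingface/transformers
```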
235
236 ### With conda
237
238 Since version v4.0.0, 🤗 Transformers has a conda channel: `huggingface`.
239
240 🤗 Transformers can be installed using conda as follows:
241
242 ```bash
243 conda install -c huggingface transformers
244 ```
245
246 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
247
248 ## Model architectures
249
250 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
251
252 Current number of checkpoints: 
253
254 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
255
256 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
257 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
258 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
259 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
260 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
261 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
262 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
263 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
264 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
265 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
266 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
267 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
268 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
269 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
270 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
271 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
272 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
273 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
274 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
275 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
276 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
277 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
278 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
279 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
280 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
281 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
282 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
283 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
284 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
285 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
286 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
287 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
288 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
289 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
290 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
291 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
292 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
293 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
294 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
295 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
296 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
297 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
298 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
299 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
300 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
301 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
302 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
303 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
304 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
305 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
306 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
307 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
308 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
309 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
310 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
311 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
312 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
313 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
314 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
315 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
316 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
317 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
318 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
319 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
320 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
321 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
322 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
323 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
324 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
325 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
326 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
327 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
328 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
329 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
330 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
331 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
332 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
333 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
334 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
335 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
336 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
337 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
338 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
339 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
340 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
341 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
342 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
343 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
344 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
345 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
346 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
347 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
348 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
349 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
350 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
351 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
352 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
353 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
354 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
355 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
356 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
357 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
358 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
359 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
360 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
361 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
362 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
363 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
364 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
365 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
366 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
367 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
368 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
369 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
370 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
371 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
372 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
373 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
374 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
375 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
376 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
377 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
378 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
379 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
380 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
381 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
382 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
383 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
384 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
385 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
386 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
387 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
388 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
389 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
390 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
391 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
392
393 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
394
395 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
396
397
398 ## Learn more
399
400 | Section | Description |
401 |-|-|
402 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
403 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
404 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
405 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
406 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
407 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
408 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
409
410 ## Citation
411
412 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
413 ```bibtex
414 @inproceedings{wolf-etal-2020-transformers,
415 title = "Transformers: State-of-the-Art Natural Language Processing",
416 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
417 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
418 month = oct,
419 year = "2020",
420 address = "Online",
421 publisher = "Association for Computational Linguistics",
422 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
423 pages = "38--45"
424 }
425 ```
426
[end of README.md]
[start of README_ko.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <a href="https://github.com/huggingface/transformers/">English</a> |
44         <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">Simplified Chinese</a> |
45         <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">Traditional Chinese</a> |
46         <b>Korean</b>
47 <p>
48 </h4>
49
50 <h3 align="center">
51     <p>State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models that can perform tasks such as classification, information extraction, question answering, summarization, translation, and text generation in more than 100 languages. Our goal is to make state-of-the-art NLP easy for everyone to use.
59
60 🤗 Transformers provides APIs to quickly download these pretrained models, use them on your own text, fine-tune them on your own data, and share them with the community or on our [model hub](https://huggingface.co/models). In addition, each Python module that defines a model architecture is fully standalone, so it can easily be modified for research experiments.
61
62 🤗 Transformers supports the three most popular deep learning libraries, [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with seamless integration between them. You can simply train a model with one of them and load it with another for inference.
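To make that interoperability concrete, here is a minimal sketch (the local directory name `./my-bert` is just an illustrative placeholder): a checkpoint saved from PyTorch can be reloaded in TensorFlow by passing `from_pt=True`.

```python
from transformers import AutoModel, TFAutoModel

# Load (or fine-tune) a model in PyTorch and save the checkpoint locally.
pt_model = AutoModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./my-bert")  # "./my-bert" is an illustrative path

# Reload the same weights in TensorFlow; from_pt=True converts the PyTorch checkpoint.
tf_model = TFAutoModel.from_pretrained("./my-bert", from_pt=True)
```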
63
64 ## Online demos
65
66 You can test most of the models directly on their pages on the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, and an inference API](https://huggingface.co/pricing) for public and private models.
67
68 Examples:
69 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
70 - [Named entity recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
71 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
72 - [Natural language inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
73 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
74 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
75 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
76
77 **[Write With Transformer](https://transformer.huggingface.co)** is the Hugging Face team's official demo of this repository's text generation capabilities.
78
79 ## If you want custom support from the Hugging Face team
80
81 <a target="_blank" href="https://huggingface.co/support">
82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
83 </a><br>
84
85 ## Quick tour
86
87 To use a model immediately on any text you want, we provide the `pipeline` API. A pipeline combines a pretrained model with the preprocessing that was applied while that model was trained. Here is a simple example of using a pipeline to classify positive and negative texts:
88
89 ```python
90 >>> from transformers import pipeline
91
92 # Allocate a pipeline for sentiment-analysis
93 >>> classifier = pipeline('sentiment-analysis')
94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
96 ```
97
98 The second line of code downloads and caches the pretrained model used by the pipeline, and the third line evaluates it on the given text. Here the model judged the text to be positive with a probability of 99.97%.
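As a small sketch of the same idea (the sentences below are made up for illustration), a pipeline also accepts a list of texts and returns one result per input:

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Passing a list returns a list of dicts, one {'label': ..., 'score': ...} per sentence.
results = classifier(
    [
        "We are very happy to introduce pipeline to the transformers repository.",
        "We hope you don't hate it.",
    ]
)
```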
99
100 Many NLP tasks can be performed directly with a `pipeline`. For example, given a question and some context, we can easily extract the answer:
101
102 ```python
103 >>> from transformers import pipeline
104
105 # Allocate a pipeline for question-answering
106 >>> question_answerer = pipeline('question-answering')
107 >>> question_answerer({
108 ... 'question': 'What is the name of the repository ?',
109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
110 ... })
111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
112
113 ```
114
115 In addition to the answer, the pretrained model used here also returns its confidence score and the start and end positions of the answer in the tokenized sentence. You can check out the various tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
116
117 With three lines of code, you can download and use a pretrained model for the task you want. Here is the PyTorch version:
118 ```python
119 >>> from transformers import AutoTokenizer, AutoModel
120
121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
122 >>> model = AutoModel.from_pretrained("bert-base-uncased")
123
124 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
125 >>> outputs = model(**inputs)
126 ```
127 And here is the TensorFlow version:
128 ```python
129 >>> from transformers import AutoTokenizer, TFAutoModel
130
131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
133
134 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
135 >>> outputs = model(**inputs)
136 ```
137
138 The tokenizer is responsible for all the preprocessing the pretrained model expects, and it can be called on a single string (as in the examples above) or on a list. It returns a dictionary that you can use in downstream code or pass directly to the model using the ** unpacking operator.
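For example, here is a minimal sketch of batching several (arbitrary) sentences at once: padding and truncation give rectangular tensors, and the resulting dictionary is unpacked straight into the model with **.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# The tokenizer pads/truncates the batch to a common length and returns a dict
# of tensors ("input_ids", "attention_mask", ...) ready for the model.
batch = tokenizer(
    ["Hello world!", "A slightly longer second sentence."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
outputs = model(**batch)  # ** unpacks the dict into keyword arguments
```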
139
140 The model itself is a regular [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model). [This tutorial](https://huggingface.co/transformers/training.html) explains how to use such a model in a standard PyTorch or TensorFlow training loop, or how to use the `Trainer` API to fine-tune it on new data.
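As a rough sketch of the `Trainer` route (the hyperparameters are placeholders, and `train_dataset`/`eval_dataset` stand in for tokenized datasets you would prepare yourself, for example with the 🤗 Datasets library):

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

training_args = TrainingArguments(
    output_dir="./results",          # where checkpoints and logs are written
    num_train_epochs=3,              # placeholder hyperparameters
    per_device_train_batch_size=16,
)

# train_dataset and eval_dataset are assumed to be tokenized datasets you provide.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```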
141
142 ## Why should I use transformers?
143
144 1. State-of-the-art models that are easy to use:
145     - Excellent performance on NLU and NLG tasks.
146     - A low barrier to entry for educators and practitioners.
147     - Just three classes to learn before you can start using it.
148     - A unified API for using all our pretrained models.
149 
150 1. Lower compute costs, smaller carbon footprint:
151     - Researchers can share trained models instead of retraining them again and again.
152     - Practitioners can save the time and cost needed for training.
153     - Dozens of architectures, more than 2,000 pretrained models, models trained in over 100 languages, and more.
154 
155 1. The right framework for every stage of a model's lifecycle:
156     - Train state-of-the-art models in three lines of code.
157     - Freely move models between the TF2.0 and PyTorch frameworks.
158     - Pick whichever framework suits each stage: training, evaluation, publishing, and so on.
159 
160 1. Customize models or examples as you need:
161     - We provide examples for each architecture to reproduce the results published by the original authors.
162     - Model internals are exposed as consistently as possible.
163     - Model files can be used independently of the library for quick experiments.
164
165 ## Why shouldn't I use transformers?
166
167 - This library is not a modular toolbox of building blocks for neural networks. The code in the model files is kept at a deliberately moderate level of abstraction so that researchers can use each model directly without digging through multiple files.
168 - The training API is not meant to work on arbitrary models; it is optimized for the models provided by the library. For generic machine learning loops, you should use another library.
169 - We wanted to show as many use cases as possible, so the scripts in the [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. They may not work out of the box on your specific problem, and you may need to modify some of the code to fit your needs.
170
171 ## Installation
172
173 ### With pip
174
175 This repository has been tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, and TensorFlow 2.3+.
176
177 Install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're not familiar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
178
179 First, create a virtual environment with the Python version you are going to use and activate it.
180
181 Next, you need to install at least one of Flax, PyTorch, or TensorFlow.
182 Check the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or the [Flax installation page](https://github.com/google/flax#quick-install) for the install command for your platform.
183
184 Once at least one of these backends is installed, 🤗 Transformers can be installed with pip as follows:
185
186 ```bash
187 pip install transformers
188 ```
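As a quick sanity check that the installation works (this downloads a small default sentiment model on first run), you can run the following in Python:

```python
from transformers import pipeline

# Prints a dict such as {'label': 'POSITIVE', 'score': ...} for the given sentence.
print(pipeline("sentiment-analysis")("we love you"))
```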
189
190 If you want to try out the examples, need the bleeding-edge code, or can't wait for a new release, you have to [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
191
192 ### With conda
193
194 Since Transformers version v4.0.0, there is a conda channel: `huggingface`.
195
196 🤗 Transformers can be installed with conda as follows:
197
198 ```shell script
199 conda install -c huggingface transformers
200 ```
201
202 See the installation pages of Flax, PyTorch, or TensorFlow to learn how to install them with conda.
203
204 ## Model architectures
205
206 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated with the huggingface.co [model hub](https://huggingface.co). [Users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations) can upload models to the hub directly.
207
208 Current number of available model checkpoints: 
209
210 🤗 Transformers provides the following models (see [here](https://huggingface.co/docs/transformers/model_summary) for a short summary of each of them):
211
212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
216 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
218 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
219 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
220 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
221 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
224 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
225 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
226 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
227 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
228 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
229 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
230 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
231 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
232 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
233 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
234 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
235 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
236 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
237 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
238 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
239 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
240 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
241 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
242 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
243 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
244 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
245 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
246 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
247 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
248 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
249 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
250 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
251 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
252 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
253 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
254 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
255 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
256 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
257 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
258 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
259 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
260 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
261 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
262 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
263 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
264 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
265 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
266 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
267 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
268 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
269 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
270 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
271 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
272 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
273 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
274 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
275 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
276 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
277 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
278 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
279 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
280 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
281 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
282 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
283 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
284 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
285 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
286 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
287 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
288 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
289 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
290 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
291 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
292 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
293 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
294 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
295 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
296 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
297 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
298 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
299 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
300 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
301 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
302 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
303 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
304 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
305 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
306 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
307 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
308 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
309 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
310 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
311 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
312 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
313 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
314 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
315 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
316 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
317 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
318 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
319 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
320 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
321 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
322 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
323 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
324 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
325 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
326 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
327 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
328 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
329 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
330 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
331 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
332 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
333 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
334 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
335 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
336 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
337 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
338 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
339 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
340 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
341 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
342 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
343 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
344 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
345 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
346 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
347 1. Want to contribute a new model? We have added **detailed guides and templates** to help you add a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
348
349 To check whether each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
350
351 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
352
353 ## Learn more
354
355 | Section | Description |
356 |-|-|
357 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
358 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
359 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
360 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
361 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
362 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Uploading and sharing your fine-tuned models with the community |
363 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrating to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
364
365 ## Citation
366
367 If you would like to cite the 🤗 Transformers library, please cite this [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/):
368 ```bibtex
369 @inproceedings{wolf-etal-2020-transformers,
370 title = "Transformers: State-of-the-Art Natural Language Processing",
371 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
372 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
373 month = oct,
374 year = "2020",
375 address = "Online",
376 publisher = "Association for Computational Linguistics",
377 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
378 pages = "38--45"
379 }
380 ```
381
[end of README_ko.md]
[start of README_zh-hans.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多种语言; 使用 transformers 库。
20 - Use square quotes, e.g.,「引用」
21
22 Dictionary
23
24 Hugging Face: 抱抱脸
25 token: 词符(并用括号标注原英文)
26 tokenize: 词符化(并用括号标注原英文)
27 tokenizer: 词符化器(并用括号标注原英文)
28 transformer: transformer(不翻译)
29 pipeline: 流水线
30 API: API (不翻译)
31 inference: 推理
32 Trainer: 训练器。当作为类名出现时不翻译。
33 pretrained/pretrain: 预训练
34 finetune: 微调
35 community: 社区
36 example: 当特指仓库中 example 目录时翻译为「用例」
37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文
38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理
39 checkpoint: 检查点
40 -->
41
42 <p align="center">
43 <br>
44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
45 <br>
46 <p>
47 <p align="center">
48 <a href="https://circleci.com/gh/huggingface/transformers">
49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
50 </a>
51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
53 </a>
54 <a href="https://huggingface.co/docs/transformers/index">
55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
56 </a>
57 <a href="https://github.com/huggingface/transformers/releases">
58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
59 </a>
60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
62 </a>
63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
64 </p>
65
66 <h4 align="center">
67 <p>
68 <a href="https://github.com/huggingface/transformers/">English</a> |
69 <b>简体中文</b> |
70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
72 <p>
73 </h4>
74
75 <h3 align="center">
76 <p>State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
77 </h3>
78
79 <h3 align="center">
80 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
81 </h3>
82
83 🤗 Transformers provides thousands of pretrained models for text classification, information extraction, question answering, summarization, translation and text generation in over 100 languages. Its aim is to make cutting-edge NLP easier to use for everyone.
84
85 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on the [model hub](https://huggingface.co/models). At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
86
87 🤗 Transformers is backed by the three most popular deep learning libraries ([Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/)) with seamless integration between them. You can train your models with one framework and then load them for inference with another.
88
89 ## Online demos
90
91 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing).
92
93 Here are a few examples:
94 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
95 - [Named entity recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
96 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
97 - [Natural language inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
98 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
99 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
100 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
101
102 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo's text generation capabilities.
103
104 ## If you are looking for custom support from the Hugging Face team
105
106 <a target="_blank" href="https://huggingface.co/support">
107 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
108 </a><br>
109
110 ## Quick tour
111
112 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative sentiment:
113
114 ```python
115 >>> from transformers import pipeline
116
117 # Allocate a pipeline for sentiment-analysis
118 >>> classifier = pipeline('sentiment-analysis')
119 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
120 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
121 ```
122
123 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of over 99%.
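
As a small additional sketch (not part of the upstream quick tour), the same pipeline also accepts an explicit checkpoint name and a list of sentences; the checkpoint below is assumed to be the default English sentiment model and is used purely for illustration:

```python
>>> from transformers import pipeline

>>> # Pin a specific checkpoint instead of relying on the task default
>>> classifier = pipeline('sentiment-analysis', model='distilbert-base-uncased-finetuned-sst-2-english')
>>> # Passing a list of strings returns one prediction (a dict) per sentence
>>> classifier(['We are very happy.', 'We are not happy at all.'])
```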
124
125 Many NLP tasks have a pretrained `pipeline` ready to go, out-of-the-box. For example, we can easily extract the answer to a question from a given context:
126
127 ``` python
128 >>> from transformers import pipeline
129
130 # Allocate a pipeline for question-answering
131 >>> question_answerer = pipeline('question-answering')
132 >>> question_answerer({
133 ... 'question': 'What is the name of the repository ?',
134 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
135 ... })
136 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
137
138 ```
139
140 In addition to the answer, the pretrained model also returns its confidence score and the start and end positions of the answer in the tokenized text. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
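
The same pattern carries over to other tasks. As a rough sketch (the default checkpoint is downloaded on first use and the generated text varies from run to run):

```python
>>> from transformers import pipeline

>>> # A text-generation pipeline uses a GPT-2 checkpoint by default
>>> generator = pipeline('text-generation')
>>> generator('In this course, we will teach you how to', max_length=30)
```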
141
142 To download and use any of the pretrained models on your given task, it only takes three lines of code. Here is the PyTorch version:
143 ```python
144 >>> from transformers import AutoTokenizer, AutoModel
145
146 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
147 >>> model = AutoModel.from_pretrained("bert-base-uncased")
148
149 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
150 >>> outputs = model(**inputs)
151 ```
152 And here is the equivalent code for TensorFlow:
153 ```python
154 >>> from transformers import AutoTokenizer, TFAutoModel
155
156 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
157 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
158
159 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
160 >>> outputs = model(**inputs)
161 ```
162
163 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the example above) or a list. It outputs a dictionary (dict) that you can use in downstream code or simply pass directly to your model using the `**` argument unpacking operator.
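
As a minimal sketch of both points (batched input and `**` unpacking), reusing the `tokenizer` and `model` objects loaded in the PyTorch snippet above:

```python
>>> # Tokenize a batch of sentences; padding gives them equal length
>>> batch = tokenizer(["Hello world!", "Transformers are pretty handy."], padding=True, return_tensors="pt")
>>> # The resulting dict can be unpacked straight into the model call
>>> outputs = model(**batch)
>>> outputs.last_hidden_state.shape
```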
164
165 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
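
For completeness, here is a condensed, illustrative sketch of the `Trainer` route; the tiny in-memory dataset, the checkpoint and the hyperparameters below are placeholder assumptions chosen only to keep the snippet self-contained, not a recommended training setup:

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A toy dataset kept in memory purely for illustration
texts = ["I love this!", "This was terrible."]
labels = [1, 0]

class TinyDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        # Tokenize once up front; padding gives equal-length sequences
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tiny_output", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TinyDataset(texts, labels),
)
trainer.train()
```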
166
167 ## Why should I use transformers?
168
169 1. Easy-to-use state-of-the-art models:
170     - High performance on NLU and NLG tasks.
171     - Low barrier to entry for educators and practitioners.
172     - Few user-facing abstractions with just three classes to learn.
173     - A unified API for using all our pretrained models.
174
175 1. Lower compute costs, smaller carbon footprint:
176     - Researchers can share trained models instead of always retraining from scratch.
177     - Practitioners can reduce compute time and production costs.
178     - Dozens of architectures with over 2,000 pretrained models, some in more than 100 languages.
179
180 1. Choose the right framework for every part of a model's lifetime:
181     - Train state-of-the-art models in 3 lines of code.
182     - Move a single model between deep learning frameworks at will.
183     - Seamlessly pick the right framework for training, evaluation and production.
184
185 1. Easily customize a model or an example to your needs:
186     - We provide examples for each architecture to reproduce the results published by its original authors.
187     - Model internals are exposed as consistently as possible.
188     - Model files can be used independently of the library for quick experiments and hacking.
189
190 ## When should I not use transformers?
191
192 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is deliberately left with few extra abstractions, so that researchers can quickly iterate on each of the models without getting lost in additional abstractions and files.
193 - The `Trainer` API is not intended to work on just any model; it is optimized to work with the models provided by the library. If you are looking for a generic machine learning training loop, you should use another library.
194 - While we strive to present as many use cases as possible, the scripts in the [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. They are not expected to work out-of-the-box on your specific problem, and you will likely need to change a few lines of code to adapt them to your needs.
195
196 ## Installation
197
198 ### With pip
199
200 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
201
202 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
203
204 First, create a virtual environment with the version of Python you're going to use and activate it.
205
206 Then, you will need to install at least one of Flax, PyTorch or TensorFlow. Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) or the [Flax installation page](https://github.com/google/flax#quick-install) for the install command specific to your platform.
207
208 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
209
210 ```bash
211 pip install transformers
212 ```
213
214 If you'd like to play with the examples or need the bleeding-edge code before an official release, you have to [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
215
216 ### With conda
217
218 Since Transformers version 4.0.0, we now have a conda channel: `huggingface`.
219
220 🤗 Transformers can be installed using conda as follows:
221
222 ```shell script
223 conda install -c huggingface transformers
224 ```
225
226 To install Flax, PyTorch or TensorFlow with conda, please refer to the installation instructions on their respective pages.
227
228 ## Model architectures
229
230 [**All the model checkpoints**](https://huggingface.co/models) supported by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co), where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
231
232 Current number of checkpoints: 
233
234 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
235
236 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。
237 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。
238 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。
239 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。
240 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。
241 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。
242 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
243 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。
244 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
245 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
246 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
247 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
248 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
249 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。
250 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。
251 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。
252 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。
253 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。
254 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。
255 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。
256 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。
257 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。
258 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。
259 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。
260 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。
261 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
262 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
263 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。
264 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。
265 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。
266 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。
267 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。
268 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。
269 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (来自 NAVER) 伴随论文 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 由 Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 发布。
270 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。
271 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。
272 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。
273 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
274 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。
275 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。
276 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。
277 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。
278 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。
279 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。
280 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy 发布。
281 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
282 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。
283 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 伴随论文 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。
284 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。
285 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
286 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。
287 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。
288 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。
289 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。
290 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。
291 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。
292 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
293 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。
294 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
295 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) released 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。
296 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。
297 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。
298 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。
299 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。
300 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。
301 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
302 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。
303 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。
304 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
305 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
306 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。
307 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。
308 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。
309 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。
310 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。
311 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。
312 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。
313 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。
314 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。
315 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。
316 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。
317 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。
318 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。
319 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。
320 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。
321 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。
322 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
323 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。
324 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。
325 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。
326 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。
327 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
328 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。
329 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
330 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。
331 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。
332 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。
333 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
334 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
335 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。
336 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。
337 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。
338 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。
339 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。
340 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (来自 Microsoft) 伴随论文 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 由 Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 发布。
341 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
342 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
343 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。
344 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。
345 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
346 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。
347 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。
348 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
349 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。
350 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。
351 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。
352 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (来自 Multimedia Computing Group, Nanjing University) 伴随论文 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 由 Zhan Tong, Yibing Song, Jue Wang, Limin Wang 发布。
353 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。
354 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。
355 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。
356 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。
357 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。
358 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。
359 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。
360 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
361 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
362 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。
363 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
364 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。
365 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。
366 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。
367 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。
368 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。
369 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。
370 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。
371 1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到它们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。
372
373 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。
374
375 这些实现均已于多个数据集测试(请参看用例脚本)并应于原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。
376
377
378 ## 了解更多
379
380 | 章节 | 描述 |
381 |-|-|
382 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 |
383 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 |
384 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 |
385 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 |
386 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 |
387 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 和社区上传和分享你微调的模型 |
388 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers |
389
390 ## 引用
391
392 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用:
393 ```bibtex
394 @inproceedings{wolf-etal-2020-transformers,
395 title = "Transformers: State-of-the-Art Natural Language Processing",
396 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
397 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
398 month = oct,
399 year = "2020",
400 address = "Online",
401 publisher = "Association for Computational Linguistics",
402 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
403 pages = "38--45"
404 }
405 ```
406
[end of README_zh-hans.md]
[start of README_zh-hant.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。
20 - Use square quotes, e.g.,「引用」
21 - Some of the terms in this file can be found at the National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
22
23 Dictionary
24
25 API: API (不翻譯)
26 add: 加入
27 checkpoint: 檢查點
28 code: 程式碼
29 community: 社群
30 confidence: 信賴度
31 dataset: 資料集
32 documentation: 文件
33 example: 基本翻譯為「範例」,或依語意翻為「例子」
34 finetune: 微調
35 Hugging Face: Hugging Face(不翻譯)
36 implementation: 實作
37 inference: 推論
38 library: 函式庫
39 module: 模組
40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理
41 online demos: 線上Demo
42 pipeline: pipeline(不翻譯)
43 pretrained/pretrain: 預訓練
44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文
45 repository: repository(不翻譯)
46 summary: 概覽
47 token-: token-(不翻譯)
48 Trainer: Trainer(不翻譯)
49 transformer: transformer(不翻譯)
50 tutorial: 教學
51 user: 使用者
52 -->
53
54 <p align="center">
55 <br>
56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
57 <br>
58 </p>
59 <p align="center">
60 <a href="https://circleci.com/gh/huggingface/transformers">
61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
62 </a>
63 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
65 </a>
66 <a href="https://huggingface.co/docs/transformers/index">
67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
68 </a>
69 <a href="https://github.com/huggingface/transformers/releases">
70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
71 </a>
72 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
74 </a>
75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
76 </p>
77
78 <h4 align="center">
79 <p>
80 <a href="https://github.com/huggingface/transformers/">English</a> |
81 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
82 <b>繁體中文</b> |
83 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
84     </p>
85 </h4>
86
87 <h3 align="center">
88 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p>
89 </h3>
90
91 <h3 align="center">
92 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
93 </h3>
94
95 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。
96
97 🤗 Transformers 提供了便於快速下載和使用的 API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。
98
99 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。
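下面是一個最小的示意(僅為補充說明,非原文內容;假設同時安裝了 PyTorch 與 TensorFlow,並以 `bert-base-uncased` 作為範例檢查點):先用一個框架儲存模型,再用另一個框架透過 `from_pt` 參數載入:

```python
>>> from transformers import AutoModel, TFAutoModel

>>> # 先在 PyTorch 中載入並儲存模型
>>> pt_model = AutoModel.from_pretrained("bert-base-uncased")
>>> pt_model.save_pretrained("./my-bert")

>>> # 再以 TensorFlow 載入同一組權重進行推論
>>> tf_model = TFAutoModel.from_pretrained("./my-bert", from_pt=True)
```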
100
101 ## 線上Demo
102
103 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論 API](https://huggingface.co/pricing)。
104
105 這裡是一些範例:
106 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
107 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
108 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
109 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
110 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
111 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
112 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
113
114 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。
115
116 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務
117
118 <a target="_blank" href="https://huggingface.co/support">
119 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
120 </a><br>
121
122 ## 快速上手
123
124 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子:
125
126 ```python
127 >>> from transformers import pipeline
128
129 # 使用情緒分析 pipeline
130 >>> classifier = pipeline('sentiment-analysis')
131 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
132 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
133 ```
134
135 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案「正面」 (positive) 具有 99.97% 的信賴度。
136
137 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案:
138
139 ``` python
140 >>> from transformers import pipeline
141
142 # 使用問答 pipeline
143 >>> question_answerer = pipeline('question-answering')
144 >>> question_answerer({
145 ... 'question': 'What is the name of the repository ?',
146 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
147 ... })
148 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
149
150 ```
151
152 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API 支援的任務。
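舉例來說(以下程式碼僅為補充示意,非原始教學內容),`pipeline` 也可以透過 `model` 參數指定 model hub 上的任一檢查點;這裡假設使用文本摘要任務與 `facebook/bart-large-cnn` 檢查點:

```python
>>> from transformers import pipeline

>>> # 為摘要任務指定 model hub 上的特定檢查點
>>> summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
>>> summarizer("🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation and text generation.")
```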
153
154 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例:
155 ```python
156 >>> from transformers import AutoTokenizer, AutoModel
157
158 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
159 >>> model = AutoModel.from_pretrained("bert-base-uncased")
160
161 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
162 >>> outputs = model(**inputs)
163 ```
164 這裡是對應的 TensorFlow 程式碼:
165 ```python
166 >>> from transformers import AutoTokenizer, TFAutoModel
167
168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
169 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
170
171 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
172 >>> outputs = model(**inputs)
173 ```
174
175 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。
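例如(以下僅為補充示意,非原文內容),tokenizer 可以一次處理一個句子串列 (list),回傳的字典可直接以 `**` 運算式傳給模型:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

>>> # 串列輸入會被轉成包含 input_ids、attention_mask 等鍵的字典,並自動補齊 (padding)
>>> batch = tokenizer(["Hello world!", "A slightly longer sentence."], padding=True, truncation=True, return_tensors="pt")
>>> outputs = model(**batch)
```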
176
177 模型本身是一個常規的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。
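下面是一個用 `Trainer` API 微調序列分類模型的最小示意(其中的兩筆玩具資料與超參數僅為舉例,並非本文件原有內容;實際用法請參閱上述教學):

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# 玩具資料集:由字典組成的串列,每筆包含 input_ids / attention_mask / labels
texts, labels = ["I love this!", "This is terrible."], [1, 0]
encodings = tokenizer(texts, padding=True, truncation=True)
train_dataset = [
    {"input_ids": encodings["input_ids"][i],
     "attention_mask": encodings["attention_mask"][i],
     "labels": labels[i]}
    for i in range(len(texts))
]

training_args = TrainingArguments(output_dir="./results", num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```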
178
179 ## 為什麼要用 transformers?
180
181 1. 便於使用的先進模型:
182 - NLU 和 NLG 上性能卓越
183 - 對教學和實作友好且低門檻
184 - 高度抽象,使用者只須學習 3 個類別
185     - 對所有模型使用的制式化 API
186
187 1. 更低的運算成本,更少的碳排放:
188 - 研究人員可以分享預訓練的模型而非從頭開始訓練
189 - 工程師可以減少計算時間以及生產成本
190 - 數十種模型架構、兩千多個預訓練模型、100多種語言支援
191
192 1. 對於模型生命週期的每一個部分都面面俱到:
193 - 訓練先進的模型,只需 3 行程式碼
194 - 模型可以在不同深度學習框架之間任意轉換
195 - 為訓練、評估和生產選擇最適合的框架,並完美銜接
196
197 1. 為你的需求輕鬆客製化專屬模型和範例:
198 - 我們為每種模型架構提供了多個範例來重現原論文結果
199 - 一致的模型內部架構
200 - 模型檔案可單獨使用,便於修改和快速實驗
201
202 ## 什麼情況下我不該用 transformers?
203
204 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。
205 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。
206 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/main/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。
207
208 ## 安裝
209
210 ### 使用 pip
211
212 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。
213
214 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
215
216 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。
217
218 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。
219
220 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝:
221
222 ```bash
223 pip install transformers
224 ```
225
226 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。
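常見做法之一(以下指令僅為示意,實際步驟請以上方連結的安裝文件為準)是直接從 GitHub 上的 repository 安裝:

```bash
pip install git+https://github.com/huggingface/transformers
```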
227
228 ### 使用 conda
229
230 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。
231
232 🤗 Transformers 可以藉由 conda 依此安裝:
233
234 ```bash
235 conda install -c huggingface transformers
236 ```
237
238 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。
239
240 ## 模型架構
241
242 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。
243
244 目前的檢查點數量: 
245
246 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)):
247
248 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
249 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
250 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
251 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
252 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
253 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
254 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
255 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
256 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
257 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
258 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
259 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
260 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
261 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
262 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
263 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
264 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
265 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
266 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
267 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
268 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
269 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
270 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
271 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
272 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
273 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
274 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
275 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
276 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
277 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
278 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
279 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
280 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
281 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
282 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
283 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
284 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
285 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
286 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
287 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
288 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
289 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
290 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
291 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
292 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
293 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
294 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
295 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
296 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
297 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
298 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
299 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
300 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
301 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
302 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
303 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
304 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
305 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
306 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
307 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
308 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
309 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
310 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
311 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
312 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
313 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
314 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
315 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
316 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
317 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
318 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
319 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
320 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
321 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
322 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
323 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
324 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
325 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
326 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
327 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
328 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
329 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
330 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
331 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
332 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
333 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
334 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
335 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
336 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
337 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
338 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
339 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
340 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
341 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
342 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
343 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
344 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
345 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
346 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
347 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
348 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
349 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
350 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
351 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
352 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
353 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
354 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released with the paper [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
355 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
356 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
357 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
358 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
359 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
360 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
361 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
362 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
363 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
364 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
365 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
366 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
367 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
368 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
369 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
370 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
371 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
372 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
373 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
374 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
375 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
376 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
377 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
378 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
379 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
380 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
381 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
382 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
383 1. Want to contribute a new model? We have a **detailed guide and templates** to walk you through adding a new model. You can find them in the [`templates`](./templates) directory. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and to contact the maintainers or open an issue to collect feedback before starting your PR.
384
385 To check whether a particular model already has a Flax, PyTorch or TensorFlow implementation, or whether it has an associated tokenizer in the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
386
387 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on the implementations in [this section](https://huggingface.co/docs/transformers/examples) of the examples documentation.
388
389
390 ## Learn more
391
392 | Section | Description |
393 |-|-|
394 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
395 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
396 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
397 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop or with the `Trainer` API |
398 | [Quick tour: Fine-tuning and example scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for a wide range of tasks |
399 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
400 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
401
402 ## Citation
403
404 We have now published a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) for this library. If you use the 🤗 Transformers library, you can cite it as follows:
405 ```bibtex
406 @inproceedings{wolf-etal-2020-transformers,
407 title = "Transformers: State-of-the-Art Natural Language Processing",
408 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
409 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
410 month = oct,
411 year = "2020",
412 address = "Online",
413 publisher = "Association for Computational Linguistics",
414 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
415 pages = "38--45"
416 }
417 ```
418
[end of README_zh-hant.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
d6eeb871706db0d64ab9ffd79f9545d95286b536
|
XLA generation error with repetition_penalty
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce the error (code adapted from https://huggingface.co/blog/tf-xla-generate):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
generation_kwargs = {
"max_new_tokens": 64,
'eos_token_id': 198,
'do_sample': True,
'temperature': 0.72,
'top_k': 0,
'top_p': 0.725,
'repetition_penalty': 1.13,
}
tokenizer = AutoTokenizer.from_pretrained(
"gpt2", padding_side="left", pad_token="</s>"
)
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
model.config.pad_token_id = model.config.eos_token_id
input_text = "repetition_penalty error"
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_text, return_tensors="tf")
print("model.generate")
model.generate(**tokenized_input, **generation_kwargs)
print("xla_generate")
xla_generate(**tokenized_input, **generation_kwargs) # error here
```
Error:
```
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 604, in generate *
seed=model_kwargs.pop("seed", None),
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 1651, in _generate *
input_ids,
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 2475, in sample_body_fn *
next_tokens_scores = logits_processor(generated, next_token_logits, cur_len)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 94, in __call__ *
scores = processor(input_ids, scores, cur_len)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 278, in __call__ *
score_penalties = self._create_score_penalties(input_ids[:, :cur_len], scores)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 265, in _create_score_penalties *
indexable_prev_input_ids = tf.concat(
ValueError: None values not supported.
```
Setting `repetition_penalty` to 1.0, or removing the parameter entirely, makes everything work fine.
### Expected behavior
Text generation with `repetition_penalty` should run without errors when XLA is used.
|
Hey @AlekseyKorshuk 👋 Thank you for the reproducible script!
I have never seen this exception, so I'll have to dig into it. Expect further information soon :)
|
2022-08-16T10:34:17Z
|
<patch>
diff --git a/src/transformers/generation_tf_logits_process.py b/src/transformers/generation_tf_logits_process.py
--- a/src/transformers/generation_tf_logits_process.py
+++ b/src/transformers/generation_tf_logits_process.py
@@ -262,9 +262,11 @@ def _create_score_penalties(self, input_ids: tf.Tensor, logits: tf.Tensor) -> tf
# Scatters the penalties
token_penalties = tf.ones(logits.shape)
+ batch_size = input_ids.shape[0]
+ seq_len = tf.shape(input_ids)[1] # the sequence length has dynamic size, hence the dynamic shape
indexable_prev_input_ids = tf.concat(
(
- tf.expand_dims(tf.repeat(tf.range(input_ids.shape[0]), input_ids.shape[1]), axis=-1),
+ tf.expand_dims(tf.repeat(tf.range(batch_size), seq_len), axis=-1),
tf.expand_dims(tf.reshape(input_ids, [-1]), axis=-1),
),
axis=1,
</patch>
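For context on why the one-line change above works, here is a minimal standalone sketch (not the `transformers` implementation): when `generate` is traced for XLA, the sequence axis of `input_ids` is a dynamic dimension, so the static shape `input_ids.shape[1]` is `None` at trace time, which is exactly the `None` that `tf.concat`/`tf.repeat` reject, while `tf.shape(input_ids)[1]` is a tensor resolved at run time. The function name and the `(None, None)` input signature below are illustrative assumptions.
```python
import tensorflow as tf


@tf.function(input_signature=[tf.TensorSpec(shape=(None, None), dtype=tf.int32)])
def seq_length(input_ids):
    # Static shape is fixed when the function is traced, so a dynamically sized
    # axis shows up as None here; feeding it into ops raises "None values not supported".
    print("static seq len at trace time:", input_ids.shape[1])  # prints: None (once, while tracing)
    # Dynamic shape is a tensor evaluated at run time, so it is always defined.
    return tf.shape(input_ids)[1]


print(seq_length(tf.zeros((2, 7), dtype=tf.int32)))  # tf.Tensor(7, shape=(), dtype=int32)
```
The patch applies exactly this distinction: the batch axis keeps its static size, while the growing sequence axis is read with `tf.shape`.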
|
[]
|
[]
| |||
PrefectHQ__prefect-1037
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Interpolated config options are computed prior to loading user configs
In `config.toml`:
```
[section]
host = "x"
endpoint = "${section.host}:1000"
```
In `~/.prefect/config.toml`:
```
[section]
host = "y"
```
In Prefect:
```
print(prefect.config.section.endpoint) # "x:1000"
```
We should perform variable interpolation AFTER user configs are loaded
</issue>
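The issue boils down to an ordering problem: `${...}` references are resolved while each file is loaded, so the default file's `endpoint` is frozen to `x:1000` before the user config ever overrides `host`. Below is a minimal sketch of the requested ordering, merge first and interpolate afterwards. It is not Prefect's actual code path; the flat dotted keys and the `merge_then_interpolate` helper are illustrative, though the pattern mirrors the `INTERPOLATION_REGEX` defined in `src/prefect/configuration.py` later in this listing.
```python
import re

# Same pattern as INTERPOLATION_REGEX in src/prefect/configuration.py
INTERPOLATION_REGEX = re.compile(r"\${(.[^${}]*)}")


def merge_then_interpolate(defaults: dict, user: dict) -> dict:
    """Merge `user` over `defaults`, THEN resolve ${...} references (flat dotted keys)."""
    merged = {**defaults, **user}  # user values win
    for _ in range(10):  # bounded passes to avoid runaway circular references
        changed = False
        for key, value in merged.items():
            if not isinstance(value, str):
                continue
            match = INTERPOLATION_REGEX.search(value)
            if not match:
                continue
            ref_value = str(merged.get(match.group(1), ""))
            merged[key] = value.replace(match.group(0), ref_value, 1)
            changed = True
        if not changed:
            break
    return merged


defaults = {"section.host": "x", "section.endpoint": "${section.host}:1000"}
user = {"section.host": "y"}
print(merge_then_interpolate(defaults, user))
# {'section.host': 'y', 'section.endpoint': 'y:1000'}
```
With the values from the issue, `section.endpoint` now resolves to `y:1000` instead of `x:1000`.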
<code>
[start of README.md]
1 <p align="center" style="margin-bottom:40px;">
2 <img src="https://uploads-ssl.webflow.com/5ba446b0e783e26d5a2f2382/5c942c9ca934ec5c88588297_primary-color-vertical.svg" height=350 style="max-height: 350px;">
3 </p>
4
5 <p align="center">
6 <a href=https://circleci.com/gh/PrefectHQ/prefect/tree/master>
7 <img src="https://circleci.com/gh/PrefectHQ/prefect/tree/master.svg?style=shield&circle-token=28689a55edc3c373486aaa5f11a1af3e5fc53344">
8 </a>
9
10 <a href="https://codecov.io/gh/PrefectHQ/prefect">
11 <img src="https://codecov.io/gh/PrefectHQ/prefect/branch/master/graph/badge.svg" />
12 </a>
13
14 <a href=https://github.com/ambv/black style="margin-left: 10px">
15 <img src="https://img.shields.io/badge/code%20style-black-000000.svg">
16 </a>
17
18 <a href="https://gitter.im/prefectio/prefect?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge">
19 <img src="https://badges.gitter.im/prefectio/prefect.svg">
20 </a>
21 </p>
22
23 ## Hello, world! 👋
24
25 We've rebuilt data engineering for the data science era.
26
27 Prefect is a new workflow management system, designed for modern infrastructure and powered by the open-source Prefect Core workflow engine. Users organize `Tasks` into `Flows`, and Prefect takes care of the rest.
28
29 Read the [docs](https://docs.prefect.io); get the [code](#installation); ask us [anything](mailto:[email protected])!
30
31 ```python
32 from prefect import task, Flow
33
34
35 @task
36 def say_hello():
37 print("Hello, world!")
38
39
40 with Flow("My First Flow") as flow:
41 say_hello()
42
43
44 flow.run() # "Hello, world!"
45 ```
46
47 ## Docs
48
49 Prefect's documentation -- including concepts, tutorials, and a full API reference -- is always available at [docs.prefect.io](https://docs.prefect.io).
50
51 ## Contributing
52
53 Read about Prefect's [community](https://docs.prefect.io/guide/welcome/community.html) or dive in to the [development guides](https://docs.prefect.io/guide/development/overview.html) for information about contributions, documentation, code style, and testing.
54
55 Prefect is committed to ensuring a positive environment. All interactions are governed by our [Code of Conduct](https://docs.prefect.io/guide/welcome/code_of_conduct.html).
56
57 ## "...Prefect?"
58
59 From the Latin _praefectus_, meaning "one who is in charge", a prefect is an official who oversees a domain and makes sure that the rules are followed. Similarly, Prefect is responsible for making sure that workflows execute properly.
60
61 It also happens to be the name of a roving researcher for that wholly remarkable book, _The Hitchhiker's Guide to the Galaxy_.
62
63 ## Installation
64
65 ### Requirements
66
67 Prefect requires Python 3.5+.
68
69 ### Install latest release
70
71 Using `pip`:
72
73 ```bash
74 pip install prefect
75 ```
76
77 or `conda`:
78
79 ```bash
80 conda install -c conda-forge prefect
81 ```
82
83 ### Install bleeding edge
84
85 ```bash
86 git clone https://github.com/PrefectHQ/prefect.git
87 pip install ./prefect
88 ```
89
90 ## License
91
92 Prefect is licensed under the Apache Software License version 2.0.
93
[end of README.md]
[start of examples/etl.py]
1 from prefect import Flow, task
2
3
4 @task
5 def extract():
6 return [1, 2, 3]
7
8
9 @task
10 def transform(x):
11 return [i * 10 for i in x]
12
13
14 @task
15 def load(y):
16 print("Received y: {}".format(y))
17
18
19 with Flow("ETL") as flow:
20 e = extract()
21 t = transform(e)
22 l = load(t)
23
24 flow.run()
25
[end of examples/etl.py]
[start of src/prefect/configuration.py]
1 import datetime
2 import logging
3 import os
4 import re
5 from typing import Any, Optional, Union, cast
6
7 import toml
8
9 from prefect.utilities import collections
10
11 DEFAULT_CONFIG = os.path.join(os.path.dirname(__file__), "config.toml")
12 ENV_VAR_PREFIX = "PREFECT"
13 INTERPOLATION_REGEX = re.compile(r"\${(.[^${}]*)}")
14
15
16 class Config(collections.DotDict):
17
18 __protect_critical_keys__ = False
19
20 def __getattr__(self, attr: str) -> Any:
21 """
22 This method helps mypy discover attribute types without annotations
23 """
24 if attr in self:
25 return super().__getattr__(attr) # type: ignore
26 else:
27 raise AttributeError("Config has no key '{}'".format(attr))
28
29 def copy(self) -> "Config":
30 """
31 Create a copy of the config. Each level of the Config is a new Config object, so
32 modifying keys won't affect the original Config object. However, values are not
33 deep-copied, and mutations can affect the original.
34 """
35 new_config = Config()
36 for key, value in self.items():
37 if isinstance(value, Config):
38 value = value.copy()
39 new_config[key] = value
40 return new_config
41
42 def get_nested(self, key: str, default: Any = None) -> Any:
43 """
44 Retrieves a (possibly nested) config key's value, creating intermediate keys if
45 necessary
46
47 For example:
48 >>> config = Config(a=Config(b=Config(c=5)))
49 >>> assert config.get_nested('a.b.c') == 5
50
51 >>> config = Config()
52 >>> assert config.get_nested('a.b.c') is None
53
54 Args:
55 - key (str): a key, indicated nested keys by separating them with '.'
56 - default (Any): a value to return if the key is not found
57 """
58 tmp_val = self
59 for k in key.split("."):
60 if isinstance(tmp_val, Config) and k in tmp_val:
61 tmp_val = tmp_val[k]
62 else:
63 return default
64 return tmp_val
65
66 def set_nested(self, key: str, value: Any) -> None:
67 """
68 Sets a (possibly nested) config key to have some value. Creates intermediate keys
69 if necessary.
70
71 For example:
72 >>> config = Config()
73 >>> config.set_nested('a.b.c', 5)
74 >>> assert config.a.b.c == 5
75
76 Args:
77 - key (str): a key, indicated nested keys by separating them with '.'
78 - value (Any): a value to set
79
80 """
81 config = self
82 keys = key.split(".")
83 for k in keys[:-1]:
84 # get the value of the config under the provided key
85 new_config = config.setdefault(k, Config())
86 # if the value is not a config, then we are overwriting an existing config setting
87 if not isinstance(new_config, Config):
88 # assign a new config to the key
89 new_config = Config()
90 config[k] = new_config
91 # get the new config so we can continue into the nested keys
92 config = new_config
93
94 config[keys[-1]] = value
95
96 def setdefault_nested(self, key: str, value: Any) -> Any:
97 """
98 Sets a (possibly nested) config key to have some value, if it doesn't already exist.
99 Creates intermediate keys if necessary.
100
101 For example:
102 >>> config = Config()
103 >>> config.setdefault_nested('a.b.c', 5)
104 >>> assert config.a.b.c == 5
105 >>> config.setdefault_nested('a.b.c', 10)
106 >>> assert config.a.b.c == 5
107
108 Args:
109 - key (str): a key, indicated nested keys by separating them with '.'
110 - value (Any): a value to set
111
112 Returns:
113 Any: the value at the provided key
114
115 """
116 config = self
117 keys = key.split(".")
118 for k in keys[:-1]:
119 config = config.setdefault(k, Config())
120 if keys[-1] not in config:
121 config[keys[-1]] = value
122 return config[keys[-1]]
123
124
125 def string_to_type(val: str) -> Union[bool, int, float, str]:
126 """
127 Helper function for transforming string env var values into typed values.
128
129 Maps:
130 - "true" (any capitalization) to `True`
131 - "false" (any capitalization) to `False`
132 - integers to `int`
133 - floats to `float`
134
135 Arguments:
136 - val (str): the string value of an environment variable
137
138 Returns:
139 Union[bool, int, float, str]: the type-cast env var value
140 """
141
142 # bool
143 if val.upper() == "TRUE":
144 return True
145 elif val.upper() == "FALSE":
146 return False
147
148 # int
149 try:
150 val_as_int = int(val)
151 if str(val_as_int) == val:
152 return val_as_int
153 except Exception:
154 pass
155
156 # float
157 try:
158 val_as_float = float(val)
159 if str(val_as_float) == val:
160 return val_as_float
161 except Exception:
162 pass
163
164 # return string value
165 return val
166
167
168 def interpolate_env_var(env_var: str) -> Optional[Union[bool, int, float, str]]:
169 """
170 Expands (potentially nested) env vars by repeatedly applying
171 `expandvars` and `expanduser` until interpolation stops having
172 any effect.
173 """
174 if not env_var or not isinstance(env_var, str):
175 return env_var
176
177 counter = 0
178
179 while counter < 10:
180 interpolated = os.path.expanduser(os.path.expandvars(str(env_var)))
181 if interpolated == env_var:
182 # if a change was made, apply string-to-type casts; otherwise leave alone
183 # this is because we don't want to override TOML type-casting if this function
184 # is applied to a non-interpolated value
185 if counter > 1:
186 interpolated = string_to_type(interpolated) # type: ignore
187 return interpolated
188 else:
189 env_var = interpolated
190 counter += 1
191
192 return None
193
194
195 def create_user_config(dest_path: str, source_path: str = DEFAULT_CONFIG) -> None:
196 """
197 Copies the default configuration to a user-customizable file at `dest_path`
198 """
199 dest_path = cast(str, interpolate_env_var(dest_path))
200 if os.path.isfile(dest_path):
201 raise ValueError("File already exists: {}".format(dest_path))
202 os.makedirs(os.path.dirname(dest_path), exist_ok=True)
203
204 with open(dest_path, "w") as dest:
205 with open(source_path, "r") as source:
206 dest.write(source.read())
207
208
209 # Process Config -------------------------------------------------------------
210
211
212 def process_task_defaults(config: Config) -> Config:
213 """
214 Converts task defaults from basic types to Python objects like timedeltas
215
216 Args:
217 - config (Config): the configuration to modify
218 """
219 # make sure defaults exists
220 defaults = config.setdefault_nested("tasks.defaults", Config())
221
222 # max_retries defaults to 0 if not set, False, or None
223 if not defaults.setdefault("max_retries", 0):
224 defaults.max_retries = 0
225 defaults.max_retries = defaults.get("max_retries", 0) or 0
226
227 # retry_delay defaults to None if not set - also check for False because TOML has no NULL
228 if defaults.setdefault("retry_delay", False) is False:
229 defaults.retry_delay = None
230 elif isinstance(defaults.retry_delay, int):
231 defaults.retry_delay = datetime.timedelta(seconds=defaults.retry_delay)
232
233 # timeout defaults to None if not set - also check for False because TOML has no NULL
234 if defaults.setdefault("timeout", False) is False:
235 defaults.timeout = None
236 elif isinstance(defaults.timeout, int):
237 defaults.timeout = datetime.timedelta(seconds=defaults.timeout)
238
239 return config
240
241
242 # Validation ------------------------------------------------------------------
243
244
245 def validate_config(config: Config) -> None:
246 """
247 Placeholder for future config validation. For example, invalid values could raise an error.
248 """
249 pass
250
251
252 # Load configuration ----------------------------------------------------------
253
254
255 def load_config_file(path: str, env_var_prefix: str = None, env: dict = None) -> Config:
256 """
257 Loads a configuration file from a path, optionally merging it into an existing
258 configuration.
259 """
260
261 # load the configuration file
262 config = {
263 key.lower(): value
264 for key, value in toml.load(cast(str, interpolate_env_var(path))).items()
265 }
266
267 # toml supports nested dicts, so we work with a flattened representation to do any
268 # requested interpolation
269 flat_config = collections.dict_to_flatdict(config)
270
271 # --------------------- Interpolate env vars -----------------------
272 # check if any env var sets a configuration value with the format:
273 # [ENV_VAR_PREFIX]__[Section]__[Optional Sub-Sections...]__[Key] = Value
274 # and if it does, add it to the config file.
275
276 env = cast(dict, env or os.environ)
277 if env_var_prefix:
278
279 for env_var in env:
280 if env_var.startswith(env_var_prefix + "__"):
281
282 # strip the prefix off the env var
283 env_var_option = env_var[len(env_var_prefix + "__") :]
284
285 # make sure the resulting env var has at least one delimitied section and key
286 if "__" not in env_var:
287 continue
288
289 # env vars with escaped characters are interpreted as literal "\", which
290 # Python helpfully escapes with a second "\". This step makes sure that
291 # escaped characters are properly interpreted.
292 value = cast(str, env.get(env_var)).encode().decode("unicode_escape")
293
294 # place the env var in the flat config as a compound key
295 config_option = collections.CompoundKey(
296 env_var_option.lower().split("__")
297 )
298 flat_config[config_option] = string_to_type(
299 cast(str, interpolate_env_var(value))
300 )
301
302 # interpolate any env vars referenced
303 for k, v in list(flat_config.items()):
304 flat_config[k] = interpolate_env_var(v)
305
306 # --------------------- Interpolate other config keys -----------------
307 # TOML doesn't support references to other keys... but we do!
308 # This has the potential to lead to nasty recursions, so we check at most 10 times.
309 # we use a set called "keys_to_check" to track only the ones of interest, so we aren't
310 # checking every key every time.
311
312 keys_to_check = set(flat_config.keys())
313
314 for _ in range(10):
315
316 # iterate over every key and value to check if the value uses interpolation
317 for k in list(keys_to_check):
318
319 # if the value isn't a string, it can't be a reference, so we exit
320 if not isinstance(flat_config[k], str):
321 keys_to_check.remove(k)
322 continue
323
324 # see if the ${...} syntax was used in the value and exit if it wasn't
325 match = INTERPOLATION_REGEX.search(flat_config[k])
326 if not match:
327 keys_to_check.remove(k)
328 continue
329
330 # the matched_string includes "${}"; the matched_key is just the inner value
331 matched_string = match.group(0)
332 matched_key = match.group(1)
333
334 # get the referenced key from the config value
335 ref_key = collections.CompoundKey(matched_key.split("."))
336 # get the value corresponding to the referenced key
337 ref_value = flat_config.get(ref_key, "")
338
339 # if the matched was the entire value, replace it with the interpolated value
340 if flat_config[k] == matched_string:
341 flat_config[k] = ref_value
342 # if it was a partial match, then drop the interpolated value into the string
343 else:
344 flat_config[k] = flat_config[k].replace(
345 matched_string, str(ref_value), 1
346 )
347
348 return cast(Config, collections.flatdict_to_dict(flat_config, dct_class=Config))
349
350
351 def load_configuration(
352 config_path: str,
353 env_var_prefix: str = None,
354 merge_into_config: Config = None,
355 env: dict = None,
356 ) -> Config:
357 """
358 Given a `config_path` with a toml configuration file, returns a Config object.
359
360 Args:
361 - config_path (str): the path to the toml configuration file
362 - env_var_prefix (str): if provided, environment variables starting with this prefix
363 will be added as configuration settings.
364 - merge_into_config (Config): if provided, the configuration loaded from
365 `config_path` will be merged into a copy of this configuration file. The merged
366 Config is returned.
367 """
368
369 # load default config
370 config = load_config_file(config_path, env_var_prefix=env_var_prefix or "", env=env)
371
372 if merge_into_config is not None:
373 config = cast(Config, collections.merge_dicts(merge_into_config, config))
374
375 validate_config(config)
376
377 return config
378
379
380 config = load_configuration(config_path=DEFAULT_CONFIG, env_var_prefix=ENV_VAR_PREFIX)
381
382 # if user config exists, load and merge it with default config
383 user_config_path = config.get("user_config_path", None)
384 if user_config_path and os.path.isfile(user_config_path):
385 config = load_configuration(
386 config_path=user_config_path,
387 env_var_prefix=ENV_VAR_PREFIX,
388 merge_into_config=config,
389 )
390
391 config = process_task_defaults(config)
392
[end of src/prefect/configuration.py]
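The comment block inside `load_config_file` above describes how environment variables of the form `PREFECT__<SECTION>__<KEY>` are folded into the configuration after the TOML file is read. A small hypothetical usage sketch of that mechanism follows; the `demo_config.toml` file name and its values are made up for illustration, and the snippet assumes Prefect is installed so `prefect.configuration` can be imported.
```python
from prefect.configuration import load_config_file

# Throwaway TOML file for the demo.
with open("demo_config.toml", "w") as f:
    f.write('[section]\nhost = "x"\n')

config = load_config_file(
    "demo_config.toml",
    env_var_prefix="PREFECT",
    env={"PREFECT__SECTION__HOST": "y"},
)
print(config.section.host)  # "y" -- the env var overrides the file value "x"
```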
[start of src/prefect/engine/cache_validators.py]
1 """
2 Cache validators are functions that determine if a task's output cache
3 is still valid, or whether that task should be re-run; they are provided at
4 Task creation via the `cache_validator` keyword argument (for more information
5 on instantiating Tasks see the [Task documentation](../core/task.html)).
6
7 Task caches are created at Task runtime if and only if the `cache_for` keyword
8 argument is provided to the Task, which specifies how long the output cache will be valid for
9 after its creation. Cache validators come into play when a cached Task is re-run,
10 and are used to determine whether to re-run the Task or use the cache.
11
12 Note that _all_ validators take into account cache expiration.
13
14 A cache validator returns `True` if the cache is still valid, and `False` otherwise.
15 """
16 from typing import Any, Callable, Dict, Iterable
17
18 import pendulum
19
20 import prefect
21
22
23 def never_use(
24 state: "prefect.engine.state.Cached",
25 inputs: Dict[str, Any],
26 parameters: Dict[str, Any],
27 ) -> bool:
28 """
29 Never uses the cache.
30
31 Args:
32 - state (State): a `Success` state from the last successful Task run which contains the cache
33 - inputs (dict): a `dict` of inputs which were available on the last
34 successful run of the cached Task
35 - parameters (dict): a `dict` of parameters which were available on the
36 last successful run of the cached Task
37
38 Returns:
39 - boolean specifying whether or not the cache should be used
40 """
41 return False
42
43
44 def duration_only(
45 state: "prefect.engine.state.Cached",
46 inputs: Dict[str, Any],
47 parameters: Dict[str, Any],
48 ) -> bool:
49 """
50 Validates the cache based only on cache expiration.
51
52 Args:
53 - state (State): a `Success` state from the last successful Task run which contains the cache
54 - inputs (dict): a `dict` of inputs which were available on the last
55 successful run of the cached Task
56 - parameters (dict): a `dict` of parameters which were available on the
57 last successful run of the cached Task
58
59 Returns:
60 - boolean specifying whether or not the cache should be used
61 """
62 if state.cached_result_expiration is None:
63 return True
64 elif state.cached_result_expiration > pendulum.now("utc"):
65 return True
66 else:
67 return False
68
69
70 def all_inputs(
71 state: "prefect.engine.state.Cached",
72 inputs: Dict[str, Any],
73 parameters: Dict[str, Any],
74 ) -> bool:
75 """
76 Validates the cache based on cache expiration _and_ all inputs which were provided
77 on the last successful run.
78
79 Args:
80 - state (State): a `Success` state from the last successful Task run which contains the cache
81 - inputs (dict): a `dict` of inputs which were available on the last
82 successful run of the cached Task
83 - parameters (dict): a `dict` of parameters which were available on the
84 last successful run of the cached Task
85
86 Returns:
87 - boolean specifying whether or not the cache should be used
88 """
89 if duration_only(state, inputs, parameters) is False:
90 return False
91 elif state.cached_inputs == inputs:
92 return True
93 else:
94 return False
95
96
97 def all_parameters(
98 state: "prefect.engine.state.Cached",
99 inputs: Dict[str, Any],
100 parameters: Dict[str, Any],
101 ) -> bool:
102 """
103 Validates the cache based on cache expiration _and_ all parameters which were provided
104 on the last successful run.
105
106 Args:
107 - state (State): a `Success` state from the last successful Task run which contains the cache
108 - inputs (dict): a `dict` of inputs which were available on the last
109 successful run of the cached Task
110 - parameters (dict): a `dict` of parameters which were available on the
111 last successful run of the cached Task
112
113 Returns:
114 - boolean specifying whether or not the cache should be used
115 """
116 if duration_only(state, inputs, parameters) is False:
117 return False
118 elif state.cached_parameters == parameters:
119 return True
120 else:
121 return False
122
123
124 def partial_parameters_only(validate_on: Iterable[str] = None,) -> Callable:
125 """
126 Validates the cache based on cache expiration _and_ a subset of parameters (determined by the
127 `validate_on` keyword) which were provided on the last successful run.
128
129 Args:
130 - validate_on (list): a `list` of strings specifying the parameter names
131 to validate against
132
133 Returns:
134 - Callable: the actual validation function specifying whether or not the cache should be used
135
136 Example:
137 ```python
138 from datetime import timedelta
139 import pendulum
140 from prefect import Flow, Parameter, task
141 from prefect.engine.cache_validators import partial_parameters_only
142
143 @task(cache_for=timedelta(days=1),
144 cache_validator=partial_parameters_only(validate_on=['nrows']))
145 def daily_db_refresh(nrows, runtime):
146 pass
147
148 with Flow("My Flow") as f:
149 nrows = Parameter("nrows", default=500)
150 runtime = Parameter("runtime")
151 db_state = daily_db_refresh(nrows, runtime)
152
153 state1 = f.run(parameters=dict(nrows=1000, runtime=pendulum.now('utc')))
154
155 ## the second run will use the cache contained within state1.result[db_state]
156 ## even though `runtime` has changed
157 state2 = f.run(parameters=dict(nrows=1000, runtime=pendulum.now('utc')),
158 task_states={result: state1.result[db_state]})
159 ```
160 """
161
162 def _partial_parameters_only(
163 state: "prefect.engine.state.Cached",
164 inputs: Dict[str, Any],
165 parameters: Dict[str, Any],
166 ) -> bool:
167 """
168 The actual cache validation function which will be used.
169
170 Args:
171 - state (State): a `Success` state from the last successful Task run which contains the cache
172 - inputs (dict): a `dict` of inputs which were available on the last
173 successful run of the cached Task
174 - parameters (dict): a `dict` of parameters which were available on the
175 last successful run of the cached Task
176
177 Returns:
178 - boolean specifying whether or not the cache should be used
179 """
180 parameters = parameters or {}
181 if duration_only(state, inputs, parameters) is False:
182 return False
183 elif validate_on is None:
184 return (
185 True
186 ) # if you dont want to validate on anything, then the cache is valid
187 else:
188 cached = state.cached_parameters or {}
189 partial_provided = {
190 key: value for key, value in parameters.items() if key in validate_on
191 }
192 partial_needed = {
193 key: value for key, value in cached.items() if key in validate_on
194 }
195 return partial_provided == partial_needed
196
197 return _partial_parameters_only
198
199
200 def partial_inputs_only(validate_on: Iterable[str] = None,) -> Callable:
201 """
202 Validates the cache based on cache expiration _and_ a subset of inputs (determined by the
203 `validate_on` keyword) which were provided on the last successful run.
204
205 Args:
206 - validate_on (list): a `list` of strings specifying the input names
207 to validate against
208
209 Returns:
210 - Callable: the actual validation function specifying whether or not the cache should be used
211
212 Example:
213 ```python
214 import random
215 from datetime import timedelta
216 from prefect import Flow, task
217 from prefect.engine.cache_validators import partial_inputs_only
218
219 @task(cache_for=timedelta(days=1),
220 cache_validator=partial_inputs_only(validate_on=['x', 'y']))
221 def add(x, y, as_string=False):
222 if as_string:
223 return '{0} + {1}'.format(x, y)
224 return x + y
225
226 @task
227 def rand_bool():
228 return random.random() > 0.5
229
230 with Flow("My Flow") as f:
231 ans = add(1, 2, rand_bool())
232
233 state1 = f.run()
234 ## the second run will use the cache contained within state1.result[ans]
235 ## even though `rand_bool` might change
236 state2 = f.run(task_states={result: state1.result[ans]})
237 ```
238 """
239
240 def _partial_inputs_only(
241 state: "prefect.engine.state.Cached",
242 inputs: Dict[str, Any],
243 parameters: Dict[str, Any],
244 ) -> bool:
245 """
246 The actual cache validation function which will be used.
247
248 Args:
249 - state (State): a `Success` state from the last successful Task run which contains the cache
250 - inputs (dict): a `dict` of inputs which were available on the last
251 successful run of the cached Task
252 - parameters (dict): a `dict` of parameters which were available on the
253 last successful run of the cached Task
254
255 Returns:
256 - boolean specifying whether or not the cache should be used
257 """
258 inputs = inputs or {}
259 if duration_only(state, inputs, parameters) is False:
260 return False
261 elif validate_on is None:
262 return (
263 True
264 ) # if you dont want to validate on anything, then the cache is valid
265 else:
266 cached = state.cached_inputs or {}
267 partial_provided = {
268 key: value for key, value in inputs.items() if key in validate_on
269 }
270 partial_needed = {
271 key: value for key, value in cached.items() if key in validate_on
272 }
273 return partial_provided == partial_needed
274
275 return _partial_inputs_only
276
[end of src/prefect/engine/cache_validators.py]
[start of src/prefect/engine/flow_runner.py]
1 from typing import (
2 Any,
3 Callable,
4 Dict,
5 Iterable,
6 List,
7 NamedTuple,
8 Optional,
9 Set,
10 Tuple,
11 Union,
12 )
13
14 import pendulum
15
16 import prefect
17 from prefect import config
18 from prefect.core import Edge, Flow, Task
19 from prefect.engine import signals
20 from prefect.engine.runner import ENDRUN, Runner, call_state_handlers
21 from prefect.engine.state import (
22 Failed,
23 Mapped,
24 Pending,
25 Retrying,
26 Running,
27 Scheduled,
28 State,
29 Success,
30 )
31 from prefect.engine.task_runner import TaskRunner
32 from prefect.utilities.collections import flatten_seq
33 from prefect.utilities.executors import run_with_heartbeat
34
35 FlowRunnerInitializeResult = NamedTuple(
36 "FlowRunnerInitializeResult",
37 [
38 ("state", State),
39 ("task_states", Dict[Task, State]),
40 ("context", Dict[str, Any]),
41 ("task_contexts", Dict[Task, Dict[str, Any]]),
42 ],
43 )
44
45
46 class FlowRunner(Runner):
47 """
48 FlowRunners handle the execution of Flows and determine the State of a Flow
49 before, during and after the Flow is run.
50
51 In particular, through the FlowRunner you can specify which tasks should be
52 the first tasks to run, which tasks should be returned after the Flow is finished,
53 and what states each task should be initialized with.
54
55 Args:
56 - flow (Flow): the `Flow` to be run
57 - task_runner_cls (TaskRunner, optional): The class used for running
58 individual Tasks. Defaults to [TaskRunner](task_runner.html)
59 - state_handlers (Iterable[Callable], optional): A list of state change handlers
60 that will be called whenever the flow changes state, providing an
61 opportunity to inspect or modify the new state. The handler
62 will be passed the flow runner instance, the old (prior) state, and the new
63 (current) state, with the following signature:
64 `state_handler(fr: FlowRunner, old_state: State, new_state: State) -> Optional[State]`
65 If multiple functions are passed, then the `new_state` argument will be the
66 result of the previous handler.
67
68 Note: new FlowRunners are initialized within the call to `Flow.run()` and in general,
69 this is the endpoint through which FlowRunners will be interacted with most frequently.
70
71 Example:
72 ```python
73 @task
74 def say_hello():
75 print('hello')
76
77 with Flow("My Flow") as f:
78 say_hello()
79
80 fr = FlowRunner(flow=f)
81 flow_state = fr.run()
82 ```
83 """
84
85 def __init__(
86 self,
87 flow: Flow,
88 task_runner_cls: type = None,
89 state_handlers: Iterable[Callable] = None,
90 ):
91 self.flow = flow
92 if task_runner_cls is None:
93 task_runner_cls = prefect.engine.get_default_task_runner_class()
94 self.task_runner_cls = task_runner_cls
95 super().__init__(state_handlers=state_handlers)
96
97 def call_runner_target_handlers(self, old_state: State, new_state: State) -> State:
98 """
99 A special state handler that the FlowRunner uses to call its flow's state handlers.
100 This method is called as part of the base Runner's `handle_state_change()` method.
101
102 Args:
103 - old_state (State): the old (previous) state
104 - new_state (State): the new (current) state
105
106 Returns:
107 - State: the new state
108 """
109 self.logger.debug(
110 "Flow '{name}': Handling state change from {old} to {new}".format(
111 name=self.flow.name,
112 old=type(old_state).__name__,
113 new=type(new_state).__name__,
114 )
115 )
116 for handler in self.flow.state_handlers:
117 new_state = handler(self.flow, old_state, new_state) or new_state
118
119 return new_state
120
121 def initialize_run( # type: ignore
122 self,
123 state: Optional[State],
124 task_states: Dict[Task, State],
125 context: Dict[str, Any],
126 task_contexts: Dict[Task, Dict[str, Any]],
127 parameters: Dict[str, Any],
128 ) -> FlowRunnerInitializeResult:
129 """
130 Initializes the Task run by initializing state and context appropriately.
131
132 If the provided state is a Submitted state, the state it wraps is extracted.
133
134 Args:
135 - state (Optional[State]): the initial state of the run
136 - task_states (Dict[Task, State]): a dictionary of any initial task states
137 - context (Dict[str, Any], optional): prefect.Context to use for execution
138 to use for each Task run
139 - task_contexts (Dict[Task, Dict[str, Any]], optional): contexts that will be provided to each task
140 - parameters(dict): the parameter values for the run
141
142 Returns:
143 - NamedTuple: a tuple of initialized objects:
144 `(state, task_states, context, task_contexts)`
145 """
146
147 # overwrite context parameters one-by-one
148 if parameters:
149 context_params = context.setdefault("parameters", {})
150 for param, value in parameters.items():
151 context_params[param] = value
152
153 context.update(flow_name=self.flow.name)
154 context.setdefault("scheduled_start_time", pendulum.now("utc"))
155
156 # add various formatted dates to context
157 now = pendulum.now("utc")
158 dates = {
159 "date": now,
160 "today": now.strftime("%Y-%m-%d"),
161 "yesterday": now.add(days=-1).strftime("%Y-%m-%d"),
162 "tomorrow": now.add(days=1).strftime("%Y-%m-%d"),
163 "today_nodash": now.strftime("%Y%m%d"),
164 "yesterday_nodash": now.add(days=-1).strftime("%Y%m%d"),
165 "tomorrow_nodash": now.add(days=1).strftime("%Y%m%d"),
166 }
167 for key, val in dates.items():
168 context.setdefault(key, val)
169
170 for task in self.flow.tasks:
171 task_contexts.setdefault(task, {}).update(
172 task_name=task.name, task_slug=task.slug
173 )
174 state, context = super().initialize_run(state=state, context=context)
175 return FlowRunnerInitializeResult(
176 state=state,
177 task_states=task_states,
178 context=context,
179 task_contexts=task_contexts,
180 )
181
182 def run(
183 self,
184 state: State = None,
185 task_states: Dict[Task, State] = None,
186 return_tasks: Iterable[Task] = None,
187 parameters: Dict[str, Any] = None,
188 task_runner_state_handlers: Iterable[Callable] = None,
189 executor: "prefect.engine.executors.Executor" = None,
190 context: Dict[str, Any] = None,
191 task_contexts: Dict[Task, Dict[str, Any]] = None,
192 ) -> State:
193 """
194 The main endpoint for FlowRunners. Calling this method will perform all
195 computations contained within the Flow and return the final state of the Flow.
196
197 Args:
198 - state (State, optional): starting state for the Flow. Defaults to
199 `Pending`
200 - task_states (dict, optional): dictionary of task states to begin
201 computation with, with keys being Tasks and values their corresponding state
202 - return_tasks ([Task], optional): list of Tasks to include in the
203 final returned Flow state. Defaults to `None`
204 - parameters (dict, optional): dictionary of any needed Parameter
205 values, with keys being strings representing Parameter names and values being
206 their corresponding values
207 - task_runner_state_handlers (Iterable[Callable], optional): A list of state change
208 handlers that will be provided to the task_runner, and called whenever a task changes
209 state.
210 - executor (Executor, optional): executor to use when performing
211 computation; defaults to the executor specified in your prefect configuration
212 - context (Dict[str, Any], optional): prefect.Context to use for execution
213 to use for each Task run
214 - task_contexts (Dict[Task, Dict[str, Any]], optional): contexts that will be provided to each task
215
216 Returns:
217 - State: `State` representing the final post-run state of the `Flow`.
218
219 """
220
221 self.logger.info("Beginning Flow run for '{}'".format(self.flow.name))
222
223 # make copies to avoid modifying user inputs
224 task_states = dict(task_states or {})
225 context = dict(context or {})
226 task_contexts = dict(task_contexts or {})
227 parameters = dict(parameters or {})
228 if executor is None:
229 executor = prefect.engine.get_default_executor_class()()
230
231 try:
232 state, task_states, context, task_contexts = self.initialize_run(
233 state=state,
234 task_states=task_states,
235 context=context,
236 task_contexts=task_contexts,
237 parameters=parameters,
238 )
239
240 with prefect.context(context):
241 state = self.check_flow_is_pending_or_running(state)
242 state = self.check_flow_reached_start_time(state)
243 state = self.set_flow_to_running(state)
244 state = self.get_flow_run_state(
245 state,
246 task_states=task_states,
247 task_contexts=task_contexts,
248 return_tasks=return_tasks,
249 task_runner_state_handlers=task_runner_state_handlers,
250 executor=executor,
251 )
252
253 except ENDRUN as exc:
254 state = exc.state
255
256 # All other exceptions are trapped and turned into Failed states
257 except Exception as exc:
258 self.logger.error(
259 "Unexpected error while running flow: {}".format(repr(exc))
260 )
261 if prefect.context.get("raise_on_exception"):
262 raise exc
263 new_state = Failed(
264 message="Unexpected error while running flow: {}".format(repr(exc)),
265 result=exc,
266 )
267 state = self.handle_state_change(state or Pending(), new_state)
268
269 return state
270
271 @call_state_handlers
272 def check_flow_reached_start_time(self, state: State) -> State:
273 """
274 Checks if the Flow is in a Scheduled state and, if it is, ensures that the scheduled
275 time has been reached.
276
277 Args:
278 - state (State): the current state of this Flow
279
280 Returns:
281 - State: the state of the flow after performing the check
282
283 Raises:
284 - ENDRUN: if the flow is Scheduled with a future scheduled time
285 """
286 if isinstance(state, Scheduled):
287 if state.start_time and state.start_time > pendulum.now("utc"):
288 self.logger.debug(
289 "Flow '{name}': start_time has not been reached; ending run.".format(
290 name=self.flow.name
291 )
292 )
293 raise ENDRUN(state)
294 return state
295
296 @call_state_handlers
297 def check_flow_is_pending_or_running(self, state: State) -> State:
298 """
299 Checks if the flow is in either a Pending state or a Running state. Either is a valid
300 starting point (because we allow simultaneous runs of the same flow run).
301
302 Args:
303 - state (State): the current state of this flow
304
305 Returns:
306 - State: the state of the flow after running the check
307
308 Raises:
309 - ENDRUN: if the flow is not pending or running
310 """
311
312 # the flow run is already finished
313 if state.is_finished() is True:
314 self.logger.info("Flow run has already finished.")
315 raise ENDRUN(state)
316
317 # the flow run must be either pending or running (possibly redundant with above)
318 elif not (state.is_pending() or state.is_running()):
319 self.logger.info("Flow is not ready to run.")
320 raise ENDRUN(state)
321
322 return state
323
324 @call_state_handlers
325 def set_flow_to_running(self, state: State) -> State:
326 """
327 Puts Pending flows in a Running state; leaves Running flows Running.
328
329 Args:
330 - state (State): the current state of this flow
331
332 Returns:
333 - State: the state of the flow after running the check
334
335 Raises:
336 - ENDRUN: if the flow is not pending or running
337 """
338 if state.is_pending():
339 self.logger.info("Starting flow run.")
340 return Running(message="Running flow.")
341 elif state.is_running():
342 return state
343 else:
344 raise ENDRUN(state)
345
346 @run_with_heartbeat
347 @call_state_handlers
348 def get_flow_run_state(
349 self,
350 state: State,
351 task_states: Dict[Task, State],
352 task_contexts: Dict[Task, Dict[str, Any]],
353 return_tasks: Set[Task],
354 task_runner_state_handlers: Iterable[Callable],
355 executor: "prefect.engine.executors.base.Executor",
356 ) -> State:
357 """
358 Runs the flow.
359
360 Args:
361 - state (State): starting state for the Flow. Defaults to
362 `Pending`
363 - task_states (dict): dictionary of task states to begin
364 computation with, with keys being Tasks and values their corresponding state
365 - task_contexts (Dict[Task, Dict[str, Any]]): contexts that will be provided to each task
366 - return_tasks ([Task], optional): list of Tasks to include in the
367 final returned Flow state. Defaults to `None`
368 - task_runner_state_handlers (Iterable[Callable]): A list of state change
369 handlers that will be provided to the task_runner, and called whenever a task changes
370 state.
371 - executor (Executor): executor to use when performing
372 computation; defaults to the executor provided in your prefect configuration
373
374 Returns:
375 - State: `State` representing the final post-run state of the `Flow`.
376
377 """
378
379 if not state.is_running():
380 self.logger.info("Flow is not in a Running state.")
381 raise ENDRUN(state)
382
383 if return_tasks is None:
384 return_tasks = set()
385 if set(return_tasks).difference(self.flow.tasks):
386 raise ValueError("Some tasks in return_tasks were not found in the flow.")
387
388 # -- process each task in order
389
390 with executor.start():
391
392 for task in self.flow.sorted_tasks():
393
394 task_state = task_states.get(task)
395
396 # if the state is finished, don't run the task, just use the provided state
397 if (
398 isinstance(task_state, State)
399 and task_state.is_finished()
400 and not task_state.is_cached()
401 and not task_state.is_mapped()
402 ):
403 continue
404
405 upstream_states = {} # type: Dict[Edge, Union[State, Iterable]]
406
407 # -- process each edge to the task
408 for edge in self.flow.edges_to(task):
409 upstream_states[edge] = task_states.get(
410 edge.upstream_task, Pending(message="Task state not available.")
411 )
412
413 # -- run the task
414
415 task_states[task] = executor.submit(
416 self.run_task,
417 task=task,
418 state=task_state,
419 upstream_states=upstream_states,
420 context=dict(prefect.context, **task_contexts.get(task, {})),
421 task_runner_state_handlers=task_runner_state_handlers,
422 executor=executor,
423 )
424
425 # ---------------------------------------------
426 # Collect results
427 # ---------------------------------------------
428
429 # terminal tasks determine if the flow is finished
430 terminal_tasks = self.flow.terminal_tasks()
431
432 # reference tasks determine flow state
433 reference_tasks = self.flow.reference_tasks()
434
435 # wait until all terminal tasks are finished
436 final_tasks = terminal_tasks.union(reference_tasks).union(return_tasks)
437 final_states = executor.wait(
438 {
439 t: task_states.get(t, Pending("Task not evaluated by FlowRunner."))
440 for t in final_tasks
441 }
442 )
443
444 # also wait for any children of Mapped tasks to finish, and add them
445 # to the dictionary to determine flow state
446 all_final_states = final_states.copy()
447 for t, s in list(final_states.items()):
448 if s.is_mapped():
449 s.map_states = executor.wait(s.map_states)
450 s.result = [ms.result for ms in s.map_states]
451 all_final_states[t] = s.map_states
452
453 assert isinstance(final_states, dict)
454
455 key_states = set(flatten_seq([all_final_states[t] for t in reference_tasks]))
456 terminal_states = set(
457 flatten_seq([all_final_states[t] for t in terminal_tasks])
458 )
459 return_states = {t: final_states[t] for t in return_tasks}
460
461 state = self.determine_final_state(
462 key_states=key_states,
463 return_states=return_states,
464 terminal_states=terminal_states,
465 )
466
467 return state
468
469 def determine_final_state(
470 self,
471 key_states: Set[State],
472 return_states: Dict[Task, State],
473 terminal_states: Set[State],
474 ) -> State:
475 """
476 Implements the logic for determining the final state of the flow run.
477
478 Args:
479 - key_states (Set[State]): the states which will determine the success / failure of the flow run
480 - return_states (Dict[Task, State]): states to return as results
481 - terminal_states (Set[State]): the states of the terminal tasks for this flow
482
483 Returns:
484 - State: the final state of the flow run
485 """
486 state = State() # mypy initialization
487
488 # check that the flow is finished
489 if not all(s.is_finished() for s in terminal_states):
490 self.logger.info("Flow run RUNNING: terminal tasks are incomplete.")
491 state = Running(message="Flow run in progress.", result=return_states)
492
493 # check if any key task failed
494 elif any(s.is_failed() for s in key_states):
495 self.logger.info("Flow run FAILED: some reference tasks failed.")
496 state = Failed(message="Some reference tasks failed.", result=return_states)
497
498 # check if all reference tasks succeeded
499 elif all(s.is_successful() for s in key_states):
500 self.logger.info("Flow run SUCCESS: all reference tasks succeeded")
501 state = Success(
502 message="All reference tasks succeeded.", result=return_states
503 )
504
505 # check for any unanticipated state that is finished but neither success nor failed
506 else:
507 self.logger.info("Flow run SUCCESS: no reference tasks failed")
508 state = Success(message="No reference tasks failed.", result=return_states)
509
510 return state
511
512 def run_task(
513 self,
514 task: Task,
515 state: State,
516 upstream_states: Dict[Edge, State],
517 context: Dict[str, Any],
518 task_runner_state_handlers: Iterable[Callable],
519 executor: "prefect.engine.executors.Executor",
520 ) -> State:
521 """
522
523 Runs a specific task. This method is intended to be called by submitting it to
524 an executor.
525
526 Args:
527 - task (Task): the task to run
528 - state (State): starting state for the Flow. Defaults to
529 `Pending`
530 - upstream_states (Dict[Edge, State]): dictionary of upstream states
531 - context (Dict[str, Any]): a context dictionary for the task run
532 - task_runner_state_handlers (Iterable[Callable]): A list of state change
533 handlers that will be provided to the task_runner, and called whenever a task changes
534 state.
535 - executor (Executor): executor to use when performing
536 computation; defaults to the executor provided in your prefect configuration
537
538 Returns:
539 - State: `State` representing the final post-run state of the `Flow`.
540
541 """
542 default_handler = task.result_handler or self.flow.result_handler
543 task_runner = self.task_runner_cls(
544 task=task,
545 state_handlers=task_runner_state_handlers,
546 result_handler=default_handler,
547 )
548
549 # if this task reduces over a mapped state, make sure its children have finished
550 for edge, upstream_state in upstream_states.items():
551
552 # if the upstream state is Mapped, wait until its results are all available
553 if not edge.mapped and upstream_state.is_mapped():
554 assert isinstance(upstream_state, Mapped) # mypy assert
555 upstream_state.map_states = executor.wait(upstream_state.map_states)
556 upstream_state.result = [s.result for s in upstream_state.map_states]
557
558 return task_runner.run(
559 state=state,
560 upstream_states=upstream_states,
561 context=context,
562 executor=executor,
563 )
564
[end of src/prefect/engine/flow_runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
PrefectHQ/prefect
|
003d36ab27615b05e3f3519ef238f6f855d52a3d
|
Interpolated config options are computed prior to loading user configs
In `config.toml`:
```
[section]
host = "x"
endpoint = "${section.host}:1000"
```
In `~/.prefect/config.toml`:
```
[section]
host = "y"
```
In Prefect:
```
print(prefect.config.section.endpoint) # "x:1000"
```
We should perform variable interpolation AFTER user configs are loaded
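As a minimal sketch of that intended ordering (plain-dict helpers invented for illustration, not Prefect's actual loading code): merge the user config over the defaults first, then resolve `${section.key}` references against the merged values.
```python
import re

def lookup(config: dict, dotted: str):
    # Resolve a "section.key" path inside the merged config.
    section, key = dotted.split(".", 1)
    return config[section][key]

def interpolate(config: dict) -> dict:
    # Replace "${section.key}" references using the already-merged values.
    def resolve(value):
        if not isinstance(value, str):
            return value
        return re.sub(r"\$\{([^}]+)\}",
                      lambda m: str(lookup(config, m.group(1))), value)
    return {section: {k: resolve(v) for k, v in values.items()}
            for section, values in config.items()}

defaults = {"section": {"host": "x", "endpoint": "${section.host}:1000"}}
user = {"section": {"host": "y"}}

# merge first, interpolate second
merged = {s: {**defaults.get(s, {}), **user.get(s, {})} for s in set(defaults) | set(user)}
print(interpolate(merged)["section"]["endpoint"])  # -> "y:1000"
```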
|
2019-05-16T19:53:26Z
|
<patch>
diff --git a/src/prefect/configuration.py b/src/prefect/configuration.py
--- a/src/prefect/configuration.py
+++ b/src/prefect/configuration.py
@@ -165,7 +165,7 @@ def string_to_type(val: str) -> Union[bool, int, float, str]:
return val
-def interpolate_env_var(env_var: str) -> Optional[Union[bool, int, float, str]]:
+def interpolate_env_vars(env_var: str) -> Optional[Union[bool, int, float, str]]:
"""
Expands (potentially nested) env vars by repeatedly applying
`expandvars` and `expanduser` until interpolation stops having
@@ -196,7 +196,7 @@ def create_user_config(dest_path: str, source_path: str = DEFAULT_CONFIG) -> Non
"""
Copies the default configuration to a user-customizable file at `dest_path`
"""
- dest_path = cast(str, interpolate_env_var(dest_path))
+ dest_path = cast(str, interpolate_env_vars(dest_path))
if os.path.isfile(dest_path):
raise ValueError("File already exists: {}".format(dest_path))
os.makedirs(os.path.dirname(dest_path), exist_ok=True)
@@ -252,18 +252,21 @@ def validate_config(config: Config) -> None:
# Load configuration ----------------------------------------------------------
-def load_config_file(path: str, env_var_prefix: str = None, env: dict = None) -> Config:
+def load_toml(path: str) -> dict:
"""
- Loads a configuration file from a path, optionally merging it into an existing
- configuration.
+ Loads a config dictionary from TOML
"""
-
- # load the configuration file
- config = {
+ return {
key.lower(): value
- for key, value in toml.load(cast(str, interpolate_env_var(path))).items()
+ for key, value in toml.load(cast(str, interpolate_env_vars(path))).items()
}
+
+def interpolate_config(config: dict, env_var_prefix: str = None) -> Config:
+ """
+ Processes a config dictionary, such as the one loaded from `load_toml`.
+ """
+
# toml supports nested dicts, so we work with a flattened representation to do any
# requested interpolation
flat_config = collections.dict_to_flatdict(config)
@@ -273,10 +276,9 @@ def load_config_file(path: str, env_var_prefix: str = None, env: dict = None) ->
# [ENV_VAR_PREFIX]__[Section]__[Optional Sub-Sections...]__[Key] = Value
# and if it does, add it to the config file.
- env = cast(dict, env or os.environ)
if env_var_prefix:
- for env_var in env:
+ for env_var, env_var_value in os.environ.items():
if env_var.startswith(env_var_prefix + "__"):
# strip the prefix off the env var
@@ -289,19 +291,19 @@ def load_config_file(path: str, env_var_prefix: str = None, env: dict = None) ->
# env vars with escaped characters are interpreted as literal "\", which
# Python helpfully escapes with a second "\". This step makes sure that
# escaped characters are properly interpreted.
- value = cast(str, env.get(env_var)).encode().decode("unicode_escape")
+ value = cast(str, env_var_value.encode().decode("unicode_escape"))
# place the env var in the flat config as a compound key
config_option = collections.CompoundKey(
env_var_option.lower().split("__")
)
flat_config[config_option] = string_to_type(
- cast(str, interpolate_env_var(value))
+ cast(str, interpolate_env_vars(value))
)
# interpolate any env vars referenced
for k, v in list(flat_config.items()):
- flat_config[k] = interpolate_env_var(v)
+ flat_config[k] = interpolate_env_vars(v)
# --------------------- Interpolate other config keys -----------------
# TOML doesn't support references to other keys... but we do!
@@ -349,43 +351,42 @@ def load_config_file(path: str, env_var_prefix: str = None, env: dict = None) ->
def load_configuration(
- config_path: str,
- env_var_prefix: str = None,
- merge_into_config: Config = None,
- env: dict = None,
+ path: str, user_config_path: str = None, env_var_prefix: str = None
) -> Config:
"""
- Given a `config_path` with a toml configuration file, returns a Config object.
+ Loads a configuration from a known location.
Args:
- - config_path (str): the path to the toml configuration file
- - env_var_prefix (str): if provided, environment variables starting with this prefix
- will be added as configuration settings.
- - merge_into_config (Config): if provided, the configuration loaded from
- `config_path` will be merged into a copy of this configuration file. The merged
- Config is returned.
+ - path (str): the path to the TOML configuration file
+ - user_config_path (str): an optional path to a user config file. If not provided,
+ the main config will be checked for a `user_config_path` key. If a user config
+ is provided, it will be used to update the main config prior to interpolation
+ - env_var_prefix (str): any env vars matching this prefix will be used to create
+ configuration values
+
+ Returns:
+ - Config
"""
# load default config
- config = load_config_file(config_path, env_var_prefix=env_var_prefix or "", env=env)
-
- if merge_into_config is not None:
- config = cast(Config, collections.merge_dicts(merge_into_config, config))
-
+ default_config = load_toml(path)
+
+ # load user config
+ if not user_config_path:
+ user_config_path = default_config.get("user_config_path", None)
+
+ if user_config_path and os.path.isfile(str(interpolate_env_vars(user_config_path))):
+ user_config = load_toml(user_config_path)
+ # merge user config into default config
+ default_config = cast(
+ dict, collections.merge_dicts(default_config, user_config)
+ )
+
+ # interpolate after user config has already been merged
+ config = interpolate_config(default_config, env_var_prefix=env_var_prefix)
+ config = process_task_defaults(config)
validate_config(config)
-
return config
-config = load_configuration(config_path=DEFAULT_CONFIG, env_var_prefix=ENV_VAR_PREFIX)
-
-# if user config exists, load and merge it with default config
-user_config_path = config.get("user_config_path", None)
-if user_config_path and os.path.isfile(user_config_path):
- config = load_configuration(
- config_path=user_config_path,
- env_var_prefix=ENV_VAR_PREFIX,
- merge_into_config=config,
- )
-
-config = process_task_defaults(config)
+config = load_configuration(path=DEFAULT_CONFIG, env_var_prefix=ENV_VAR_PREFIX)
</patch>
|
[]
|
[]
| ||||
conan-io__conan-4273
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conan install unexpectedly reports invalid version range
I have this requirement in my `conanfile.py`:
`foo/[>16.5.0 <17.0.0]@bar/stable`
`conan install` aborts with this error:
`ERROR: version range expression '>16.5.0<17.0.0' is not valid`
Note: using the deprecated comma-based syntax works fine.
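For reference, a minimal `conanfile.py` sketch (reusing the placeholder `foo`/`bar/stable` names from above) contrasting the rejected space-separated range with the deprecated comma-separated form that still resolves:

```python
from conans import ConanFile


class ConsumerConan(ConanFile):
    name = "consumer"
    version = "0.1"

    # Space-separated range: aborts with
    #   ERROR: version range expression '>16.5.0<17.0.0' is not valid
    # requires = "foo/[>16.5.0 <17.0.0]@bar/stable"

    # Deprecated comma-separated range: resolves (with a deprecation warning).
    requires = "foo/[>16.5.0,<17.0.0]@bar/stable"
```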
* Python 3.6 on CentOS 7.5
* Conan 1.11.2
* node-semver 0.6.1
</issue>
<code>
[start of README.rst]
1 Conan
2 =====
3
4 A distributed, open-source, C/C++ package manager.
5
6 +------------------------+-------------------------+
7 | **master** | **develop** |
8 +========================+=========================+
9 | |Build Status Master| | |Build Status Develop| |
10 +------------------------+-------------------------+
11
12
13 +------------------------+---------------------------+---------------------------------------------+
14 | **Coverage master** | **Coverage develop** | **Coverage graph** |
15 +========================+===========================+=============================================+
16 | |Master coverage| | |Develop coverage| | |Coverage graph| |
17 +------------------------+---------------------------+---------------------------------------------+
18
19
20 Setup
21 =====
22
23 From binaries
24 -------------
25
26 We have installers for `most platforms here <http://conan.io>`__, but you
27 can run **conan** from source if you want.
28
29 From pip
30 --------
31
32 Conan is compatible with Python 2 and Python 3.
33
34 - Install pip following `pip docs`_.
35 - Install conan:
36
37 .. code-block:: bash
38
39 $ pip install conan
40
41 You can also use `test.pypi.org <https://test.pypi.org/project/conan/#history>`_ repository to install development (non-stable) Conan versions:
42
43
44 .. code-block:: bash
45
46 $ pip install --index-url https://test.pypi.org/simple/ conan
47
48
49 From Homebrew (OSX)
50 -------------------
51
52 - Install Homebrew following `brew homepage`_.
53
54 .. code-block:: bash
55
56 $ brew update
57 $ brew install conan
58
59 From source
60 -----------
61
62 You can run **conan** client and server in Windows, MacOS, and Linux.
63
64 - **Install pip following** `pip docs`_.
65
66 - **Clone conan repository:**
67
68 .. code-block:: bash
69
70 $ git clone https://github.com/conan-io/conan.git
71
72 - **Install in editable mode**
73
74 .. code-block:: bash
75
76 $ cd conan && sudo pip install -e .
77
78 If you are on Windows, using ``sudo`` is not required.
79
80 - **You are ready, try to run conan:**
81
82 .. code-block::
83
84 $ conan --help
85
86 Consumer commands
87 install Installs the requirements specified in a conanfile (.py or .txt).
88 config Manages configuration. Edits the conan.conf or installs config files.
89 get Gets a file or list a directory of a given reference or package.
90 info Gets information about the dependency graph of a recipe.
91 search Searches package recipes and binaries in the local cache or in a remote.
92 Creator commands
93 new Creates a new package recipe template with a 'conanfile.py'.
94 create Builds a binary package for recipe (conanfile.py) located in current dir.
95 upload Uploads a recipe and binary packages to a remote.
96 export Copies the recipe (conanfile.py & associated files) to your local cache.
97 export-pkg Exports a recipe & creates a package with given files calling 'package'.
98 test Test a package, consuming it with a conanfile recipe with a test() method.
99 Package development commands
100 source Calls your local conanfile.py 'source()' method.
101 build Calls your local conanfile.py 'build()' method.
102 package Calls your local conanfile.py 'package()' method.
103 Misc commands
104 profile Lists profiles in the '.conan/profiles' folder, or shows profile details.
105 remote Manages the remote list and the package recipes associated to a remote.
106 user Authenticates against a remote with user/pass, caching the auth token.
107 imports Calls your local conanfile.py or conanfile.txt 'imports' method.
108 copy Copies conan recipes and packages to another user/channel.
109 remove Removes packages or binaries matching pattern from local cache or remote.
110 alias Creates and exports an 'alias recipe'.
111 download Downloads recipe and binaries to the local cache, without using settings.
112
113 Conan commands. Type "conan <command> -h" for help
114
115 Contributing to the project
116 ===========================
117
118 Feedback and contributions are always welcome in this project.
119 Please read our `contributing guide <https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md>`_.
120
121 Running the tests
122 =================
123
124 Using tox
125 ---------
126
127 .. code-block:: bash
128
129 $ tox
130
131 It will install the needed requirements and launch ``nose``, skipping some heavy and slow tests.
132 If you want to run the full test suite:
133
134 .. code-block:: bash
135
136 $ tox -e full
137
138 Without tox
139 -----------
140
141 **Install python requirements**
142
143 .. code-block:: bash
144
145 $ pip install -r conans/requirements.txt
146 $ pip install -r conans/requirements_server.txt
147 $ pip install -r conans/requirements_dev.txt
148
149
150 Only in OSX:
151
152 .. code-block:: bash
153
154 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX
155
156
157 If you are not on Windows and you are not using a Python virtual environment, you will need to run these
158 commands using ``sudo``.
159
160 Before you can run the tests, you need to set a few environment variables first.
161
162 .. code-block:: bash
163
164 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
165
166 On Windows it would be (while being in the conan root directory):
167
168 .. code-block:: bash
169
170 $ set PYTHONPATH=.
171
172 Ensure that your ``cmake`` has version 2.8 or later. You can see the
173 version with the following command:
174
175 .. code-block:: bash
176
177 $ cmake --version
178
179 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your
180 operating system and your requirements.
181
182 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
183
184 .. code-block:: bash
185
186 $ export CONAN_COMPILER=gcc
187 $ export CONAN_COMPILER_VERSION=4.8
188
189 These should work for OS X:
190
191 .. code-block:: bash
192
193 $ export CONAN_COMPILER=clang
194 $ export CONAN_COMPILER_VERSION=3.5
195
196 Finally, there are some tests that use conan to package Go-lang
197 libraries, so you might **need to install go-lang** on your computer and
198 add it to the path.
199
200 You can run the actual tests like this:
201
202 .. code-block:: bash
203
204 $ nosetests .
205
206
207 There are a couple of test attributes defined, such as ``slow`` or ``golang``, that you can use
208 to filter the tests and skip executing them:
209
210 .. code-block:: bash
211
212 $ nosetests . -a !golang
213
214 A few minutes later it should print ``OK``:
215
216 .. code-block:: bash
217
218 ............................................................................................
219 ----------------------------------------------------------------------
220 Ran 146 tests in 50.993s
221
222 OK
223
224 To run specific tests, you can specify the test name too, something like:
225
226 .. code-block:: bash
227
228 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture
229
230 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests.
231
232 License
233 -------
234
235 `MIT LICENSE <./LICENSE.md>`__
236
237 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master
238 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master
239
240 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop
241 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop
242
243 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg
244 :target: https://codecov.io/gh/conan-io/conan/branch/master
245
246 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg
247 :target: https://codecov.io/gh/conan-io/conan/branch/develop
248
249 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg
250 :height: 50px
251 :width: 50 px
252 :alt: Conan develop coverage
253
254 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
255
256 .. _`brew homepage`: http://brew.sh/
257
[end of README.rst]
[start of conans/client/build/cmake.py]
1 import os
2 import platform
3 import subprocess
4 from itertools import chain
5
6 from conans.client import defs_to_string, join_arguments, tools
7 from conans.client.build.cmake_flags import CMakeDefinitionsBuilder, \
8 get_generator, is_multi_configuration, verbose_definition, verbose_definition_name, \
9 cmake_install_prefix_var_name, get_toolset, build_type_definition, \
10 cmake_in_local_cache_var_name, runtime_definition_var_name
11 from conans.errors import ConanException
12 from conans.model.conan_file import ConanFile
13 from conans.model.version import Version
14 from conans.client.tools.oss import cpu_count, args_to_string
15 from conans.util.config_parser import get_bool_from_text
16 from conans.util.files import mkdir, get_abs_path, walk, decode_text
17
18
19 class CMake(object):
20
21 def __init__(self, conanfile, generator=None, cmake_system_name=True,
22 parallel=True, build_type=None, toolset=None, make_program=None,
23 set_cmake_flags=False):
24 """
25 :param conanfile: Conanfile instance
26 :param generator: Generator name to use or none to autodetect
27 :param cmake_system_name: False to not use CMAKE_SYSTEM_NAME variable,
28 True for auto-detect or directly a string with the system name
29 :param parallel: Try to build with multiple cores if available
30 :param build_type: Overrides default build type coming from settings
31 :param toolset: Toolset name to use (such as llvm-vs2014) or none for default one,
32 applies only to certain generators (e.g. Visual Studio)
33 :param set_cmake_flags: whether or not to set CMake flags like CMAKE_CXX_FLAGS, CMAKE_C_FLAGS, etc.
34 it's vital to set for certain projects (e.g. using CMAKE_SIZEOF_VOID_P or CMAKE_LIBRARY_ARCHITECTURE)
35 """
36 if not isinstance(conanfile, ConanFile):
37 raise ConanException("First argument of CMake() has to be ConanFile. Use CMake(self)")
38
39 self._conanfile = conanfile
40 self._settings = conanfile.settings
41 self._build_type = build_type or conanfile.settings.get_safe("build_type")
42
43 self.generator = generator or get_generator(conanfile.settings)
44 self.parallel = parallel
45 # Initialize definitions (won't be updated if conanfile or any of these variables change)
46 builder = CMakeDefinitionsBuilder(self._conanfile,
47 cmake_system_name=cmake_system_name,
48 make_program=make_program, parallel=parallel,
49 generator=self.generator, set_cmake_flags=set_cmake_flags,
50 forced_build_type=build_type,
51 output=self._conanfile.output)
52 # FIXME CONAN 2.0: CMake() interface should be always the constructor and self.definitions.
53 # FIXME CONAN 2.0: Avoid properties and attributes to make the user interface more clear
54
55 self.definitions = builder.get_definitions()
56 self.toolset = toolset or get_toolset(self._settings)
57 self.build_dir = None
58
59 @property
60 def build_folder(self):
61 return self.build_dir
62
63 @build_folder.setter
64 def build_folder(self, value):
65 self.build_dir = value
66
67 @property
68 def build_type(self):
69 return self._build_type
70
71 @build_type.setter
72 def build_type(self, build_type):
73 settings_build_type = self._settings.get_safe("build_type")
74 if build_type != settings_build_type:
75 self._conanfile.output.warn("Forced CMake build type ('%s') different from the settings"
76 " build type ('%s')" % (build_type, settings_build_type))
77 self.definitions.pop("CMAKE_BUILD_TYPE", None)
78 self.definitions.update(build_type_definition(build_type, self.generator))
79 self._build_type = build_type
80
81 @property
82 def in_local_cache(self):
83 try:
84 in_local_cache = self.definitions[cmake_in_local_cache_var_name]
85 return get_bool_from_text(str(in_local_cache))
86 except KeyError:
87 return False
88
89 @property
90 def runtime(self):
91 return defs_to_string(self.definitions.get(runtime_definition_var_name))
92
93 @property
94 def flags(self):
95 return defs_to_string(self.definitions)
96
97 @property
98 def is_multi_configuration(self):
99 return is_multi_configuration(self.generator)
100
101 @property
102 def command_line(self):
103 args = ['-G "%s"' % self.generator] if self.generator else []
104 args.append(self.flags)
105 args.append('-Wno-dev')
106
107 if self.toolset:
108 args.append('-T "%s"' % self.toolset)
109 return join_arguments(args)
110
111 @property
112 def build_config(self):
113 """ cmake --build tool have a --config option for Multi-configuration IDEs
114 """
115 if self._build_type and self.is_multi_configuration:
116 return "--config %s" % self._build_type
117 return ""
118
119 def _get_dirs(self, source_folder, build_folder, source_dir, build_dir, cache_build_folder):
120 if (source_folder or build_folder) and (source_dir or build_dir):
121 raise ConanException("Use 'build_folder'/'source_folder' arguments")
122
123 def get_dir(folder, origin):
124 if folder:
125 if os.path.isabs(folder):
126 return folder
127 return os.path.join(origin, folder)
128 return origin
129
130 if source_dir or build_dir: # OLD MODE
131 build_ret = build_dir or self.build_dir or self._conanfile.build_folder
132 source_ret = source_dir or self._conanfile.source_folder
133 else:
134 build_ret = get_dir(build_folder, self._conanfile.build_folder)
135 source_ret = get_dir(source_folder, self._conanfile.source_folder)
136
137 if self._conanfile.in_local_cache and cache_build_folder:
138 build_ret = get_dir(cache_build_folder, self._conanfile.build_folder)
139
140 return source_ret, build_ret
141
142 def _run(self, command):
143 compiler = self._settings.get_safe("compiler")
144 the_os = self._settings.get_safe("os")
145 is_clangcl = the_os == "Windows" and compiler == "clang"
146 is_msvc = compiler == "Visual Studio"
147 if (is_msvc or is_clangcl) and self.generator in ["Ninja", "NMake Makefiles",
148 "NMake Makefiles JOM"]:
149 with tools.vcvars(self._settings, force=True, filter_known_paths=False,
150 output=self._conanfile.output):
151 self._conanfile.run(command)
152 else:
153 self._conanfile.run(command)
154
155 def configure(self, args=None, defs=None, source_dir=None, build_dir=None,
156 source_folder=None, build_folder=None, cache_build_folder=None,
157 pkg_config_paths=None):
158
159 # TODO: Deprecate source_dir and build_dir in favor of xxx_folder
160 if not self._conanfile.should_configure:
161 return
162 args = args or []
163 defs = defs or {}
164 source_dir, self.build_dir = self._get_dirs(source_folder, build_folder,
165 source_dir, build_dir,
166 cache_build_folder)
167 mkdir(self.build_dir)
168 arg_list = join_arguments([
169 self.command_line,
170 args_to_string(args),
171 defs_to_string(defs),
172 args_to_string([source_dir])
173 ])
174
175 if pkg_config_paths:
176 pkg_env = {"PKG_CONFIG_PATH":
177 os.pathsep.join(get_abs_path(f, self._conanfile.install_folder)
178 for f in pkg_config_paths)}
179 else:
180 # If we are using pkg_config generator automate the pcs location, otherwise it could
181 # read wrong files
182 set_env = "pkg_config" in self._conanfile.generators \
183 and "PKG_CONFIG_PATH" not in os.environ
184 pkg_env = {"PKG_CONFIG_PATH": self._conanfile.install_folder} if set_env else {}
185
186 with tools.environment_append(pkg_env):
187 command = "cd %s && cmake %s" % (args_to_string([self.build_dir]), arg_list)
188 if platform.system() == "Windows" and self.generator == "MinGW Makefiles":
189 with tools.remove_from_path("sh"):
190 self._run(command)
191 else:
192 self._run(command)
193
194 def build(self, args=None, build_dir=None, target=None):
195 if not self._conanfile.should_build:
196 return
197 self._build(args, build_dir, target)
198
199 def _build(self, args=None, build_dir=None, target=None):
200 args = args or []
201 build_dir = build_dir or self.build_dir or self._conanfile.build_folder
202 if target is not None:
203 args = ["--target", target] + args
204
205 if self.generator and self.parallel:
206 compiler_version = self._settings.get_safe("compiler.version")
207 if "Makefiles" in self.generator and "NMake" not in self.generator:
208 if "--" not in args:
209 args.append("--")
210 args.append("-j%i" % cpu_count(self._conanfile.output))
211 elif "Visual Studio" in self.generator and \
212 compiler_version and Version(compiler_version) >= "10":
213 if "--" not in args:
214 args.append("--")
215 # Parallel for building projects in the solution
216 args.append("/m:%i" % cpu_count(output=self._conanfile.output))
217
218 arg_list = join_arguments([
219 args_to_string([build_dir]),
220 self.build_config,
221 args_to_string(args)
222 ])
223 command = "cmake --build %s" % arg_list
224 self._run(command)
225
226 def install(self, args=None, build_dir=None):
227 if not self._conanfile.should_install:
228 return
229 mkdir(self._conanfile.package_folder)
230 if not self.definitions.get(cmake_install_prefix_var_name):
231 raise ConanException("%s not defined for 'cmake.install()'\n"
232 "Make sure 'package_folder' is "
233 "defined" % cmake_install_prefix_var_name)
234 self._build(args=args, build_dir=build_dir, target="install")
235
236 def test(self, args=None, build_dir=None, target=None):
237 if not self._conanfile.should_test:
238 return
239 if not target:
240 target = "RUN_TESTS" if self.is_multi_configuration else "test"
241 self._build(args=args, build_dir=build_dir, target=target)
242
243 @property
244 def verbose(self):
245 try:
246 verbose = self.definitions[verbose_definition_name]
247 return get_bool_from_text(str(verbose))
248 except KeyError:
249 return False
250
251 @verbose.setter
252 def verbose(self, value):
253 self.definitions.update(verbose_definition(value))
254
255 def patch_config_paths(self):
256 """
257 changes references to the absolute path of the installed package and its dependencies in
258 exported cmake config files to the appropriate conan variable. This makes
259 most (sensible) cmake config files portable.
260
261 For example, if a package foo installs a file called "fooConfig.cmake" to
262 be used by cmake's find_package method, normally this file will contain
263 absolute paths to the installed package folder, for example it will contain
264 a line such as:
265
266 SET(Foo_INSTALL_DIR /home/developer/.conan/data/Foo/1.0.0/...)
267
268 This will cause cmake find_package() method to fail when someone else
269 installs the package via conan.
270
271 This function will replace such mentions to
272
273 SET(Foo_INSTALL_DIR ${CONAN_FOO_ROOT})
274
275 which is a variable that is set by conanbuildinfo.cmake, so that find_package()
276 now correctly works on this conan package.
277
278 For dependent packages, if a package foo installs a file called "fooConfig.cmake" to
279 be used by cmake's find_package method and if it depends on a package bar,
280 normally this file will contain absolute paths to the bar package folder,
281 for example it will contain a line such as:
282
283 SET_TARGET_PROPERTIES(foo PROPERTIES
284 INTERFACE_INCLUDE_DIRECTORIES
285 "/home/developer/.conan/data/Bar/1.0.0/user/channel/id/include")
286
287 This function will replace such mentions to
288
289 SET_TARGET_PROPERTIES(foo PROPERTIES
290 INTERFACE_INCLUDE_DIRECTORIES
291 "${CONAN_BAR_ROOT}/include")
292
293 If the install() method of the CMake object in the conan file is used, this
294 function should be called _after_ that invocation. For example:
295
296 def build(self):
297 cmake = CMake(self)
298 cmake.configure()
299 cmake.build()
300 cmake.install()
301 cmake.patch_config_paths()
302 """
303
304 if not self._conanfile.should_install:
305 return
306 if not self._conanfile.name:
307 raise ConanException("cmake.patch_config_paths() can't work without package name. "
308 "Define name in your recipe")
309 pf = self.definitions.get(cmake_install_prefix_var_name)
310 replstr = "${CONAN_%s_ROOT}" % self._conanfile.name.upper()
311 allwalk = chain(walk(self._conanfile.build_folder), walk(self._conanfile.package_folder))
312 for root, _, files in allwalk:
313 for f in files:
314 if f.endswith(".cmake"):
315 path = os.path.join(root, f)
316 tools.replace_path_in_file(path, pf, replstr, strict=False,
317 output=self._conanfile.output)
318
319 # patch paths of dependent packages that are found in any cmake files of the
320 # current package
321 for dep in self._conanfile.deps_cpp_info.deps:
322 from_str = self._conanfile.deps_cpp_info[dep].rootpath
323 dep_str = "${CONAN_%s_ROOT}" % dep.upper()
324 ret = tools.replace_path_in_file(path, from_str, dep_str, strict=False,
325 output=self._conanfile.output)
326 if ret:
327 self._conanfile.output.info("Patched paths for %s: %s to %s"
328 % (dep, from_str, dep_str))
329
330 @staticmethod
331 def get_version():
332 try:
333 out, _ = subprocess.Popen(["cmake", "--version"], stdout=subprocess.PIPE).communicate()
334 version_line = decode_text(out).split('\n', 1)[0]
335 version_str = version_line.rsplit(' ', 1)[-1]
336 return Version(version_str)
337 except Exception as e:
338 raise ConanException("Error retrieving CMake version: '{}'".format(e))
339
[end of conans/client/build/cmake.py]
[start of conans/client/graph/graph_manager.py]
1 import fnmatch
2 import os
3 from collections import OrderedDict
4
5 from conans.client.generators.text import TXTGenerator
6 from conans.client.graph.build_mode import BuildMode
7 from conans.client.graph.graph import BINARY_BUILD, BINARY_WORKSPACE, Node,\
8 RECIPE_CONSUMER, RECIPE_VIRTUAL
9 from conans.client.graph.graph_binaries import GraphBinariesAnalyzer
10 from conans.client.graph.graph_builder import DepsGraphBuilder
11 from conans.client.loader import ProcessedProfile
12 from conans.errors import ConanException, conanfile_exception_formatter
13 from conans.model.conan_file import get_env_context_manager
14 from conans.model.graph_info import GraphInfo
15 from conans.model.ref import ConanFileReference
16 from conans.paths import BUILD_INFO
17 from conans.util.files import load
18
19
20 class _RecipeBuildRequires(OrderedDict):
21 def __init__(self, conanfile):
22 super(_RecipeBuildRequires, self).__init__()
23 build_requires = getattr(conanfile, "build_requires", [])
24 if not isinstance(build_requires, (list, tuple)):
25 build_requires = [build_requires]
26 for build_require in build_requires:
27 self.add(build_require)
28
29 def add(self, build_require):
30 if not isinstance(build_require, ConanFileReference):
31 build_require = ConanFileReference.loads(build_require)
32 self[build_require.name] = build_require
33
34 def __call__(self, build_require):
35 self.add(build_require)
36
37 def update(self, build_requires):
38 for build_require in build_requires:
39 self.add(build_require)
40
41 def __str__(self):
42 return ", ".join(str(r) for r in self.values())
43
44
45 class GraphManager(object):
46 def __init__(self, output, client_cache, remote_manager, loader, proxy, resolver):
47 self._proxy = proxy
48 self._output = output
49 self._resolver = resolver
50 self._client_cache = client_cache
51 self._remote_manager = remote_manager
52 self._loader = loader
53
54 def load_consumer_conanfile(self, conanfile_path, info_folder,
55 deps_info_required=False, test=None):
56 """loads a conanfile for local flow: source, imports, package, build
57 """
58 try:
59 graph_info = GraphInfo.load(info_folder)
60 except IOError: # Only if file is missing
61 # This is very dirty, should be removed for Conan 2.0 (source() method only)
62 profile = self._client_cache.default_profile
63 profile.process_settings(self._client_cache)
64 else:
65 profile = graph_info.profile
66 profile.process_settings(self._client_cache, preprocess=False)
67 # This is the hack of recovering the options from the graph_info
68 profile.options.update(graph_info.options)
69 processed_profile = ProcessedProfile(profile, None)
70 if conanfile_path.endswith(".py"):
71 conanfile = self._loader.load_consumer(conanfile_path,
72 processed_profile=processed_profile, test=test)
73 with get_env_context_manager(conanfile, without_python=True):
74 with conanfile_exception_formatter(str(conanfile), "config_options"):
75 conanfile.config_options()
76 with conanfile_exception_formatter(str(conanfile), "configure"):
77 conanfile.configure()
78
79 conanfile.settings.validate() # All has to be ok!
80 conanfile.options.validate()
81 else:
82 conanfile = self._loader.load_conanfile_txt(conanfile_path, processed_profile)
83
84 load_deps_info(info_folder, conanfile, required=deps_info_required)
85
86 return conanfile
87
88 def load_simple_graph(self, reference, profile, recorder):
89 # Loads a graph without computing the binaries. It is necessary for
90 # export-pkg command, not hitting the server
91 # # https://github.com/conan-io/conan/issues/3432
92 builder = DepsGraphBuilder(self._proxy, self._output, self._loader, self._resolver,
93 workspace=None, recorder=recorder)
94 processed_profile = ProcessedProfile(profile, create_reference=None)
95 conanfile = self._loader.load_virtual([reference], processed_profile)
96 root_node = Node(None, conanfile, recipe=RECIPE_VIRTUAL)
97 graph = builder.load_graph(root_node, check_updates=False, update=False, remote_name=None,
98 processed_profile=processed_profile)
99 return graph
100
101 def load_graph(self, reference, create_reference, graph_info, build_mode, check_updates, update,
102 remote_name, recorder, workspace):
103
104 def _inject_require(conanfile, reference):
105 """ test_package functionality requires injecting the tested package as requirement
106 before running the install
107 """
108 require = conanfile.requires.get(reference.name)
109 if require:
110 require.conan_reference = require.range_reference = reference
111 else:
112 conanfile.requires(str(reference))
113 conanfile._conan_user = reference.user
114 conanfile._conan_channel = reference.channel
115
116 # Computing the full dependency graph
117 profile = graph_info.profile
118 processed_profile = ProcessedProfile(profile, create_reference)
119 conan_ref = None
120 if isinstance(reference, list): # Install workspace with multiple root nodes
121 conanfile = self._loader.load_virtual(reference, processed_profile)
122 root_node = Node(conan_ref, conanfile, recipe=RECIPE_VIRTUAL)
123 elif isinstance(reference, ConanFileReference):
124 # create without test_package and install <ref>
125 conanfile = self._loader.load_virtual([reference], processed_profile)
126 root_node = Node(conan_ref, conanfile, recipe=RECIPE_VIRTUAL)
127 else:
128 if reference.endswith(".py"):
129 test = str(create_reference) if create_reference else None
130 conanfile = self._loader.load_consumer(reference, processed_profile, test=test)
131 if create_reference: # create with test_package
132 _inject_require(conanfile, create_reference)
133 conan_ref = ConanFileReference(conanfile.name, conanfile.version, None, None,
134 validate=False)
135 else:
136 conanfile = self._loader.load_conanfile_txt(reference, processed_profile)
137 root_node = Node(conan_ref, conanfile, recipe=RECIPE_CONSUMER)
138
139 build_mode = BuildMode(build_mode, self._output)
140 deps_graph = self._load_graph(root_node, check_updates, update,
141 build_mode=build_mode, remote_name=remote_name,
142 profile_build_requires=profile.build_requires,
143 recorder=recorder, workspace=workspace,
144 processed_profile=processed_profile)
145
146 # THIS IS NECESSARY to store dependencies options in profile, for consumer
147 # FIXME: This is a hack. Might disappear if the graph for local commands is always recomputed
148 graph_info.options = root_node.conanfile.options.values
149
150 version_ranges_output = self._resolver.output
151 if version_ranges_output:
152 self._output.success("Version ranges solved")
153 for msg in version_ranges_output:
154 self._output.info(" %s" % msg)
155 self._output.writeln("")
156
157 build_mode.report_matches()
158 return deps_graph, conanfile
159
160 @staticmethod
161 def _get_recipe_build_requires(conanfile):
162 conanfile.build_requires = _RecipeBuildRequires(conanfile)
163 if hasattr(conanfile, "build_requirements"):
164 with get_env_context_manager(conanfile):
165 with conanfile_exception_formatter(str(conanfile), "build_requirements"):
166 conanfile.build_requirements()
167
168 return conanfile.build_requires
169
170 def _recurse_build_requires(self, graph, check_updates, update, build_mode, remote_name,
171 profile_build_requires, recorder, workspace, processed_profile):
172 for node in list(graph.nodes):
173 # Virtual conanfiles don't have output, but conanfile.py and conanfile.txt do
174 # FIXME: To be improved and build an explicit model for this
175 if node.recipe == RECIPE_VIRTUAL:
176 continue
177 if (node.binary not in (BINARY_BUILD, BINARY_WORKSPACE) and
178 node.recipe != RECIPE_CONSUMER):
179 continue
180 package_build_requires = self._get_recipe_build_requires(node.conanfile)
181 str_ref = str(node.conan_ref)
182 new_profile_build_requires = OrderedDict()
183 profile_build_requires = profile_build_requires or {}
184 for pattern, build_requires in profile_build_requires.items():
185 if ((node.recipe == RECIPE_CONSUMER and pattern == "&") or
186 (node.recipe != RECIPE_CONSUMER and pattern == "&!") or
187 fnmatch.fnmatch(str_ref, pattern)):
188 for build_require in build_requires:
189 if build_require.name in package_build_requires: # Override existing
190 package_build_requires[build_require.name] = build_require
191 else: # Profile one
192 new_profile_build_requires[build_require.name] = build_require
193
194 if package_build_requires:
195 node.conanfile.build_requires_options.clear_unscoped_options()
196 build_requires_options = node.conanfile.build_requires_options
197 virtual = self._loader.load_virtual(package_build_requires.values(),
198 scope_options=False,
199 build_requires_options=build_requires_options,
200 processed_profile=processed_profile)
201 virtual_node = Node(None, virtual, recipe=RECIPE_VIRTUAL)
202 build_requires_package_graph = self._load_graph(virtual_node, check_updates, update,
203 build_mode, remote_name,
204 profile_build_requires,
205 recorder, workspace,
206 processed_profile)
207 graph.add_graph(node, build_requires_package_graph, build_require=True)
208
209 if new_profile_build_requires:
210 node.conanfile.build_requires_options.clear_unscoped_options()
211 build_requires_options = node.conanfile.build_requires_options
212 virtual = self._loader.load_virtual(new_profile_build_requires.values(),
213 scope_options=False,
214 build_requires_options=build_requires_options,
215 processed_profile=processed_profile)
216 virtual_node = Node(None, virtual, recipe=RECIPE_VIRTUAL)
217 build_requires_profile_graph = self._load_graph(virtual_node, check_updates, update,
218 build_mode, remote_name,
219 new_profile_build_requires,
220 recorder, workspace,
221 processed_profile)
222 graph.add_graph(node, build_requires_profile_graph, build_require=True)
223
224 def _load_graph(self, root_node, check_updates, update, build_mode, remote_name,
225 profile_build_requires, recorder, workspace, processed_profile):
226 builder = DepsGraphBuilder(self._proxy, self._output, self._loader, self._resolver,
227 workspace, recorder)
228 graph = builder.load_graph(root_node, check_updates, update, remote_name, processed_profile)
229 if build_mode is None:
230 return graph
231 binaries_analyzer = GraphBinariesAnalyzer(self._client_cache, self._output,
232 self._remote_manager, workspace)
233 binaries_analyzer.evaluate_graph(graph, build_mode, update, remote_name)
234
235 self._recurse_build_requires(graph, check_updates, update, build_mode, remote_name,
236 profile_build_requires, recorder, workspace, processed_profile)
237 return graph
238
239
240 def load_deps_info(current_path, conanfile, required):
241
242 def get_forbidden_access_object(field_name):
243 class InfoObjectNotDefined(object):
244 def __getitem__(self, item):
245 raise ConanException("self.%s not defined. If you need it for a "
246 "local command run 'conan install'" % field_name)
247 __getattr__ = __getitem__
248
249 return InfoObjectNotDefined()
250
251 if not current_path:
252 return
253 info_file_path = os.path.join(current_path, BUILD_INFO)
254 try:
255 deps_cpp_info, deps_user_info, deps_env_info = TXTGenerator.loads(load(info_file_path))
256 conanfile.deps_cpp_info = deps_cpp_info
257 conanfile.deps_user_info = deps_user_info
258 conanfile.deps_env_info = deps_env_info
259 except IOError:
260 if required:
261 raise ConanException("%s file not found in %s\nIt is required for this command\n"
262 "You can generate it using 'conan install'"
263 % (BUILD_INFO, current_path))
264 conanfile.deps_cpp_info = get_forbidden_access_object("deps_cpp_info")
265 conanfile.deps_user_info = get_forbidden_access_object("deps_user_info")
266 except ConanException:
267 raise ConanException("Parse error in '%s' file in %s" % (BUILD_INFO, current_path))
268
[end of conans/client/graph/graph_manager.py]
[start of conans/client/graph/range_resolver.py]
1 import re
2
3 from conans.errors import ConanException
4 from conans.model.ref import ConanFileReference
5 from conans.search.search import search_recipes
6
7 re_param = re.compile(r"^(?P<function>include_prerelease|loose)\s*=\s*(?P<value>True|False)$")
8 re_version = re.compile(r"^((?!(include_prerelease|loose))[a-zA-Z0-9_+.\-~<>=|*^\s])*$")
9
10
11 def _parse_versionexpr(versionexpr, result):
12 expression = [it.strip() for it in versionexpr.split(",")]
13 if len(expression) > 4:
14 raise ConanException("Invalid expression for version_range '{}'".format(versionexpr))
15
16 include_prerelease = False
17 loose = True
18 version_range = []
19
20 for i, expr in enumerate(expression):
21 match_param = re_param.match(expr)
22 match_version = re_version.match(expr)
23
24 if match_param == match_version:
25 raise ConanException("Invalid version range '{}', failed in "
26 "chunk '{}'".format(versionexpr, expr))
27
28 if match_version and i not in [0, 1]:
29 raise ConanException("Invalid version range '{}'".format(versionexpr))
30
31 if match_param and i not in [1, 2, 3]:
32 raise ConanException("Invalid version range '{}'".format(versionexpr))
33
34 if match_version:
35 version_range.append(expr)
36
37 if match_param:
38 if match_param.group('function') == 'loose':
39 loose = match_param.group('value') == "True"
40 elif match_param.group('function') == 'include_prerelease':
41 include_prerelease = match_param.group('value') == "True"
42 else:
43 raise ConanException("Unexpected version range "
44 "parameter '{}'".format(match_param.group(1)))
45
46 if len(version_range) > 1:
47 result.append("WARN: Commas as separator in version '%s' range are deprecated "
48 "and will be removed in Conan 2.0" % str(versionexpr))
49
50 version_range = " ".join(map(str, version_range))
51 return version_range, loose, include_prerelease
52
53
54 def satisfying(list_versions, versionexpr, result):
55 """ returns the maximum version that satisfies the expression
56 if some version cannot be converted to loose SemVer, it is discarded with a msg
57 This provides some workaround for failing comparisons like "2.1" not matching "<=2.1"
58 """
59 from semver import SemVer, Range, max_satisfying
60
61 version_range, loose, include_prerelease = _parse_versionexpr(versionexpr, result)
62
63 # Check version range expression
64 try:
65 act_range = Range(version_range, loose)
66 except ValueError:
67 raise ConanException("version range expression '%s' is not valid" % version_range)
68
69 # Validate all versions
70 candidates = {}
71 for v in list_versions:
72 try:
73 ver = SemVer(v, loose=loose)
74 candidates[ver] = v
75 except (ValueError, AttributeError):
76 result.append("WARN: Version '%s' is not semver, cannot be compared with a range"
77 % str(v))
78
79 # Search best matching version in range
80 result = max_satisfying(candidates, act_range, loose=loose,
81 include_prerelease=include_prerelease)
82 return candidates.get(result)
83
84
85 class RangeResolver(object):
86
87 def __init__(self, client_cache, remote_search):
88 self._client_cache = client_cache
89 self._remote_search = remote_search
90 self._cached_remote_found = {}
91 self._result = []
92
93 @property
94 def output(self):
95 result = self._result
96 self._result = []
97 return result
98
99 def resolve(self, require, base_conanref, update, remote_name):
100 version_range = require.version_range
101 if version_range is None:
102 return
103
104 if require.is_resolved:
105 ref = require.conan_reference
106 resolved = self._resolve_version(version_range, [ref])
107 if not resolved:
108 raise ConanException("Version range '%s' required by '%s' not valid for "
109 "downstream requirement '%s'"
110 % (version_range, base_conanref, str(ref)))
111 else:
112 self._result.append("Version range '%s' required by '%s' valid for "
113 "downstream requirement '%s'"
114 % (version_range, base_conanref, str(ref)))
115 return
116
117 ref = require.conan_reference
118 # The search pattern must be a string
119 search_ref = str(ConanFileReference(ref.name, "*", ref.user, ref.channel))
120
121 if update:
122 resolved = (self._resolve_remote(search_ref, version_range, remote_name) or
123 self._resolve_local(search_ref, version_range))
124 else:
125 resolved = (self._resolve_local(search_ref, version_range) or
126 self._resolve_remote(search_ref, version_range, remote_name))
127
128 if resolved:
129 self._result.append("Version range '%s' required by '%s' resolved to '%s'"
130 % (version_range, base_conanref, str(resolved)))
131 require.conan_reference = resolved
132 else:
133 raise ConanException("Version range '%s' from requirement '%s' required by '%s' "
134 "could not be resolved" % (version_range, require, base_conanref))
135
136 def _resolve_local(self, search_ref, version_range):
137 local_found = search_recipes(self._client_cache, search_ref)
138 if local_found:
139 return self._resolve_version(version_range, local_found)
140
141 def _resolve_remote(self, search_ref, version_range, remote_name):
142 remote_cache = self._cached_remote_found.setdefault(remote_name, {})
143 # We should use ignorecase=False, we want the exact case!
144 remote_found = remote_cache.get(search_ref)
145 if remote_found is None:
146 remote_found = self._remote_search.search_remotes(search_ref, remote_name)
147 # We don't want here to resolve the revision that should be done in the proxy
148 # as any other regular flow
149 remote_found = [ref.copy_clear_rev() for ref in remote_found or []]
150 # Empty list, just in case it returns None
151 remote_cache[search_ref] = remote_found
152 if remote_found:
153 return self._resolve_version(version_range, remote_found)
154
155 def _resolve_version(self, version_range, refs_found):
156 versions = {ref.version: ref for ref in refs_found}
157 result = satisfying(versions, version_range, self._result)
158 return versions.get(result)
159
[end of conans/client/graph/range_resolver.py]
[start of conans/client/tools/pkg_config.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import subprocess
5
6 from conans.errors import ConanException
7
8
9 class PkgConfig(object):
10 @staticmethod
11 def _cmd_output(command):
12 return subprocess.check_output(command).decode().strip()
13
14 def __init__(self, library, pkg_config_executable='pkg-config', static=False, msvc_syntax=False, variables=None,
15 print_errors=True):
16 """
17 :param library: library (package) name, such as libastral
18 :param pkg_config_executable: specify custom pkg-config executable (e.g. for cross-compilation)
19 :param static: output libraries suitable for static linking (adds --static to pkg-config command line)
20 :param msvc_syntax: MSVC compatibility (adds --msvc-syntax to pkg-config command line)
21 :param variables: dictionary of pkg-config variables (passed as --define-variable=VARIABLENAME=VARIABLEVALUE)
22 :param print_errors: output error messages (adds --print-errors)
23 """
24 self.library = library
25 self.pkg_config_executable = pkg_config_executable
26 self.static = static
27 self.msvc_syntax = msvc_syntax
28 self.define_variables = variables
29 self.print_errors = print_errors
30
31 self._variables = dict()
32 self.info = dict()
33
34 def _parse_output(self, option):
35 command = [self.pkg_config_executable, '--' + option, self.library]
36 if self.static:
37 command.append('--static')
38 if self.msvc_syntax:
39 command.append('--msvc-syntax')
40 if self.print_errors:
41 command.append('--print-errors')
42 if self.define_variables:
43 for name, value in self.define_variables.items():
44 command.append('--define-variable=%s=%s' % (name, value))
45 try:
46 return self._cmd_output(command)
47 except subprocess.CalledProcessError as e:
48 raise ConanException('pkg-config command %s failed with error: %s' % (command, e))
49
50 def _get_option(self, option):
51 if option not in self.info:
52 self.info[option] = self._parse_output(option).split()
53 return self.info[option]
54
55 @property
56 def cflags(self):
57 return self._get_option('cflags')
58
59 @property
60 def cflags_only_I(self):
61 return self._get_option('cflags-only-I')
62
63 @property
64 def cflags_only_other(self):
65 return self._get_option('cflags-only-other')
66
67 @property
68 def libs(self):
69 return self._get_option('libs')
70
71 @property
72 def libs_only_L(self):
73 return self._get_option('libs-only-L')
74
75 @property
76 def libs_only_l(self):
77 return self._get_option('libs-only-l')
78
79 @property
80 def libs_only_other(self):
81 return self._get_option('libs-only-other')
82
83 @property
84 def provides(self):
85 return self._get_option('print-provides')
86
87 @property
88 def requires(self):
89 return self._get_option('print-requires')
90
91 @property
92 def requires_private(self):
93 return self._get_option('print-requires-private')
94
95 @property
96 def variables(self):
97 if not self._variables:
98 variable_names = self._parse_output('print-variables').split()
99 for name in variable_names:
100 self._variables[name] = self._parse_output('variable=%s' % name)
101 return self._variables
102
[end of conans/client/tools/pkg_config.py]
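A hedged usage sketch of the class above. It assumes `pkg-config` and a `zlib.pc` file are available on the system; the import path simply mirrors the file location shown here:

```python
from conans.client.tools.pkg_config import PkgConfig

pkg = PkgConfig("zlib")
print(pkg.cflags)        # e.g. ['-I/usr/include'] (flattened token list)
print(pkg.libs_only_l)   # e.g. ['-lz']
print(pkg.variables)     # dict such as {'prefix': '/usr', ...}
```

Each property shells out to `pkg-config` at most once per option, thanks to the `self.info` cache in `_get_option`.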
[start of setup.py]
1 """A setuptools based setup module.
2 See:
3 https://packaging.python.org/en/latest/distributing.html
4 https://github.com/pypa/sampleproject
5 """
6
7 import os
8 import platform
9 import re
10 # To use a consistent encoding
11 from codecs import open
12 from os import path
13
14 # Always prefer setuptools over distutils
15 from setuptools import find_packages, setup
16
17 here = path.abspath(path.dirname(__file__))
18
19
20 def get_requires(filename):
21 requirements = []
22 with open(filename, "rt") as req_file:
23 for line in req_file.read().splitlines():
24 if not line.strip().startswith("#"):
25 requirements.append(line)
26 return requirements
27
28
29 project_requirements = get_requires("conans/requirements.txt")
30 if platform.system() == "Darwin":
31 project_requirements.extend(get_requires("conans/requirements_osx.txt"))
32 project_requirements.extend(get_requires("conans/requirements_server.txt"))
33 dev_requirements = get_requires("conans/requirements_dev.txt")
34
35
36 def load_version():
37 '''Loads a file content'''
38 filename = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)),
39 "conans", "__init__.py"))
40 with open(filename, "rt") as version_file:
41 conan_init = version_file.read()
42 version = re.search("__version__ = '([0-9a-z.-]+)'", conan_init).group(1)
43 return version
44
45
46 # def generate_long_description_file():
47 # import pypandoc
48 #
49 # output = pypandoc.convert('README.md', 'rst')
50 # return output
51
52 setup(
53 name='conan',
54 # Versions should comply with PEP440. For a discussion on single-sourcing
55 # the version across setup.py and the project code, see
56 # https://packaging.python.org/en/latest/single_source_version.html
57 version=load_version(), # + ".rc1",
58
59 description='Conan C/C++ package manager',
60 # long_description="An open source, decentralized package manager, to automate building and sharing of packages",
61 # long_description=generate_long_description_file(),
62
63 # The project's main homepage.
64 url='https://conan.io',
65
66 # Author details
67 author='JFrog LTD',
68 author_email='[email protected]',
69
70 # Choose your license
71 license='MIT',
72
73 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
74 classifiers=[
75 'Development Status :: 5 - Production/Stable',
76 'Intended Audience :: Developers',
77 'Topic :: Software Development :: Build Tools',
78 'License :: OSI Approved :: MIT License',
79 'Programming Language :: Python :: 2',
80 'Programming Language :: Python :: 2.7',
81 'Programming Language :: Python :: 3',
82 'Programming Language :: Python :: 3.6'
83 ],
84
85 # What does your project relate to?
86 keywords=['C/C++', 'package', 'libraries', 'developer', 'manager',
87 'dependency', 'tool', 'c', 'c++', 'cpp'],
88
89 # You can just specify the packages manually here if your project is
90 # simple. Or you can use find_packages().
91 packages=find_packages(),
92
93 # Alternatively, if you want to distribute just a my_module.py, uncomment
94 # this:
95 # py_modules=["my_module"],
96
97 # List run-time dependencies here. These will be installed by pip when
98 # your project is installed. For an analysis of "install_requires" vs pip's
99 # requirements files see:
100 # https://packaging.python.org/en/latest/requirements.html
101 install_requires=project_requirements,
102
103 # List additional groups of dependencies here (e.g. development
104 # dependencies). You can install these using the following syntax,
105 # for example:
106 # $ pip install -e .[dev,test]
107 extras_require={
108 'dev': dev_requirements,
109 'test': dev_requirements,
110 },
111
112 # If there are data files included in your packages that need to be
113 # installed, specify them here. If using Python 2.6 or less, then these
114 # have to be included in MANIFEST.in as well.
115 package_data={
116 'conans': ['*.txt'],
117 },
118
119 # Although 'package_data' is the preferred approach, in some case you may
120 # need to place data files outside of your packages. See:
121 # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa
122 # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
123 # data_files=[('my_data', ['data/data_file'])],
124
125 # To provide executable scripts, use entry points in preference to the
126 # "scripts" keyword. Entry points provide cross-platform support and allow
127 # pip to create the appropriate form of executable for the target platform.
128 entry_points={
129 'console_scripts': [
130 'conan=conans.conan:run',
131 'conan_server=conans.conan_server:run',
132 'conan_build_info=conans.build_info.command:run'
133 ],
134 },
135 )
136
[end of setup.py]
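A quick, standalone check of the regular expression that `load_version()` relies on (the sample string is illustrative, not the real `conans/__init__.py` content):

```python
import re

conan_init = "__version__ = '1.12.0-dev'"
version = re.search("__version__ = '([0-9a-z.-]+)'", conan_init).group(1)
print(version)  # 1.12.0-dev
```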
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: conan-io/conan
base_commit: 0e8fd48f409faa06ce468d7dae2d01da270dc95d
problem_statement:
conan install unexpectedly reports invalid version range
I have this requirement in my `conanfile.py`:
`foo/[>16.5.0 <17.0.0]@bar/stable`
`conan install` aborts with this error:
`ERROR: version range expression '>16.5.0<17.0.0' is not valid`
note: using the deprecated comma-based syntax works fine
* Python 3.6 on CentOS 7.5
* Conan 1.11.2
* node-semver 0.6.1
hints_text:
Reproduced, indeed an undesired bug. Thanks for reporting @mistafunk !
created_at: 2019-01-10T15:11:56Z
patch:
<patch>
diff --git a/conans/model/ref.py b/conans/model/ref.py
--- a/conans/model/ref.py
+++ b/conans/model/ref.py
@@ -78,7 +78,6 @@ class ConanFileReference(namedtuple("ConanFileReference", "name version user cha
""" Full reference of a package recipes, e.g.:
opencv/2.4.10@lasote/testing
"""
- whitespace_pattern = re.compile(r"\s+")
sep_pattern = re.compile(r"([^/]+)/([^/]+)@([^/]+)/([^/#]+)#?(.+)?")
def __new__(cls, name, version, user, channel, revision=None, validate=True):
@@ -107,7 +106,6 @@ def _validate(self):
def loads(text, validate=True):
""" Parses a text string to generate a ConanFileReference object
"""
- text = ConanFileReference.whitespace_pattern.sub("", text)
try:
# Split returns empty start and end groups
_, name, version, user, channel, revision, _ = ConanFileReference.sep_pattern.split(text)
</patch>
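The removed `whitespace_pattern.sub()` call is what collapsed the space-separated range. A minimal, self-contained illustration (plain `re`, no Conan imports) of why that substitution turned the valid range from the bug report into the rejected `'>16.5.0<17.0.0'` expression:

```python
import re

# The reference string from the bug report.
text = "foo/[>16.5.0 <17.0.0]@bar/stable"

# What the pre-patch code did before splitting the reference:
stripped = re.compile(r"\s+").sub("", text)
print(stripped)
# foo/[>16.5.0<17.0.0]@bar/stable  -> the range '>16.5.0<17.0.0' is then rejected
```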
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: Qiskit__qiskit-4924
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SabreLayout Fails when Quantum Register < Coupling Map size
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: Master, installed from source, v15.0
- **Python version**: Anaconda python, v3.7.6
- **Operating system**: Windows 10 LTSC 1809
### What is the current behavior?
When specifying a QuantumRegister that is smaller in magnitude than the size of the coupling_map you're using, SabreLayout will throw an error.
### Steps to reproduce the problem
```python
qasm_circuit = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[6];
cx q[3],q[0];
cx q[1],q[0];
cx q[5],q[0];
x q[2];
cx q[2],q[0];
x q[4];
cx q[4],q[0];"""
qasm_hardware = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[15];
cx q[3],q[0];
cx q[1],q[0];
cx q[5],q[0];
x q[2];
cx q[2],q[0];
x q[4];
cx q[4],q[0];"""
from qiskit import QuantumCircuit
from qiskit.transpiler import CouplingMap, passes, PassManager
# Load qreg = # logical qubits and qreg = # physical qubits circuits
logical = QuantumCircuit.from_qasm_str(qasm_circuit)
physical = QuantumCircuit.from_qasm_str(qasm_hardware)
# Define hardware graph (15 qubit)
melbourne = [[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4], [5, 6], [5, 9], [6, 8], [7, 8],
[9, 8], [9, 10], [11, 10], [11, 3], [11, 12], [12, 2], [13, 1], [13, 12], [14, 0],
[14, 13], [0, 1], [2, 1], [3, 2], [3, 4], [10, 4], [4, 5], [6, 5], [9, 5], [8, 6],
[8, 7], [8, 9], [10, 9], [10, 11], [3, 11], [12, 11], [2, 12], [1, 13], [12, 13],
[0, 14], [13, 14]]
# Define coupling map object
coupling_map = CouplingMap(couplinglist=melbourne)
# Routing passes
basic_r = passes.BasicSwap(coupling_map)
# Layout passes
sabre_l = passes.SabreLayout(coupling_map)
# Make pass lists
pass_sabre = [sabre_l, basic_r]
# Define passmanager object
pm = PassManager(pass_sabre)
# Run passes
logical_qreg = pm.run(logical)
physical_qreg = pm.run(physical)
```
output
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-21-ce5f1efe766e> in <module>
51
52 # Run passes
---> 53 logical_qreg = pm.run(logical)
54 physical_qreg = pm.run(physical)
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passmanager.py in run(self, circuits, output_name, callback)
212 """
213 if isinstance(circuits, QuantumCircuit):
--> 214 return self._run_single_circuit(circuits, output_name, callback)
215 elif len(circuits) == 1:
216 return self._run_single_circuit(circuits[0], output_name, callback)
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passmanager.py in _run_single_circuit(self, circuit, output_name, callback)
275 if callback is None and self.callback: # TODO to remove with __init__(callback)
276 callback = self.callback
--> 277 result = running_passmanager.run(circuit, output_name=output_name, callback=callback)
278 self.property_set = running_passmanager.property_set
279 return result
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\runningpassmanager.py in run(***failed resolving arguments***)
113 for passset in self.working_list:
114 for pass_ in passset:
--> 115 dag = self._do_pass(pass_, dag, passset.options)
116
117 circuit = dag_to_circuit(dag)
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\runningpassmanager.py in _do_pass(self, pass_, dag, options)
143 # Run the pass itself, if not already run
144 if pass_ not in self.valid_passes:
--> 145 dag = self._run_this_pass(pass_, dag)
146
147 # update the valid_passes property
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\runningpassmanager.py in _run_this_pass(self, pass_, dag)
174 # Measure time if we have a callback or logging set
175 start_time = time()
--> 176 pass_.run(FencedDAGCircuit(dag))
177 end_time = time()
178 run_time = end_time - start_time
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passes\layout\sabre_layout.py in run(self, dag)
105 final_layout = self._compose_layouts(initial_layout,
106 pass_final_layout,
--> 107 circ.qregs)
108 initial_layout = final_layout
109 circ = circ.reverse_ops()
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passes\layout\sabre_layout.py in _compose_layouts(self, initial_layout, pass_final_layout, qregs)
141 trivial_layout = Layout.generate_trivial_layout(*qregs)
142 pass_final_layout = Layout({trivial_layout[v.index]: p
--> 143 for v, p in pass_final_layout.get_virtual_bits().items()})
144 qubit_map = Layout.combine_into_edge_map(initial_layout, trivial_layout)
145 final_layout = {v: pass_final_layout[qubit_map[v]]
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passes\layout\sabre_layout.py in <dictcomp>(.0)
141 trivial_layout = Layout.generate_trivial_layout(*qregs)
142 pass_final_layout = Layout({trivial_layout[v.index]: p
--> 143 for v, p in pass_final_layout.get_virtual_bits().items()})
144 qubit_map = Layout.combine_into_edge_map(initial_layout, trivial_layout)
145 final_layout = {v: pass_final_layout[qubit_map[v]]
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\layout.py in __getitem__(self, item)
102 if item in self._v2p:
103 return self._v2p[item]
--> 104 raise KeyError('The item %s does not exist in the Layout' % (item,))
105
106 def __setitem__(self, key, value):
KeyError: 'The item 6 does not exist in the Layout'
```
The item it specifies seems to be random as well (i.e. it can throw item 6 - item 14 as KeyError). Note that the line:
```python
physical_qreg = pm.run(physical)
```
will not fail, just the one that is using the circuit with the Quantum Register < Coupling Map size.
### What is the expected behavior?
If you specify the initial layout yourself or use the built in Trivial or Dense options, they will not fail when the Quantum Register is less than the hardware graph size and I imagine it should be the same with Sabre. As far as I can tell, if you don't specify the Quantum Register size as the # logical qubits in the circuit, if you use the:
```python
[FullAncillaAllocation(coupling_map), EnlargeWithAncilla(), ApplyLayout()]
```
passes then it won't properly label the 'extra' qubits as ancilla, so you probably want to always specify the # logical qubits as the Quantum Register size. Granted, this analysis could be wrong, but it seems that way from what I could tell.
### Suggested solutions
Unsure what the source of the problem is (new-ish to qiskit), so no suggestions from me.
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = backend_sim.run(assemble(qc)).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## Changelog and Release Notes
113
114 The changelog for a particular release is dynamically generated and gets
115 written to the release page on Github for each release. For example, you can
116 find the page for the `0.9.0` release here:
117
118 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
119
120 The changelog for the current release can be found in the releases tab:
121 
122 The changelog provides a quick overview of notable changes for a given
123 release.
124
125 Additionally, as part of each release detailed release notes are written to
126 document in detail what has changed as part of a release. This includes any
127 documentation on potential breaking changes on upgrade and new features.
128 For example, you can find the release notes for the `0.9.0` release in the
129 Qiskit documentation here:
130
131 https://qiskit.org/documentation/release_notes.html#terra-0-9
132
133 ## License
134
135 [Apache License 2.0](LICENSE.txt)
136
[end of README.md]
[start of examples/python/rippleadd.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """
14 Ripple adder example based on Cuccaro et al., quant-ph/0410184.
15
16 """
17
18 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
19 from qiskit import BasicAer
20 from qiskit import execute
21
22 ###############################################################
23 # Set the backend name and coupling map.
24 ###############################################################
25 backend = BasicAer.get_backend("qasm_simulator")
26 coupling_map = [[0, 1], [0, 8], [1, 2], [1, 9], [2, 3], [2, 10], [3, 4], [3, 11],
27 [4, 5], [4, 12], [5, 6], [5, 13], [6, 7], [6, 14], [7, 15], [8, 9],
28 [9, 10], [10, 11], [11, 12], [12, 13], [13, 14], [14, 15]]
29
30 ###############################################################
31 # Make a quantum program for the n-bit ripple adder.
32 ###############################################################
33 n = 2
34
35 a = QuantumRegister(n, "a")
36 b = QuantumRegister(n, "b")
37 cin = QuantumRegister(1, "cin")
38 cout = QuantumRegister(1, "cout")
39 ans = ClassicalRegister(n + 1, "ans")
40 qc = QuantumCircuit(a, b, cin, cout, ans, name="rippleadd")
41
42
43 def majority(p, a, b, c):
44 """Majority gate."""
45 p.cx(c, b)
46 p.cx(c, a)
47 p.ccx(a, b, c)
48
49
50 def unmajority(p, a, b, c):
51 """Unmajority gate."""
52 p.ccx(a, b, c)
53 p.cx(c, a)
54 p.cx(a, b)
55
56
57 # Build a temporary subcircuit that adds a to b,
58 # storing the result in b
59 adder_subcircuit = QuantumCircuit(cin, a, b, cout)
60 majority(adder_subcircuit, cin[0], b[0], a[0])
61 for j in range(n - 1):
62 majority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
63 adder_subcircuit.cx(a[n - 1], cout[0])
64 for j in reversed(range(n - 1)):
65 unmajority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
66 unmajority(adder_subcircuit, cin[0], b[0], a[0])
67
68 # Set the inputs to the adder
69 qc.x(a[0]) # Set input a = 0...0001
70 qc.x(b) # Set input b = 1...1111
71 # Apply the adder
72 qc += adder_subcircuit
73 # Measure the output register in the computational basis
74 for j in range(n):
75 qc.measure(b[j], ans[j])
76 qc.measure(cout[0], ans[n])
77
78 ###############################################################
79 # execute the program.
80 ###############################################################
81
82 # First version: not mapped
83 job = execute(qc, backend=backend, coupling_map=None, shots=1024)
84 result = job.result()
85 print(result.get_counts(qc))
86
87 # Second version: mapped to 2x8 array coupling graph
88 job = execute(qc, backend=backend, coupling_map=coupling_map, shots=1024)
89 result = job.result()
90 print(result.get_counts(qc))
91
92 # Both versions should give the same distribution
93
[end of examples/python/rippleadd.py]
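As a quick classical sanity check of what the circuit above computes: the example sets a = 0...001 and b = 1...111, so for n = 2 the adder produces b + a = 0b01 + 0b11 = 0b100, and the `ans` register should read `'100'` in both the unmapped and the mapped run:

```python
# Classical reference for the n = 2 ripple-adder example above.
n, a, b = 2, 0b01, 0b11
print(format(a + b, "0{}b".format(n + 1)))  # 100
```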
[start of qiskit/compiler/transpile.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2019.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Circuit transpile function"""
14 import logging
15 from time import time
16 import warnings
17 from typing import List, Union, Dict, Callable, Any, Optional, Tuple
18 from qiskit.circuit.quantumcircuit import QuantumCircuit
19 from qiskit.providers import BaseBackend
20 from qiskit.providers.models import BackendProperties
21 from qiskit.providers.models.backendproperties import Gate
22 from qiskit.transpiler import Layout, CouplingMap, PropertySet, PassManager
23 from qiskit.transpiler.basepasses import BasePass
24 from qiskit.dagcircuit import DAGCircuit
25 from qiskit.tools.parallel import parallel_map
26 from qiskit.transpiler.passmanager_config import PassManagerConfig
27 from qiskit.pulse import Schedule
28 from qiskit.circuit.quantumregister import Qubit
29 from qiskit import user_config
30 from qiskit.transpiler.exceptions import TranspilerError
31 from qiskit.transpiler.passes import ApplyLayout
32 from qiskit.converters import isinstanceint, isinstancelist, dag_to_circuit, circuit_to_dag
33 from qiskit.transpiler.passes.basis.ms_basis_decomposer import MSBasisDecomposer
34 from qiskit.transpiler.preset_passmanagers import (level_0_pass_manager,
35 level_1_pass_manager,
36 level_2_pass_manager,
37 level_3_pass_manager)
38
39 LOG = logging.getLogger(__name__)
40
41
42 def transpile(circuits: Union[QuantumCircuit, List[QuantumCircuit]],
43 backend: Optional[BaseBackend] = None,
44 basis_gates: Optional[List[str]] = None,
45 coupling_map: Optional[Union[CouplingMap, List[List[int]]]] = None,
46 backend_properties: Optional[BackendProperties] = None,
47 initial_layout: Optional[Union[Layout, Dict, List]] = None,
48 layout_method: Optional[str] = None,
49 routing_method: Optional[str] = None,
50 translation_method: Optional[str] = None,
51 seed_transpiler: Optional[int] = None,
52 optimization_level: Optional[int] = None,
53 pass_manager: Optional[PassManager] = None,
54 callback: Optional[Callable[[BasePass, DAGCircuit, float,
55 PropertySet, int], Any]] = None,
56 output_name: Optional[Union[str, List[str]]] = None) -> Union[QuantumCircuit,
57 List[QuantumCircuit]]:
58 """Transpile one or more circuits, according to some desired transpilation targets.
59
60 All arguments may be given as either a singleton or list. In case of a list,
61 the length must be equal to the number of circuits being transpiled.
62
63 Transpilation is done in parallel using multiprocessing.
64
65 Args:
66 circuits: Circuit(s) to transpile
67 backend: If set, transpiler options are automatically grabbed from
68 ``backend.configuration()`` and ``backend.properties()``.
69 If any other option is explicitly set (e.g., ``coupling_map``), it
70 will override the backend's.
71
72 .. note::
73
74 The backend arg is purely for convenience. The resulting
75 circuit may be run on any backend as long as it is compatible.
76 basis_gates: List of basis gate names to unroll to
77 (e.g: ``['u1', 'u2', 'u3', 'cx']``). If ``None``, do not unroll.
78 coupling_map: Coupling map (perhaps custom) to target in mapping.
79 Multiple formats are supported:
80
81 #. ``CouplingMap`` instance
82 #. List, must be given as an adjacency matrix, where each entry
83 specifies all two-qubit interactions supported by backend,
84 e.g: ``[[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]``
85
86 backend_properties: properties returned by a backend, including information on gate
87 errors, readout errors, qubit coherence times, etc. Find a backend
88 that provides this information with: ``backend.properties()``
89 initial_layout: Initial position of virtual qubits on physical qubits.
90 If this layout makes the circuit compatible with the coupling_map
91 constraints, it will be used. The final layout is not guaranteed to be the same,
92 as the transpiler may permute qubits through swaps or other means.
93 Multiple formats are supported:
94
95 #. ``Layout`` instance
96 #. Dict
97 * virtual to physical::
98
99 {qr[0]: 0,
100 qr[1]: 3,
101 qr[2]: 5}
102
103 * physical to virtual::
104
105 {0: qr[0],
106 3: qr[1],
107 5: qr[2]}
108
109 #. List
110
111 * virtual to physical::
112
113 [0, 3, 5] # virtual qubits are ordered (in addition to named)
114
115 * physical to virtual::
116
117 [qr[0], None, None, qr[1], None, qr[2]]
118
119 layout_method: Name of layout selection pass ('trivial', 'dense', 'noise_adaptive', 'sabre')
120 Sometimes a perfect layout can be available in which case the layout_method
121 may not run.
122 routing_method: Name of routing pass ('basic', 'lookahead', 'stochastic', 'sabre')
123 translation_method: Name of translation pass ('unroller', 'translator', 'synthesis')
124 seed_transpiler: Sets random seed for the stochastic parts of the transpiler
125 optimization_level: How much optimization to perform on the circuits.
126 Higher levels generate more optimized circuits,
127 at the expense of longer transpilation time.
128 * 0: no optimization
129 * 1: light optimization
130 * 2: heavy optimization
131 * 3: even heavier optimization
132 If ``None``, level 1 will be chosen as default.
133 pass_manager: The pass manager to use for a custom pipeline of transpiler passes.
134 If this arg is present, all other args will be ignored and the
135 pass manager will be used directly (Qiskit will not attempt to
136 auto-select a pass manager based on transpile options).
137 callback: A callback function that will be called after each
138 pass execution. The function will be called with 5 keyword
139 arguments,
140 | ``pass_``: the pass being run.
141 | ``dag``: the dag output of the pass.
142 | ``time``: the time to execute the pass.
143 | ``property_set``: the property set.
144 | ``count``: the index for the pass execution.
145 The exact arguments passed expose the internals of the pass manager,
146 and are subject to change as the pass manager internals change. If
147 you intend to reuse a callback function over multiple releases, be
148 sure to check that the arguments being passed are the same.
149 To use the callback feature, define a function that will
150 take in kwargs dict and access the variables. For example::
151
152 def callback_func(**kwargs):
153 pass_ = kwargs['pass_']
154 dag = kwargs['dag']
155 time = kwargs['time']
156 property_set = kwargs['property_set']
157 count = kwargs['count']
158 ...
159 transpile(circ, callback=callback_func)
160
161 output_name: A list with strings to identify the output circuits. The length of
162 the list should be exactly the length of the ``circuits`` parameter.
163
164 Returns:
165 The transpiled circuit(s).
166
167 Raises:
168 TranspilerError: in case of bad inputs to transpiler (like conflicting parameters)
169 or errors in passes
170 """
171 circuits = circuits if isinstance(circuits, list) else [circuits]
172
173 # transpiling schedules is not supported yet.
174 start_time = time()
175 if all(isinstance(c, Schedule) for c in circuits):
176 warnings.warn("Transpiling schedules is not supported yet.", UserWarning)
177 if len(circuits) == 1:
178 end_time = time()
179 _log_transpile_time(start_time, end_time)
180 return circuits[0]
181 end_time = time()
182 _log_transpile_time(start_time, end_time)
183 return circuits
184
185 if pass_manager is not None:
186 _check_conflicting_argument(optimization_level=optimization_level, basis_gates=basis_gates,
187 coupling_map=coupling_map, seed_transpiler=seed_transpiler,
188 backend_properties=backend_properties,
189 initial_layout=initial_layout, layout_method=layout_method,
190 routing_method=routing_method,
191 translation_method=translation_method,
192 backend=backend)
193
194 warnings.warn("The parameter pass_manager in transpile is being deprecated. "
195 "The preferred way to tranpile a circuit using a custom pass manager is"
196 " pass_manager.run(circuit)", DeprecationWarning, stacklevel=2)
197 return pass_manager.run(circuits, output_name=output_name, callback=callback)
198
199 if optimization_level is None:
200 # Take optimization level from the configuration or 1 as default.
201 config = user_config.get_config()
202 optimization_level = config.get('transpile_optimization_level', 1)
203
204 # Get transpile_args to configure the circuit transpilation job(s)
205 transpile_args = _parse_transpile_args(circuits, backend, basis_gates, coupling_map,
206 backend_properties, initial_layout,
207 layout_method, routing_method, translation_method,
208 seed_transpiler, optimization_level,
209 callback, output_name)
210
211 _check_circuits_coupling_map(circuits, transpile_args, backend)
212
213 # Transpile circuits in parallel
214 circuits = parallel_map(_transpile_circuit, list(zip(circuits, transpile_args)))
215
216 if len(circuits) == 1:
217 end_time = time()
218 _log_transpile_time(start_time, end_time)
219 return circuits[0]
220 end_time = time()
221 _log_transpile_time(start_time, end_time)
222 return circuits
223
224
225 def _check_conflicting_argument(**kargs):
226 conflicting_args = [arg for arg, value in kargs.items() if value]
227 if conflicting_args:
228 raise TranspilerError("The parameters pass_manager conflicts with the following "
229 "parameter(s): {}.".format(', '.join(conflicting_args)))
230
231
232 def _check_circuits_coupling_map(circuits, transpile_args, backend):
233 # Check circuit width against number of qubits in coupling_map(s)
234 coupling_maps_list = list(config['pass_manager_config'].coupling_map for config in
235 transpile_args)
236 for circuit, parsed_coupling_map in zip(circuits, coupling_maps_list):
237 # If coupling_map is not None or num_qubits == 1
238 num_qubits = len(circuit.qubits)
239 max_qubits = None
240 if isinstance(parsed_coupling_map, CouplingMap):
241 max_qubits = parsed_coupling_map.size()
242
243 # If coupling_map is None, the limit might be in the backend (like in 1Q devices)
244 elif backend is not None and not backend.configuration().simulator:
245 max_qubits = backend.configuration().n_qubits
246
247 if max_qubits is not None and (num_qubits > max_qubits):
248 raise TranspilerError('Number of qubits ({}) '.format(num_qubits) +
249 'in {} '.format(circuit.name) +
250 'is greater than maximum ({}) '.format(max_qubits) +
251 'in the coupling_map')
252
253
254 def _log_transpile_time(start_time, end_time):
255 log_msg = "Total Transpile Time - %.5f (ms)" % ((end_time - start_time) * 1000)
256 LOG.info(log_msg)
257
258
259 def _transpile_circuit(circuit_config_tuple: Tuple[QuantumCircuit, Dict]) -> QuantumCircuit:
260 """Select a PassManager and run a single circuit through it.
261 Args:
262 circuit_config_tuple (tuple):
263 circuit (QuantumCircuit): circuit to transpile
264 transpile_config (dict): configuration dictating how to transpile. The
265 dictionary has the following format:
266 {'optimization_level': int,
267 'output_name': string,
268 'callback': callable,
269 'pass_manager_config': PassManagerConfig}
270 Returns:
271 The transpiled circuit
272 Raises:
273 TranspilerError: if transpile_config is not valid or transpilation incurs error
274 """
275 circuit, transpile_config = circuit_config_tuple
276
277 pass_manager_config = transpile_config['pass_manager_config']
278
279 if transpile_config['faulty_qubits_map']:
280 pass_manager_config.initial_layout = _remap_layout_faulty_backend(
281 pass_manager_config.initial_layout, transpile_config['faulty_qubits_map'])
282
283 ms_basis_swap = None
284 if (pass_manager_config.translation_method == 'unroller'
285 and pass_manager_config.basis_gates is not None):
286 # Workaround for ion trap support: If basis gates includes
287 # Mølmer-Sørensen (rxx) and the circuit includes gates outside the basis,
288 # first unroll to u3, cx, then run MSBasisDecomposer to target basis.
289 basic_insts = ['measure', 'reset', 'barrier', 'snapshot']
290 device_insts = set(pass_manager_config.basis_gates).union(basic_insts)
291 if 'rxx' in pass_manager_config.basis_gates and \
292 not device_insts >= circuit.count_ops().keys():
293 ms_basis_swap = pass_manager_config.basis_gates
294 pass_manager_config.basis_gates = list(
295 set(['u3', 'cx']).union(pass_manager_config.basis_gates))
296
297 # we choose an appropriate one based on desired optimization level
298 level = transpile_config['optimization_level']
299
300 if level == 0:
301 pass_manager = level_0_pass_manager(pass_manager_config)
302 elif level == 1:
303 pass_manager = level_1_pass_manager(pass_manager_config)
304 elif level == 2:
305 pass_manager = level_2_pass_manager(pass_manager_config)
306 elif level == 3:
307 pass_manager = level_3_pass_manager(pass_manager_config)
308 else:
309 raise TranspilerError("optimization_level can range from 0 to 3.")
310
311 if ms_basis_swap is not None:
312 pass_manager.append(MSBasisDecomposer(ms_basis_swap))
313
314 result = pass_manager.run(circuit, callback=transpile_config['callback'],
315 output_name=transpile_config['output_name'])
316
317 if transpile_config['faulty_qubits_map']:
318 return _remap_circuit_faulty_backend(result, transpile_config['backend_num_qubits'],
319 pass_manager_config.backend_properties,
320 transpile_config['faulty_qubits_map'])
321
322 return result
323
324
325 def _remap_circuit_faulty_backend(circuit, num_qubits, backend_prop, faulty_qubits_map):
326 faulty_qubits = backend_prop.faulty_qubits() if backend_prop else []
327 disconnected_qubits = {k for k, v in faulty_qubits_map.items()
328 if v is None}.difference(faulty_qubits)
329 faulty_qubits_map_reverse = {v: k for k, v in faulty_qubits_map.items()}
330 if faulty_qubits:
331 faulty_qreg = circuit._create_qreg(len(faulty_qubits), 'faulty')
332 else:
333 faulty_qreg = []
334 if disconnected_qubits:
335 disconnected_qreg = circuit._create_qreg(len(disconnected_qubits), 'disconnected')
336 else:
337 disconnected_qreg = []
338
339 new_layout = Layout()
340 faulty_qubit = 0
341 disconnected_qubit = 0
342
343 for real_qubit in range(num_qubits):
344 if faulty_qubits_map[real_qubit] is not None:
345 new_layout[real_qubit] = circuit._layout[faulty_qubits_map[real_qubit]]
346 else:
347 if real_qubit in faulty_qubits:
348 new_layout[real_qubit] = faulty_qreg[faulty_qubit]
349 faulty_qubit += 1
350 else:
351 new_layout[real_qubit] = disconnected_qreg[disconnected_qubit]
352 disconnected_qubit += 1
353 physical_layout_dict = {}
354 for qubit in circuit.qubits:
355 physical_layout_dict[qubit] = faulty_qubits_map_reverse[qubit.index]
356 for qubit in faulty_qreg[:] + disconnected_qreg[:]:
357 physical_layout_dict[qubit] = new_layout[qubit]
358 dag_circuit = circuit_to_dag(circuit)
359 apply_layout_pass = ApplyLayout()
360 apply_layout_pass.property_set['layout'] = Layout(physical_layout_dict)
361 circuit = dag_to_circuit(apply_layout_pass.run(dag_circuit))
362 circuit._layout = new_layout
363 return circuit
364
365
366 def _remap_layout_faulty_backend(layout, faulty_qubits_map):
367 if layout is None:
368 return layout
369 new_layout = Layout()
370 for virtual, physical in layout.get_virtual_bits().items():
371 if faulty_qubits_map[physical] is None:
372 raise TranspilerError("The initial_layout parameter refers to faulty"
373 " or disconnected qubits")
374 new_layout[virtual] = faulty_qubits_map[physical]
375 return new_layout
376
377
378 def _parse_transpile_args(circuits, backend,
379 basis_gates, coupling_map, backend_properties,
380 initial_layout, layout_method, routing_method, translation_method,
381 seed_transpiler, optimization_level,
382 callback, output_name) -> List[Dict]:
383 """Resolve the various types of args allowed to the transpile() function through
384 duck typing, overriding args, etc. Refer to the transpile() docstring for details on
385 what types of inputs are allowed.
386
387 Here the args are resolved by converting them to standard instances, and prioritizing
388 them in case a transpile option is passed through multiple args (explicitly setting an
389 arg has more priority than the arg set by backend).
390
391 Returns:
392 list[dicts]: a list of transpile parameters.
393 """
394 if initial_layout is not None and layout_method is not None:
395 warnings.warn("initial_layout provided; layout_method is ignored.",
396 UserWarning)
397 # Each arg could be single or a list. If list, it must be the same size as
398 # number of circuits. If single, duplicate to create a list of that size.
399 num_circuits = len(circuits)
400
401 basis_gates = _parse_basis_gates(basis_gates, backend, circuits)
402 faulty_qubits_map = _parse_faulty_qubits_map(backend, num_circuits)
403 coupling_map = _parse_coupling_map(coupling_map, backend, num_circuits)
404 backend_properties = _parse_backend_properties(backend_properties, backend, num_circuits)
405 backend_num_qubits = _parse_backend_num_qubits(backend, num_circuits)
406 initial_layout = _parse_initial_layout(initial_layout, circuits)
407 layout_method = _parse_layout_method(layout_method, num_circuits)
408 routing_method = _parse_routing_method(routing_method, num_circuits)
409 translation_method = _parse_translation_method(translation_method, num_circuits)
410 seed_transpiler = _parse_seed_transpiler(seed_transpiler, num_circuits)
411 optimization_level = _parse_optimization_level(optimization_level, num_circuits)
412 output_name = _parse_output_name(output_name, circuits)
413 callback = _parse_callback(callback, num_circuits)
414
415 list_transpile_args = []
416 for args in zip(basis_gates, coupling_map, backend_properties,
417 initial_layout, layout_method, routing_method, translation_method,
418 seed_transpiler, optimization_level,
419 output_name, callback, backend_num_qubits, faulty_qubits_map):
420 transpile_args = {'pass_manager_config': PassManagerConfig(basis_gates=args[0],
421 coupling_map=args[1],
422 backend_properties=args[2],
423 initial_layout=args[3],
424 layout_method=args[4],
425 routing_method=args[5],
426 translation_method=args[6],
427 seed_transpiler=args[7]),
428 'optimization_level': args[8],
429 'output_name': args[9],
430 'callback': args[10],
431 'backend_num_qubits': args[11],
432 'faulty_qubits_map': args[12]}
433 list_transpile_args.append(transpile_args)
434
435 return list_transpile_args
436
437
438 def _create_faulty_qubits_map(backend):
439 """If the backend has faulty qubits, those should be excluded. A faulty_qubit_map is a map
440 from working qubit in the backend to dummy qubits that are consecutive and connected."""
441 faulty_qubits_map = None
442 if backend is not None:
443 if backend.properties():
444 faulty_qubits = backend.properties().faulty_qubits()
445 faulty_edges = [gates.qubits for gates in backend.properties().faulty_gates()]
446 else:
447 faulty_qubits = []
448 faulty_edges = []
449
450 if faulty_qubits or faulty_edges:
451 faulty_qubits_map = {}
452 configuration = backend.configuration()
453 full_coupling_map = configuration.coupling_map
454 functional_cm_list = [edge for edge in full_coupling_map
455 if (set(edge).isdisjoint(faulty_qubits) and
456 edge not in faulty_edges)]
457
458 connected_working_qubits = CouplingMap(functional_cm_list).largest_connected_component()
459 dummy_qubit_counter = 0
460 for qubit in range(configuration.n_qubits):
461 if qubit in connected_working_qubits:
462 faulty_qubits_map[qubit] = dummy_qubit_counter
463 dummy_qubit_counter += 1
464 else:
465 faulty_qubits_map[qubit] = None
466 return faulty_qubits_map
467
468
469 def _parse_basis_gates(basis_gates, backend, circuits):
470 # try getting basis_gates from user, else backend
471 if basis_gates is None:
472 if getattr(backend, 'configuration', None):
473 basis_gates = getattr(backend.configuration(), 'basis_gates', None)
474 # basis_gates could be None, or a list of basis, e.g. ['u3', 'cx']
475 if basis_gates is None or (isinstance(basis_gates, list) and
476 all(isinstance(i, str) for i in basis_gates)):
477 basis_gates = [basis_gates] * len(circuits)
478
479 return basis_gates
480
481
482 def _parse_coupling_map(coupling_map, backend, num_circuits):
483 # try getting coupling_map from user, else backend
484 if coupling_map is None:
485 if getattr(backend, 'configuration', None):
486 configuration = backend.configuration()
487 if hasattr(configuration, 'coupling_map') and configuration.coupling_map:
488 faulty_map = _create_faulty_qubits_map(backend)
489 if faulty_map:
490 coupling_map = CouplingMap()
491 for qubit1, qubit2 in configuration.coupling_map:
492 if faulty_map[qubit1] is not None and faulty_map[qubit2] is not None:
493 coupling_map.add_edge(faulty_map[qubit1], faulty_map[qubit2])
494 else:
495 coupling_map = CouplingMap(configuration.coupling_map)
496
497 # coupling_map could be None, or a list of lists, e.g. [[0, 1], [2, 1]]
498 if coupling_map is None or isinstance(coupling_map, CouplingMap):
499 coupling_map = [coupling_map] * num_circuits
500 elif isinstance(coupling_map, list) and all(isinstance(i, list) and len(i) == 2
501 for i in coupling_map):
502 coupling_map = [coupling_map] * num_circuits
503
504 coupling_map = [CouplingMap(cm) if isinstance(cm, list) else cm for cm in coupling_map]
505
506 return coupling_map
507
508
509 def _parse_backend_properties(backend_properties, backend, num_circuits):
510 # try getting backend_properties from user, else backend
511 if backend_properties is None:
512 if getattr(backend, 'properties', None):
513 backend_properties = backend.properties()
514 if backend_properties and \
515 (backend_properties.faulty_qubits() or backend_properties.faulty_gates()):
516 faulty_qubits = sorted(backend_properties.faulty_qubits(), reverse=True)
517 faulty_edges = [gates.qubits for gates in backend_properties.faulty_gates()]
518 # remove faulty qubits in backend_properties.qubits
519 for faulty_qubit in faulty_qubits:
520 del backend_properties.qubits[faulty_qubit]
521
522 gates = []
523 for gate in backend_properties.gates:
524 # remove gates using faulty edges or with faulty qubits (and remap the
525 # gates in terms of faulty_qubits_map)
526 faulty_qubits_map = _create_faulty_qubits_map(backend)
527 if any([faulty_qubits_map[qubits] is not None for qubits in gate.qubits]) or \
528 gate.qubits in faulty_edges:
529 continue
530 gate_dict = gate.to_dict()
531 replacement_gate = Gate.from_dict(gate_dict)
532 gate_dict['qubits'] = [faulty_qubits_map[qubit] for qubit in gate.qubits]
533 args = '_'.join([str(qubit) for qubit in gate_dict['qubits']])
534 gate_dict['name'] = "%s%s" % (gate_dict['gate'], args)
535 gates.append(replacement_gate)
536
537 backend_properties.gates = gates
538 if not isinstance(backend_properties, list):
539 backend_properties = [backend_properties] * num_circuits
540 return backend_properties
541
542
543 def _parse_backend_num_qubits(backend, num_circuits):
544 if backend is None:
545 return [None] * num_circuits
546 if not isinstance(backend, list):
547 return [backend.configuration().n_qubits] * num_circuits
548 backend_num_qubits = []
549 for a_backend in backend:
550 backend_num_qubits.append(a_backend.configuration().n_qubits)
551 return backend_num_qubits
552
553
554 def _parse_initial_layout(initial_layout, circuits):
555 # initial_layout could be None, or a list of ints, e.g. [0, 5, 14]
556 # or a list of tuples/None e.g. [qr[0], None, qr[1]] or a dict e.g. {qr[0]: 0}
557 def _layout_from_raw(initial_layout, circuit):
558 if initial_layout is None or isinstance(initial_layout, Layout):
559 return initial_layout
560 elif isinstancelist(initial_layout):
561 if all(isinstanceint(elem) for elem in initial_layout):
562 initial_layout = Layout.from_intlist(initial_layout, *circuit.qregs)
563 elif all(elem is None or isinstance(elem, Qubit) for elem in initial_layout):
564 initial_layout = Layout.from_qubit_list(initial_layout)
565 elif isinstance(initial_layout, dict):
566 initial_layout = Layout(initial_layout)
567 else:
568 raise TranspilerError("The initial_layout parameter could not be parsed")
569 return initial_layout
570
571 # multiple layouts?
572 if isinstance(initial_layout, list) and \
573 any(isinstance(i, (list, dict)) for i in initial_layout):
574 initial_layout = [_layout_from_raw(lo, circ) if isinstance(lo, (list, dict)) else lo
575 for lo, circ in zip(initial_layout, circuits)]
576 else:
577 # even if one layout, but multiple circuits, the layout needs to be adapted for each
578 initial_layout = [_layout_from_raw(initial_layout, circ) for circ in circuits]
579
580 if not isinstance(initial_layout, list):
581 initial_layout = [initial_layout] * len(circuits)
582
583 return initial_layout
584
585
586 def _parse_layout_method(layout_method, num_circuits):
587 if not isinstance(layout_method, list):
588 layout_method = [layout_method] * num_circuits
589 return layout_method
590
591
592 def _parse_routing_method(routing_method, num_circuits):
593 if not isinstance(routing_method, list):
594 routing_method = [routing_method] * num_circuits
595 return routing_method
596
597
598 def _parse_translation_method(translation_method, num_circuits):
599 if not isinstance(translation_method, list):
600 translation_method = [translation_method] * num_circuits
601 return translation_method
602
603
604 def _parse_seed_transpiler(seed_transpiler, num_circuits):
605 if not isinstance(seed_transpiler, list):
606 seed_transpiler = [seed_transpiler] * num_circuits
607 return seed_transpiler
608
609
610 def _parse_optimization_level(optimization_level, num_circuits):
611 if not isinstance(optimization_level, list):
612 optimization_level = [optimization_level] * num_circuits
613 return optimization_level
614
615
616 def _parse_pass_manager(pass_manager, num_circuits):
617 if not isinstance(pass_manager, list):
618 pass_manager = [pass_manager] * num_circuits
619 return pass_manager
620
621
622 def _parse_callback(callback, num_circuits):
623 if not isinstance(callback, list):
624 callback = [callback] * num_circuits
625 return callback
626
627
628 def _parse_faulty_qubits_map(backend, num_circuits):
629 if backend is None:
630 return [None] * num_circuits
631 if not isinstance(backend, list):
632 return [_create_faulty_qubits_map(backend)] * num_circuits
633 faulty_qubits_map = []
634 for a_backend in backend:
635 faulty_qubits_map.append(_create_faulty_qubits_map(a_backend))
636 return faulty_qubits_map
637
638
639 def _parse_output_name(output_name, circuits):
640 # naming and returning circuits
641 # output_name could be either a string or a list
642 if output_name is not None:
643 if isinstance(output_name, str):
644 # single circuit
645 if len(circuits) == 1:
646 return [output_name]
647 # multiple circuits
648 else:
649 raise TranspilerError("Expected a list object of length equal " +
650 "to that of the number of circuits " +
651 "being transpiled")
652 elif isinstance(output_name, list):
653 if len(circuits) == len(output_name) and \
654 all(isinstance(name, str) for name in output_name):
655 return output_name
656 else:
657 raise TranspilerError("The length of output_name list "
658 "must be equal to the number of "
659 "transpiled circuits and the output_name "
660 "list should be strings.")
661 else:
662 raise TranspilerError("The parameter output_name should be a string or a"
663 "list of strings: %s was used." % type(output_name))
664 else:
665 return [circuit.name for circuit in circuits]
666
[end of qiskit/compiler/transpile.py]
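A hedged usage sketch of the `transpile()` entry point documented above, exercising the list-form `coupling_map`, the int-list `initial_layout`, and the keyword-argument `callback` described in the docstring (the gate set and printed timings are illustrative only):

```python
from qiskit import QuantumCircuit
from qiskit.compiler import transpile

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)

def callback_func(**kwargs):
    # Keyword arguments follow the docstring: pass_, dag, time, property_set, count.
    print(kwargs["count"], kwargs["pass_"].name(), "%.4fs" % kwargs["time"])

tqc = transpile(
    qc,
    coupling_map=[[0, 1], [1, 2]],   # adjacency-list form
    initial_layout=[0, 1, 2],        # virtual-to-physical int list
    basis_gates=["u3", "cx"],
    optimization_level=1,
    callback=callback_func,
)
print(tqc.count_ops())
```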
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: Qiskit/qiskit
base_commit: 995ac92376cfa632b124e171023c779bd2f3119b
problem_statement:
SabreLayout Fails when Quantum Register < Coupling Map size
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: Master, installed from source, v15.0
- **Python version**: Anaconda python, v3.7.6
- **Operating system**: Windows 10 LTSC 1809
### What is the current behavior?
When specifying a QuantumRegister that is smaller in magnitude than the size of the coupling_map you're using, SabreLayout will throw an error.
### Steps to reproduce the problem
```python
qasm_circuit = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[6];
cx q[3],q[0];
cx q[1],q[0];
cx q[5],q[0];
x q[2];
cx q[2],q[0];
x q[4];
cx q[4],q[0];"""
qasm_hardware = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[15];
cx q[3],q[0];
cx q[1],q[0];
cx q[5],q[0];
x q[2];
cx q[2],q[0];
x q[4];
cx q[4],q[0];"""
from qiskit import QuantumCircuit
from qiskit.transpiler import CouplingMap, passes, PassManager
# Load qreg = # logical qubits and qreg = # physical qubits circuits
logical = QuantumCircuit.from_qasm_str(qasm_circuit)
physical = QuantumCircuit.from_qasm_str(qasm_hardware)
# Define hardware graph (15 qubit)
melbourne = [[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4], [5, 6], [5, 9], [6, 8], [7, 8],
[9, 8], [9, 10], [11, 10], [11, 3], [11, 12], [12, 2], [13, 1], [13, 12], [14, 0],
[14, 13], [0, 1], [2, 1], [3, 2], [3, 4], [10, 4], [4, 5], [6, 5], [9, 5], [8, 6],
[8, 7], [8, 9], [10, 9], [10, 11], [3, 11], [12, 11], [2, 12], [1, 13], [12, 13],
[0, 14], [13, 14]]
# Define coupling map object
coupling_map = CouplingMap(couplinglist=melbourne)
# Routing passes
basic_r = passes.BasicSwap(coupling_map)
# Layout passes
sabre_l = passes.SabreLayout(coupling_map)
# Make pass lists
pass_sabre = [sabre_l, basic_r]
# Define passmanager object
pm = PassManager(pass_sabre)
# Run passes
logical_qreg = pm.run(logical)
physical_qreg = pm.run(physical)
```
output
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-21-ce5f1efe766e> in <module>
51
52 # Run passes
---> 53 logical_qreg = pm.run(logical)
54 physical_qreg = pm.run(physical)
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passmanager.py in run(self, circuits, output_name, callback)
212 """
213 if isinstance(circuits, QuantumCircuit):
--> 214 return self._run_single_circuit(circuits, output_name, callback)
215 elif len(circuits) == 1:
216 return self._run_single_circuit(circuits[0], output_name, callback)
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passmanager.py in _run_single_circuit(self, circuit, output_name, callback)
275 if callback is None and self.callback: # TODO to remove with __init__(callback)
276 callback = self.callback
--> 277 result = running_passmanager.run(circuit, output_name=output_name, callback=callback)
278 self.property_set = running_passmanager.property_set
279 return result
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\runningpassmanager.py in run(***failed resolving arguments***)
113 for passset in self.working_list:
114 for pass_ in passset:
--> 115 dag = self._do_pass(pass_, dag, passset.options)
116
117 circuit = dag_to_circuit(dag)
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\runningpassmanager.py in _do_pass(self, pass_, dag, options)
143 # Run the pass itself, if not already run
144 if pass_ not in self.valid_passes:
--> 145 dag = self._run_this_pass(pass_, dag)
146
147 # update the valid_passes property
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\runningpassmanager.py in _run_this_pass(self, pass_, dag)
174 # Measure time if we have a callback or logging set
175 start_time = time()
--> 176 pass_.run(FencedDAGCircuit(dag))
177 end_time = time()
178 run_time = end_time - start_time
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passes\layout\sabre_layout.py in run(self, dag)
105 final_layout = self._compose_layouts(initial_layout,
106 pass_final_layout,
--> 107 circ.qregs)
108 initial_layout = final_layout
109 circ = circ.reverse_ops()
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passes\layout\sabre_layout.py in _compose_layouts(self, initial_layout, pass_final_layout, qregs)
141 trivial_layout = Layout.generate_trivial_layout(*qregs)
142 pass_final_layout = Layout({trivial_layout[v.index]: p
--> 143 for v, p in pass_final_layout.get_virtual_bits().items()})
144 qubit_map = Layout.combine_into_edge_map(initial_layout, trivial_layout)
145 final_layout = {v: pass_final_layout[qubit_map[v]]
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\passes\layout\sabre_layout.py in <dictcomp>(.0)
141 trivial_layout = Layout.generate_trivial_layout(*qregs)
142 pass_final_layout = Layout({trivial_layout[v.index]: p
--> 143 for v, p in pass_final_layout.get_virtual_bits().items()})
144 qubit_map = Layout.combine_into_edge_map(initial_layout, trivial_layout)
145 final_layout = {v: pass_final_layout[qubit_map[v]]
c:\anaconda3\envs\qubit\lib\site-packages\qiskit\transpiler\layout.py in __getitem__(self, item)
102 if item in self._v2p:
103 return self._v2p[item]
--> 104 raise KeyError('The item %s does not exist in the Layout' % (item,))
105
106 def __setitem__(self, key, value):
KeyError: 'The item 6 does not exist in the Layout'
```
The item it specifies seems to be random as well (i.e. it can raise a KeyError for any item from 6 to 14). Note that the line:
```python
physical_qreg = pm.run(physical)
```
will not fail; only the run that uses the circuit whose Quantum Register is smaller than the coupling map size does.
### What is the expected behavior?
If you specify the initial layout yourself or use the built-in Trivial or Dense options, they will not fail when the Quantum Register is smaller than the hardware graph, and I imagine it should be the same with Sabre. As far as I can tell, if you don't set the Quantum Register size to the number of logical qubits in the circuit and you use the:
```python
[FullAncillaAllocation(coupling_map), EnlargeWithAncilla(), ApplyLayout()]
```
passes, then they won't properly label the 'extra' qubits as ancillas, so you probably always want to specify the number of logical qubits as the Quantum Register size. Granted, this analysis could be wrong, but it seems that way from what I could tell.
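For illustration (not part of the original report), here is a minimal sketch of that ancilla-labelling pass sequence, assuming the `qasm_circuit` string and `melbourne` coupling list from the reproduction snippet above; exact pass behaviour may differ between Terra versions.
```python
from qiskit import QuantumCircuit
from qiskit.transpiler import CouplingMap, PassManager
from qiskit.transpiler.passes import (TrivialLayout, FullAncillaAllocation,
                                      EnlargeWithAncilla, ApplyLayout)

logical = QuantumCircuit.from_qasm_str(qasm_circuit)   # 6 virtual qubits
coupling_map = CouplingMap(couplinglist=melbourne)     # 15 physical qubits

embed = PassManager([
    TrivialLayout(coupling_map),          # pick some layout for the 6 qubits
    FullAncillaAllocation(coupling_map),  # pad the layout with the idle physical qubits
    EnlargeWithAncilla(),                 # add the matching ancilla register to the circuit
    ApplyLayout(),                        # rewrite the circuit onto physical qubits
])
embedded = embed.run(logical)
print(embedded.num_qubits)  # 15: the extra qubits are now labelled as ancillas
```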
### Suggested solutions
Unsure what the source of the problem is (new-ish to qiskit), so no suggestions from me.
|
Hi, I also encountered this bug. I think replacing 'circ.qregs' with 'new_circ.qregs' at line 104 in sabre_layout.py would solve the problem.
before
```python
for i in range(self.max_iterations):
for _ in ('forward', 'backward'):
pm = self._layout_and_route_passmanager(initial_layout)
new_circ = pm.run(circ)
# Update initial layout and reverse the unmapped circuit.
pass_final_layout = pm.property_set['final_layout']
final_layout = self._compose_layouts(initial_layout,
pass_final_layout,
circ.qregs)
initial_layout = final_layout
circ = circ.reverse_ops()
```
after
```python
for i in range(self.max_iterations):
for _ in ('forward', 'backward'):
pm = self._layout_and_route_passmanager(initial_layout)
new_circ = pm.run(circ)
# Update initial layout and reverse the unmapped circuit.
pass_final_layout = pm.property_set['final_layout']
final_layout = self._compose_layouts(initial_layout,
pass_final_layout,
new_circ.qregs)
initial_layout = final_layout
circ = circ.reverse_ops()
```
|
2020-08-13T07:38:33Z
|
<patch>
diff --git a/qiskit/transpiler/passes/layout/sabre_layout.py b/qiskit/transpiler/passes/layout/sabre_layout.py
--- a/qiskit/transpiler/passes/layout/sabre_layout.py
+++ b/qiskit/transpiler/passes/layout/sabre_layout.py
@@ -102,7 +102,7 @@ def run(self, dag):
pass_final_layout = pm.property_set['final_layout']
final_layout = self._compose_layouts(initial_layout,
pass_final_layout,
- circ.qregs)
+ new_circ.qregs)
initial_layout = final_layout
circ = circ.reverse_ops()
</patch>
|
[]
|
[]
| |||
mesonbuild__meson-1541
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need dependency(..., headers_only : true) and so on for use with Valgrind
Similar to #726 but not the same.
Use case: I want to detect valgrind via pkg-config and pass the right includes for valgrind/valgrind.h to my build, but not link to the valgrind libs (which fails because they're static libs and not built with -fPIC, but is also unnecessary because everything we need is in the headers via magic macros).
I know of other use cases where this is needed, e.g. where glib headers are used for portable typedefs but no linking is done.
`cflags = run_command('pkg-config', '--cflags', 'dep').stdout().strip().split()` is not cross-compile friendly, so doesn't seem like a good solution.
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://ci.appveyor.com/project/jpakkane/meson)
12
13 #### Dependencies
14
15 - [Python](http://python.org) (version 3.4 or newer)
16 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
17
18 #### Installing from source
19
20 You can run Meson directly from a revision control checkout or an
21 extracted tarball. If you wish you can install it locally with the
22 standard Python distutils command `python3 setup.py install <your
23 options here>`.
24
25 Meson is also available from
26 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
27 with `pip3 install meson` (this does not require a source checkout,
28 pip will download the package automatically). The exact command to
29 type to install with pip can vary between systems, be sure to use the
30 Python 3 version of pip.
31
32 #### Creating a standalone script
33
34 Meson can be run as a [Python zip
35 app](https://docs.python.org/3/library/zipapp.html). To generate the
36 executable run the following command:
37
38 python3 -m zipapp -p '/usr/bin/env python3' -m meson:main -o meson <source checkout>
39
40 Note that the source checkout may not be `meson` because it would
41 clash with the generated binary name.
42
43 This will zip all files inside the source checkout into the script
44 which includes hundreds of tests, so you might want to temporarily
45 remove those before running it.
46
47 #### Running
48
49 Meson requires that you have a source directory and a build directory
50 and that these two are different. In your source root must exist a file
51 called 'meson.build'. To generate the build system run this command:
52
53 `meson <source directory> <build directory>`
54
55 Depending on how you obtained Meson the command might also be called
56 `meson.py` instead of plain `meson`. In the rest of this document we
57 are going to use the latter form.
58
59 You can omit either of the two directories, and Meson will substitute
60 the current directory and autodetect what you mean. This allows you to
61 do things like this:
62
63 `cd source_root; mkdir build; cd build; meson ..`
64
65 or
66
67 `cd source_root; mkdir build; meson build`
68
69 To compile, cd into your build directory and type `ninja`. To run unit
70 tests, type `ninja test`.
71
72 Install is the same but it can take an extra argument:
73
74 `DESTDIR=/destdir/path ninja install`
75
76 `DESTDIR` can be omitted. If you are installing to system directories,
77 you may need to run this command with sudo.
78
79
80 #### Contributing
81
82 We love code contributions. See the contributing.txt file for
83 details.
84
85
86 #### IRC
87
88 The irc channel for Meson is `#mesonbuild` over at Freenode.
89
90
91 #### Further info
92
93 More information about the Meson build system can be found at the
94 [project's home page](http://mesonbuild.com).
95
96 Meson is a registered trademark of Jussi Pakkanen
97
[end of README.md]
[start of mesonbuild/backend/backends.py]
1 # Copyright 2012-2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os, pickle, re
16 from .. import build
17 from .. import dependencies
18 from .. import mesonlib
19 from .. import mlog
20 from .. import compilers
21 import json
22 import subprocess
23 from ..mesonlib import MesonException, get_compiler_for_source, classify_unity_sources
24 from ..compilers import CompilerArgs
25
26 class CleanTrees:
27 '''
28 Directories outputted by custom targets that have to be manually cleaned
29 because on Linux `ninja clean` only deletes empty directories.
30 '''
31 def __init__(self, build_dir, trees):
32 self.build_dir = build_dir
33 self.trees = trees
34
35 class InstallData:
36 def __init__(self, source_dir, build_dir, prefix, strip_bin):
37 self.source_dir = source_dir
38 self.build_dir = build_dir
39 self.prefix = prefix
40 self.strip_bin = strip_bin
41 self.targets = []
42 self.headers = []
43 self.man = []
44 self.data = []
45 self.po_package_name = ''
46 self.po = []
47 self.install_scripts = []
48 self.install_subdirs = []
49
50 class ExecutableSerialisation:
51 def __init__(self, name, fname, cmd_args, env, is_cross, exe_wrapper,
52 workdir, extra_paths, capture):
53 self.name = name
54 self.fname = fname
55 self.cmd_args = cmd_args
56 self.env = env
57 self.is_cross = is_cross
58 self.exe_runner = exe_wrapper
59 self.workdir = workdir
60 self.extra_paths = extra_paths
61 self.capture = capture
62
63 class TestSerialisation:
64 def __init__(self, name, suite, fname, is_cross, exe_wrapper, is_parallel, cmd_args, env,
65 should_fail, timeout, workdir, extra_paths):
66 self.name = name
67 self.suite = suite
68 self.fname = fname
69 self.is_cross = is_cross
70 self.exe_runner = exe_wrapper
71 self.is_parallel = is_parallel
72 self.cmd_args = cmd_args
73 self.env = env
74 self.should_fail = should_fail
75 self.timeout = timeout
76 self.workdir = workdir
77 self.extra_paths = extra_paths
78
79 # This class contains the basic functionality that is needed by all backends.
80 # Feel free to move stuff in and out of it as you see fit.
81 class Backend:
82 def __init__(self, build):
83 self.build = build
84 self.environment = build.environment
85 self.processed_targets = {}
86 self.build_to_src = os.path.relpath(self.environment.get_source_dir(),
87 self.environment.get_build_dir())
88 for t in self.build.targets:
89 priv_dirname = self.get_target_private_dir_abs(t)
90 os.makedirs(priv_dirname, exist_ok=True)
91
92 def get_target_filename(self, t):
93 if isinstance(t, build.CustomTarget):
94 if len(t.get_outputs()) != 1:
95 mlog.warning('custom_target {!r} has more than one output! '
96 'Using the first one.'.format(t.name))
97 filename = t.get_outputs()[0]
98 else:
99 assert(isinstance(t, build.BuildTarget))
100 filename = t.get_filename()
101 return os.path.join(self.get_target_dir(t), filename)
102
103 def get_target_filename_abs(self, target):
104 return os.path.join(self.environment.get_build_dir(), self.get_target_filename(target))
105
106 def get_target_filename_for_linking(self, target):
107 # On some platforms (msvc for instance), the file that is used for
108 # dynamic linking is not the same as the dynamic library itself. This
109 # file is called an import library, and we want to link against that.
110 # On all other platforms, we link to the library directly.
111 if isinstance(target, build.SharedLibrary):
112 link_lib = target.get_import_filename() or target.get_filename()
113 return os.path.join(self.get_target_dir(target), link_lib)
114 elif isinstance(target, build.StaticLibrary):
115 return os.path.join(self.get_target_dir(target), target.get_filename())
116 raise AssertionError('BUG: Tried to link to something that\'s not a library')
117
118 def get_target_dir(self, target):
119 if self.environment.coredata.get_builtin_option('layout') == 'mirror':
120 dirname = target.get_subdir()
121 else:
122 dirname = 'meson-out'
123 return dirname
124
125 def get_target_private_dir(self, target):
126 dirname = os.path.join(self.get_target_dir(target), target.get_basename() + target.type_suffix())
127 return dirname
128
129 def get_target_private_dir_abs(self, target):
130 dirname = os.path.join(self.environment.get_build_dir(), self.get_target_private_dir(target))
131 return dirname
132
133 def get_target_generated_dir(self, target, gensrc, src):
134 """
135 Takes a BuildTarget, a generator source (CustomTarget or GeneratedList),
136 and a generated source filename.
137 Returns the full path of the generated source relative to the build root
138 """
139 # CustomTarget generators output to the build dir of the CustomTarget
140 if isinstance(gensrc, build.CustomTarget):
141 return os.path.join(self.get_target_dir(gensrc), src)
142 # GeneratedList generators output to the private build directory of the
143 # target that the GeneratedList is used in
144 return os.path.join(self.get_target_private_dir(target), src)
145
146 def get_unity_source_filename(self, target, suffix):
147 return target.name + '-unity.' + suffix
148
149 def generate_unity_files(self, target, unity_src):
150 abs_files = []
151 result = []
152 compsrcs = classify_unity_sources(target.compilers.values(), unity_src)
153
154 def init_language_file(suffix):
155 outfilename = os.path.join(self.get_target_private_dir_abs(target),
156 self.get_unity_source_filename(target, suffix))
157 outfileabs = os.path.join(self.environment.get_build_dir(),
158 outfilename)
159 outfileabs_tmp = outfileabs + '.tmp'
160 abs_files.append(outfileabs)
161 outfileabs_tmp_dir = os.path.dirname(outfileabs_tmp)
162 if not os.path.exists(outfileabs_tmp_dir):
163 os.makedirs(outfileabs_tmp_dir)
164 result.append(outfilename)
165 return open(outfileabs_tmp, 'w')
166
167 # For each language, generate a unity source file and return the list
168 for comp, srcs in compsrcs.items():
169 with init_language_file(comp.get_default_suffix()) as ofile:
170 for src in srcs:
171 ofile.write('#include<%s>\n' % src)
172 [mesonlib.replace_if_different(x, x + '.tmp') for x in abs_files]
173 return result
174
175 def relpath(self, todir, fromdir):
176 return os.path.relpath(os.path.join('dummyprefixdir', todir),
177 os.path.join('dummyprefixdir', fromdir))
178
179 def flatten_object_list(self, target, proj_dir_to_build_root=''):
180 obj_list = []
181 for obj in target.get_objects():
182 if isinstance(obj, str):
183 o = os.path.join(proj_dir_to_build_root,
184 self.build_to_src, target.get_subdir(), obj)
185 obj_list.append(o)
186 elif isinstance(obj, mesonlib.File):
187 obj_list.append(obj.rel_to_builddir(self.build_to_src))
188 elif isinstance(obj, build.ExtractedObjects):
189 obj_list += self.determine_ext_objs(obj, proj_dir_to_build_root)
190 else:
191 raise MesonException('Unknown data type in object list.')
192 return obj_list
193
194 def serialise_executable(self, exe, cmd_args, workdir, env={},
195 capture=None):
196 import hashlib
197 # Can't just use exe.name here; it will likely be run more than once
198 if isinstance(exe, (dependencies.ExternalProgram,
199 build.BuildTarget, build.CustomTarget)):
200 basename = exe.name
201 else:
202 basename = os.path.basename(exe)
203 # Take a digest of the cmd args, env, workdir, and capture. This avoids
204 # collisions and also makes the name deterministic over regenerations
205 # which avoids a rebuild by Ninja because the cmdline stays the same.
206 data = bytes(str(sorted(env.items())) + str(cmd_args) + str(workdir) + str(capture),
207 encoding='utf-8')
208 digest = hashlib.sha1(data).hexdigest()
209 scratch_file = 'meson_exe_{0}_{1}.dat'.format(basename, digest)
210 exe_data = os.path.join(self.environment.get_scratch_dir(), scratch_file)
211 with open(exe_data, 'wb') as f:
212 if isinstance(exe, dependencies.ExternalProgram):
213 exe_cmd = exe.get_command()
214 exe_needs_wrapper = False
215 elif isinstance(exe, (build.BuildTarget, build.CustomTarget)):
216 exe_cmd = [self.get_target_filename_abs(exe)]
217 exe_needs_wrapper = exe.is_cross
218 else:
219 exe_cmd = [exe]
220 exe_needs_wrapper = False
221 is_cross = exe_needs_wrapper and \
222 self.environment.is_cross_build() and \
223 self.environment.cross_info.need_cross_compiler() and \
224 self.environment.cross_info.need_exe_wrapper()
225 if is_cross:
226 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
227 else:
228 exe_wrapper = None
229 if mesonlib.is_windows():
230 extra_paths = self.determine_windows_extra_paths(exe)
231 else:
232 extra_paths = []
233 es = ExecutableSerialisation(basename, exe_cmd, cmd_args, env,
234 is_cross, exe_wrapper, workdir,
235 extra_paths, capture)
236 pickle.dump(es, f)
237 return exe_data
238
239 def serialise_tests(self):
240 test_data = os.path.join(self.environment.get_scratch_dir(), 'meson_test_setup.dat')
241 with open(test_data, 'wb') as datafile:
242 self.write_test_file(datafile)
243 benchmark_data = os.path.join(self.environment.get_scratch_dir(), 'meson_benchmark_setup.dat')
244 with open(benchmark_data, 'wb') as datafile:
245 self.write_benchmark_file(datafile)
246 return test_data, benchmark_data
247
248 def determine_linker(self, target):
249 '''
250 If we're building a static library, there is only one static linker.
251 Otherwise, we query the target for the dynamic linker.
252 '''
253 if isinstance(target, build.StaticLibrary):
254 if target.is_cross:
255 return self.build.static_cross_linker
256 else:
257 return self.build.static_linker
258 l = target.get_clike_dynamic_linker()
259 if not l:
260 m = "Couldn't determine linker for target {!r}"
261 raise MesonException(m.format(target.name))
262 return l
263
264 def object_filename_from_source(self, target, source):
265 if isinstance(source, mesonlib.File):
266 source = source.fname
267 # foo.vala files compile down to foo.c and then foo.c.o, not foo.vala.o
268 if source.endswith('.vala'):
269 source = os.path.join(self.get_target_private_dir(target), source[:-5] + '.c')
270 return source.replace('/', '_').replace('\\', '_') + '.' + self.environment.get_object_suffix()
271
272 def determine_ext_objs(self, extobj, proj_dir_to_build_root):
273 result = []
274 targetdir = self.get_target_private_dir(extobj.target)
275 # With unity builds, there's just one object that contains all the
276 # sources, and we only support extracting all the objects in this mode,
277 # so just return that.
278 if self.environment.coredata.get_builtin_option('unity'):
279 comp = get_compiler_for_source(extobj.target.compilers.values(),
280 extobj.srclist[0])
281 # The unity object name uses the full absolute path of the source file
282 osrc = os.path.join(self.get_target_private_dir_abs(extobj.target),
283 self.get_unity_source_filename(extobj.target,
284 comp.get_default_suffix()))
285 objname = self.object_filename_from_source(extobj.target, osrc)
286 objpath = os.path.join(proj_dir_to_build_root, targetdir, objname)
287 return [objpath]
288 for osrc in extobj.srclist:
289 objname = self.object_filename_from_source(extobj.target, osrc)
290 objpath = os.path.join(proj_dir_to_build_root, targetdir, objname)
291 result.append(objpath)
292 return result
293
294 def get_pch_include_args(self, compiler, target):
295 args = []
296 pchpath = self.get_target_private_dir(target)
297 includeargs = compiler.get_include_args(pchpath, False)
298 for lang in ['c', 'cpp']:
299 p = target.get_pch(lang)
300 if len(p) == 0:
301 continue
302 if compiler.can_compile(p[-1]):
303 header = p[0]
304 args += compiler.get_pch_use_args(pchpath, header)
305 if len(args) > 0:
306 args = includeargs + args
307 return args
308
309 @staticmethod
310 def escape_extra_args(compiler, args):
311 # No extra escaping/quoting needed when not running on Windows
312 if not mesonlib.is_windows():
313 return args
314 extra_args = []
315 # Compiler-specific escaping is needed for -D args but not for any others
316 if compiler.get_id() == 'msvc':
317 # MSVC needs escaping when a -D argument ends in \ or \"
318 for arg in args:
319 if arg.startswith('-D') or arg.startswith('/D'):
320 # Without extra escaping for these two, the next character
321 # gets eaten
322 if arg.endswith('\\'):
323 arg += '\\'
324 elif arg.endswith('\\"'):
325 arg = arg[:-2] + '\\\\"'
326 extra_args.append(arg)
327 else:
328 # MinGW GCC needs all backslashes in defines to be doubly-escaped
329 # FIXME: Not sure about Cygwin or Clang
330 for arg in args:
331 if arg.startswith('-D') or arg.startswith('/D'):
332 arg = arg.replace('\\', '\\\\')
333 extra_args.append(arg)
334 return extra_args
335
336 def generate_basic_compiler_args(self, target, compiler, no_warn_args=False):
337 # Create an empty commands list, and start adding arguments from
338 # various sources in the order in which they must override each other
339 # starting from hard-coded defaults followed by build options and so on.
340 commands = CompilerArgs(compiler)
341 # First, the trivial ones that are impossible to override.
342 #
343 # Add -nostdinc/-nostdinc++ if needed; can't be overriden
344 commands += self.get_cross_stdlib_args(target, compiler)
345 # Add things like /NOLOGO or -pipe; usually can't be overriden
346 commands += compiler.get_always_args()
347 # Only add warning-flags by default if the buildtype enables it, and if
348 # we weren't explicitly asked to not emit warnings (for Vala, f.ex)
349 if no_warn_args:
350 commands += compiler.get_no_warn_args()
351 elif self.environment.coredata.get_builtin_option('buildtype') != 'plain':
352 commands += compiler.get_warn_args(self.environment.coredata.get_builtin_option('warning_level'))
353 # Add -Werror if werror=true is set in the build options set on the
354 # command-line or default_options inside project(). This only sets the
355 # action to be done for warnings if/when they are emitted, so it's ok
356 # to set it after get_no_warn_args() or get_warn_args().
357 if self.environment.coredata.get_builtin_option('werror'):
358 commands += compiler.get_werror_args()
359 # Add compile args for c_* or cpp_* build options set on the
360 # command-line or default_options inside project().
361 commands += compiler.get_option_compile_args(self.environment.coredata.compiler_options)
362 # Add buildtype args: optimization level, debugging, etc.
363 commands += compiler.get_buildtype_args(self.environment.coredata.get_builtin_option('buildtype'))
364 # Add compile args added using add_project_arguments()
365 commands += self.build.get_project_args(compiler, target.subproject)
366 # Add compile args added using add_global_arguments()
367 # These override per-project arguments
368 commands += self.build.get_global_args(compiler)
369 # Compile args added from the env: CFLAGS/CXXFLAGS, etc. We want these
370 # to override all the defaults, but not the per-target compile args.
371 commands += self.environment.coredata.external_args[compiler.get_language()]
372 # Always set -fPIC for shared libraries
373 if isinstance(target, build.SharedLibrary):
374 commands += compiler.get_pic_args()
375 # Set -fPIC for static libraries by default unless explicitly disabled
376 if isinstance(target, build.StaticLibrary) and target.pic:
377 commands += compiler.get_pic_args()
378 # Add compile args needed to find external dependencies. Link args are
379 # added while generating the link command.
380 # NOTE: We must preserve the order in which external deps are
381 # specified, so we reverse the list before iterating over it.
382 for dep in reversed(target.get_external_deps()):
383 commands += dep.get_compile_args()
384 # Qt needs -fPIC for executables
385 # XXX: We should move to -fPIC for all executables
386 if isinstance(target, build.Executable):
387 commands += dep.get_exe_args(compiler)
388 # For 'automagic' deps: Boost and GTest. Also dependency('threads').
389 # pkg-config puts the thread flags itself via `Cflags:`
390 if dep.need_threads():
391 commands += compiler.thread_flags()
392 # Fortran requires extra include directives.
393 if compiler.language == 'fortran':
394 for lt in target.link_targets:
395 priv_dir = os.path.join(self.get_target_dir(lt), lt.get_basename() + lt.type_suffix())
396 incflag = compiler.get_include_args(priv_dir, False)
397 commands += incflag
398 return commands
399
400 def build_target_link_arguments(self, compiler, deps):
401 args = []
402 for d in deps:
403 if not isinstance(d, (build.StaticLibrary, build.SharedLibrary)):
404 raise RuntimeError('Tried to link with a non-library target "%s".' % d.get_basename())
405 if isinstance(compiler, compilers.LLVMDCompiler):
406 args += ['-L' + self.get_target_filename_for_linking(d)]
407 else:
408 args.append(self.get_target_filename_for_linking(d))
409 # If you have executable e that links to shared lib s1 that links to shared library s2
410 # you have to specify s2 as well as s1 when linking e even if e does not directly use
411 # s2. Gcc handles this case fine but Clang does not for some reason. Thus we need to
412 # explictly specify all libraries every time.
413 args += self.build_target_link_arguments(compiler, d.get_dependencies())
414 return args
415
416 def determine_windows_extra_paths(self, target):
417 '''On Windows there is no such thing as an rpath.
418 We must determine all locations of DLLs that this exe
419 links to and return them so they can be used in unit
420 tests.'''
421 if not isinstance(target, build.Executable):
422 return []
423 prospectives = target.get_transitive_link_deps()
424 result = []
425 for ld in prospectives:
426 if ld == '' or ld == '.':
427 continue
428 dirseg = os.path.join(self.environment.get_build_dir(), self.get_target_dir(ld))
429 if dirseg not in result:
430 result.append(dirseg)
431 return result
432
433 def write_benchmark_file(self, datafile):
434 self.write_test_serialisation(self.build.get_benchmarks(), datafile)
435
436 def write_test_file(self, datafile):
437 self.write_test_serialisation(self.build.get_tests(), datafile)
438
439 def write_test_serialisation(self, tests, datafile):
440 arr = []
441 for t in tests:
442 exe = t.get_exe()
443 if isinstance(exe, dependencies.ExternalProgram):
444 cmd = exe.get_command()
445 else:
446 cmd = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(t.get_exe()))]
447 is_cross = self.environment.is_cross_build() and \
448 self.environment.cross_info.need_cross_compiler() and \
449 self.environment.cross_info.need_exe_wrapper()
450 if is_cross:
451 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
452 else:
453 exe_wrapper = None
454 if mesonlib.is_windows():
455 extra_paths = self.determine_windows_extra_paths(exe)
456 else:
457 extra_paths = []
458 cmd_args = []
459 for a in t.cmd_args:
460 if hasattr(a, 'held_object'):
461 a = a.held_object
462 if isinstance(a, mesonlib.File):
463 a = os.path.join(self.environment.get_build_dir(), a.rel_to_builddir(self.build_to_src))
464 cmd_args.append(a)
465 elif isinstance(a, str):
466 cmd_args.append(a)
467 elif isinstance(a, build.Target):
468 cmd_args.append(self.get_target_filename(a))
469 else:
470 raise MesonException('Bad object in test command.')
471 ts = TestSerialisation(t.get_name(), t.suite, cmd, is_cross, exe_wrapper,
472 t.is_parallel, cmd_args, t.env, t.should_fail,
473 t.timeout, t.workdir, extra_paths)
474 arr.append(ts)
475 pickle.dump(arr, datafile)
476
477 def generate_depmf_install(self, d):
478 if self.build.dep_manifest_name is None:
479 return
480 ifilename = os.path.join(self.environment.get_build_dir(), 'depmf.json')
481 ofilename = os.path.join(self.environment.get_prefix(), self.build.dep_manifest_name)
482 mfobj = {'type': 'dependency manifest',
483 'version': '1.0'}
484 mfobj['projects'] = self.build.dep_manifest
485 with open(ifilename, 'w') as f:
486 f.write(json.dumps(mfobj))
487 # Copy file from, to, and with mode unchanged
488 d.data.append([ifilename, ofilename, None])
489
490 def get_regen_filelist(self):
491 '''List of all files whose alteration means that the build
492 definition needs to be regenerated.'''
493 deps = [os.path.join(self.build_to_src, df)
494 for df in self.interpreter.get_build_def_files()]
495 if self.environment.is_cross_build():
496 deps.append(os.path.join(self.build_to_src,
497 self.environment.coredata.cross_file))
498 deps.append('meson-private/coredata.dat')
499 if os.path.exists(os.path.join(self.environment.get_source_dir(), 'meson_options.txt')):
500 deps.append(os.path.join(self.build_to_src, 'meson_options.txt'))
501 for sp in self.build.subprojects.keys():
502 fname = os.path.join(self.environment.get_source_dir(), sp, 'meson_options.txt')
503 if os.path.isfile(fname):
504 deps.append(os.path.join(self.build_to_src, sp, 'meson_options.txt'))
505 return deps
506
507 def exe_object_to_cmd_array(self, exe):
508 if self.environment.is_cross_build() and \
509 self.environment.cross_info.need_exe_wrapper() and \
510 isinstance(exe, build.BuildTarget) and exe.is_cross:
511 if 'exe_wrapper' not in self.environment.cross_info.config['binaries']:
512 s = 'Can not use target %s as a generator because it is cross-built\n'
513 s += 'and no exe wrapper is defined. You might want to set it to native instead.'
514 s = s % exe.name
515 raise MesonException(s)
516 if isinstance(exe, build.BuildTarget):
517 exe_arr = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(exe))]
518 else:
519 exe_arr = exe.get_command()
520 return exe_arr
521
522 def replace_extra_args(self, args, genlist):
523 final_args = []
524 for a in args:
525 if a == '@EXTRA_ARGS@':
526 final_args += genlist.get_extra_args()
527 else:
528 final_args.append(a)
529 return final_args
530
531 def replace_outputs(self, args, private_dir, output_list):
532 newargs = []
533 regex = re.compile('@OUTPUT(\d+)@')
534 for arg in args:
535 m = regex.search(arg)
536 while m is not None:
537 index = int(m.group(1))
538 src = '@OUTPUT%d@' % index
539 arg = arg.replace(src, os.path.join(private_dir, output_list[index]))
540 m = regex.search(arg)
541 newargs.append(arg)
542 return newargs
543
544 def get_build_by_default_targets(self):
545 result = {}
546 # Get all build and custom targets that must be built by default
547 for name, t in self.build.get_targets().items():
548 if t.build_by_default or t.install or t.build_always:
549 result[name] = t
550 # Get all targets used as test executables and arguments. These must
551 # also be built by default. XXX: Sometime in the future these should be
552 # built only before running tests.
553 for t in self.build.get_tests():
554 exe = t.exe
555 if hasattr(exe, 'held_object'):
556 exe = exe.held_object
557 if isinstance(exe, (build.CustomTarget, build.BuildTarget)):
558 result[exe.get_id()] = exe
559 for arg in t.cmd_args:
560 if hasattr(arg, 'held_object'):
561 arg = arg.held_object
562 if not isinstance(arg, (build.CustomTarget, build.BuildTarget)):
563 continue
564 result[arg.get_id()] = arg
565 return result
566
567 def get_custom_target_provided_libraries(self, target):
568 libs = []
569 for t in target.get_generated_sources():
570 if not isinstance(t, build.CustomTarget):
571 continue
572 for f in t.output:
573 if self.environment.is_library(f):
574 libs.append(os.path.join(self.get_target_dir(t), f))
575 return libs
576
577 def get_custom_target_sources(self, target):
578 '''
579 Custom target sources can be of various object types; strings, File,
580 BuildTarget, even other CustomTargets.
581 Returns the path to them relative to the build root directory.
582 '''
583 srcs = []
584 for i in target.get_sources():
585 if hasattr(i, 'held_object'):
586 i = i.held_object
587 if isinstance(i, str):
588 fname = [os.path.join(self.build_to_src, target.subdir, i)]
589 elif isinstance(i, build.BuildTarget):
590 fname = [self.get_target_filename(i)]
591 elif isinstance(i, build.CustomTarget):
592 fname = [os.path.join(self.get_target_dir(i), p) for p in i.get_outputs()]
593 elif isinstance(i, build.GeneratedList):
594 fname = [os.path.join(self.get_target_private_dir(target), p) for p in i.get_outputs()]
595 else:
596 fname = [i.rel_to_builddir(self.build_to_src)]
597 if target.absolute_paths:
598 fname = [os.path.join(self.environment.get_build_dir(), f) for f in fname]
599 srcs += fname
600 return srcs
601
602 def eval_custom_target_command(self, target, absolute_outputs=False):
603 # We want the outputs to be absolute only when using the VS backend
604 # XXX: Maybe allow the vs backend to use relative paths too?
605 source_root = self.build_to_src
606 build_root = '.'
607 outdir = self.get_target_dir(target)
608 if absolute_outputs:
609 source_root = self.environment.get_source_dir()
610 build_root = self.environment.get_source_dir()
611 outdir = os.path.join(self.environment.get_build_dir(), outdir)
612 outputs = []
613 for i in target.output:
614 outputs.append(os.path.join(outdir, i))
615 inputs = self.get_custom_target_sources(target)
616 # Evaluate the command list
617 cmd = []
618 for i in target.command:
619 if isinstance(i, build.Executable):
620 cmd += self.exe_object_to_cmd_array(i)
621 continue
622 elif isinstance(i, build.CustomTarget):
623 # GIR scanner will attempt to execute this binary but
624 # it assumes that it is in path, so always give it a full path.
625 tmp = i.get_outputs()[0]
626 i = os.path.join(self.get_target_dir(i), tmp)
627 elif isinstance(i, mesonlib.File):
628 i = i.rel_to_builddir(self.build_to_src)
629 if target.absolute_paths:
630 i = os.path.join(self.environment.get_build_dir(), i)
631 # FIXME: str types are blindly added ignoring 'target.absolute_paths'
632 # because we can't know if they refer to a file or just a string
633 elif not isinstance(i, str):
634 err_msg = 'Argument {0} is of unknown type {1}'
635 raise RuntimeError(err_msg.format(str(i), str(type(i))))
636 elif '@SOURCE_ROOT@' in i:
637 i = i.replace('@SOURCE_ROOT@', source_root)
638 elif '@BUILD_ROOT@' in i:
639 i = i.replace('@BUILD_ROOT@', build_root)
640 elif '@DEPFILE@' in i:
641 if target.depfile is None:
642 msg = 'Custom target {!r} has @DEPFILE@ but no depfile ' \
643 'keyword argument.'.format(target.name)
644 raise MesonException(msg)
645 dfilename = os.path.join(outdir, target.depfile)
646 i = i.replace('@DEPFILE@', dfilename)
647 elif '@PRIVATE_OUTDIR_' in i:
648 match = re.search('@PRIVATE_OUTDIR_(ABS_)?([^/\s*]*)@', i)
649 if not match:
650 msg = 'Custom target {!r} has an invalid argument {!r}' \
651 ''.format(target.name, i)
652 raise MesonException(msg)
653 source = match.group(0)
654 if match.group(1) is None and not target.absolute_paths:
655 lead_dir = ''
656 else:
657 lead_dir = self.environment.get_build_dir()
658 i = i.replace(source, os.path.join(lead_dir, outdir))
659 cmd.append(i)
660 # Substitute the rest of the template strings
661 values = mesonlib.get_filenames_templates_dict(inputs, outputs)
662 cmd = mesonlib.substitute_values(cmd, values)
663 # This should not be necessary but removing it breaks
664 # building GStreamer on Windows. The underlying issue
665 # is problems with quoting backslashes on Windows
666 # which is the seventh circle of hell. The downside is
667 # that this breaks custom targets whose command lines
668 # have backslashes. If you try to fix this be sure to
669 # check that it does not break GST.
670 #
671 # The bug causes file paths such as c:\foo to get escaped
672 # into c:\\foo.
673 #
674 # Unfortunately we have not been able to come up with an
675 # isolated test case for this so unless you manage to come up
676 # with one, the only way is to test the building with Gst's
677 # setup. Note this in your MR or ping us and we will get it
678 # fixed.
679 #
680 # https://github.com/mesonbuild/meson/pull/737
681 cmd = [i.replace('\\', '/') for i in cmd]
682 return inputs, outputs, cmd
683
684 def run_postconf_scripts(self):
685 env = {'MESON_SOURCE_ROOT': self.environment.get_source_dir(),
686 'MESON_BUILD_ROOT': self.environment.get_build_dir(),
687 }
688 child_env = os.environ.copy()
689 child_env.update(env)
690
691 for s in self.build.postconf_scripts:
692 cmd = s['exe'] + s['args']
693 subprocess.check_call(cmd, env=child_env)
694
[end of mesonbuild/backend/backends.py]
[start of mesonbuild/coredata.py]
1 # Copyright 2012-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import pickle, os, uuid
16 from pathlib import PurePath
17 from .mesonlib import MesonException, commonpath
18 from .mesonlib import default_libdir, default_libexecdir, default_prefix
19
20 version = '0.40.0.dev1'
21 backendlist = ['ninja', 'vs', 'vs2010', 'vs2015', 'vs2017', 'xcode']
22
23 class UserOption:
24 def __init__(self, name, description, choices):
25 super().__init__()
26 self.name = name
27 self.choices = choices
28 self.description = description
29
30 def parse_string(self, valuestring):
31 return valuestring
32
33 class UserStringOption(UserOption):
34 def __init__(self, name, description, value, choices=None):
35 super().__init__(name, description, choices)
36 self.set_value(value)
37
38 def validate(self, value):
39 if not isinstance(value, str):
40 raise MesonException('Value "%s" for string option "%s" is not a string.' % (str(value), self.name))
41
42 def set_value(self, newvalue):
43 self.validate(newvalue)
44 self.value = newvalue
45
46 class UserBooleanOption(UserOption):
47 def __init__(self, name, description, value):
48 super().__init__(name, description, [True, False])
49 self.set_value(value)
50
51 def tobool(self, thing):
52 if isinstance(thing, bool):
53 return thing
54 if thing.lower() == 'true':
55 return True
56 if thing.lower() == 'false':
57 return False
58 raise MesonException('Value %s is not boolean (true or false).' % thing)
59
60 def set_value(self, newvalue):
61 self.value = self.tobool(newvalue)
62
63 def parse_string(self, valuestring):
64 if valuestring == 'false':
65 return False
66 if valuestring == 'true':
67 return True
68 raise MesonException('Value "%s" for boolean option "%s" is not a boolean.' % (valuestring, self.name))
69
70 def __bool__(self):
71 return self.value
72
73 class UserComboOption(UserOption):
74 def __init__(self, name, description, choices, value):
75 super().__init__(name, description, choices)
76 if not isinstance(self.choices, list):
77 raise MesonException('Combo choices must be an array.')
78 for i in self.choices:
79 if not isinstance(i, str):
80 raise MesonException('Combo choice elements must be strings.')
81 self.set_value(value)
82
83 def set_value(self, newvalue):
84 if newvalue not in self.choices:
85 optionsstring = ', '.join(['"%s"' % (item,) for item in self.choices])
86 raise MesonException('Value "%s" for combo option "%s" is not one of the choices. Possible choices are: %s.' % (newvalue, self.name, optionsstring))
87 self.value = newvalue
88
89 class UserStringArrayOption(UserOption):
90 def __init__(self, name, description, value, **kwargs):
91 super().__init__(name, description, kwargs.get('choices', []))
92 self.set_value(value)
93
94 def set_value(self, newvalue):
95 if isinstance(newvalue, str):
96 if not newvalue.startswith('['):
97 raise MesonException('Valuestring does not define an array: ' + newvalue)
98 newvalue = eval(newvalue, {}, {}) # Yes, it is unsafe.
99 if not isinstance(newvalue, list):
100 raise MesonException('"{0}" should be a string array, but it is not'.format(str(newvalue)))
101 for i in newvalue:
102 if not isinstance(i, str):
103 raise MesonException('String array element "{0}" is not a string.'.format(str(newvalue)))
104 self.value = newvalue
105
106 # This class contains all data that must persist over multiple
107 # invocations of Meson. It is roughly the same thing as
108 # cmakecache.
109
110 class CoreData:
111
112 def __init__(self, options):
113 self.guid = str(uuid.uuid4()).upper()
114 self.test_guid = str(uuid.uuid4()).upper()
115 self.regen_guid = str(uuid.uuid4()).upper()
116 self.target_guids = {}
117 self.version = version
118 self.init_builtins(options)
119 self.user_options = {}
120 self.compiler_options = {}
121 self.base_options = {}
122 # These two, external_*args, are set via env vars CFLAGS, LDFLAGS, etc
123 # but only when not cross-compiling.
124 self.external_args = {}
125 self.external_link_args = {}
126 if options.cross_file is not None:
127 self.cross_file = os.path.join(os.getcwd(), options.cross_file)
128 else:
129 self.cross_file = None
130 self.wrap_mode = options.wrap_mode
131 self.compilers = {}
132 self.cross_compilers = {}
133 self.deps = {}
134 self.modules = {}
135 # Only to print a warning if it changes between Meson invocations.
136 self.pkgconf_envvar = os.environ.get('PKG_CONFIG_PATH', '')
137
138 def sanitize_prefix(self, prefix):
139 if not os.path.isabs(prefix):
140 raise MesonException('prefix value {!r} must be an absolute path'
141 ''.format(prefix))
142 if prefix.endswith('/') or prefix.endswith('\\'):
143 # On Windows we need to preserve the trailing slash if the
144 # string is of type 'C:\' because 'C:' is not an absolute path.
145 if len(prefix) == 3 and prefix[1] == ':':
146 pass
147 else:
148 prefix = prefix[:-1]
149 return prefix
150
151 def sanitize_dir_option_value(self, prefix, option, value):
152 '''
153 If the option is an installation directory option and the value is an
154 absolute path, check that it resides within prefix and return the value
155 as a path relative to the prefix.
156
157 This way everyone can do f.ex, get_option('libdir') and be sure to get
158 the library directory relative to prefix.
159 '''
160 if option.endswith('dir') and os.path.isabs(value) and \
161 option not in builtin_dir_noprefix_options:
162 # Value must be a subdir of the prefix
163 # commonpath will always return a path in the native format, so we
164 # must use pathlib.PurePath to do the same conversion before
165 # comparing.
166 if commonpath([value, prefix]) != str(PurePath(prefix)):
167 m = 'The value of the {!r} option is {!r} which must be a ' \
168 'subdir of the prefix {!r}.\nNote that if you pass a ' \
169 'relative path, it is assumed to be a subdir of prefix.'
170 raise MesonException(m.format(option, value, prefix))
171 # Convert path to be relative to prefix
172 skip = len(prefix) + 1
173 value = value[skip:]
174 return value
175
176 def init_builtins(self, options):
177 self.builtins = {}
178 # Sanitize prefix
179 options.prefix = self.sanitize_prefix(options.prefix)
180 # Initialize other builtin options
181 for key in get_builtin_options():
182 if hasattr(options, key):
183 value = getattr(options, key)
184 value = self.sanitize_dir_option_value(options.prefix, key, value)
185 setattr(options, key, value)
186 else:
187 value = get_builtin_option_default(key)
188 args = [key] + builtin_options[key][1:-1] + [value]
189 self.builtins[key] = builtin_options[key][0](*args)
190
191 def get_builtin_option(self, optname):
192 if optname in self.builtins:
193 return self.builtins[optname].value
194 raise RuntimeError('Tried to get unknown builtin option %s.' % optname)
195
196 def set_builtin_option(self, optname, value):
197 if optname == 'prefix':
198 value = self.sanitize_prefix(value)
199 elif optname in self.builtins:
200 prefix = self.builtins['prefix'].value
201 value = self.sanitize_dir_option_value(prefix, optname, value)
202 else:
203 raise RuntimeError('Tried to set unknown builtin option %s.' % optname)
204 self.builtins[optname].set_value(value)
205
206 def load(filename):
207 load_fail_msg = 'Coredata file {!r} is corrupted. Try with a fresh build tree.'.format(filename)
208 try:
209 with open(filename, 'rb') as f:
210 obj = pickle.load(f)
211 except pickle.UnpicklingError:
212 raise MesonException(load_fail_msg)
213 if not isinstance(obj, CoreData):
214 raise MesonException(load_fail_msg)
215 if obj.version != version:
216 raise MesonException('Build directory has been generated with Meson version %s, which is incompatible with current version %s.\nPlease delete this build directory AND create a new one.' %
217 (obj.version, version))
218 return obj
219
220 def save(obj, filename):
221 if obj.version != version:
222 raise MesonException('Fatal version mismatch corruption.')
223 with open(filename, 'wb') as f:
224 pickle.dump(obj, f)
225
226 def get_builtin_options():
227 return list(builtin_options.keys())
228
229 def is_builtin_option(optname):
230 return optname in get_builtin_options()
231
232 def get_builtin_option_choices(optname):
233 if is_builtin_option(optname):
234 if builtin_options[optname][0] == UserStringOption:
235 return None
236 elif builtin_options[optname][0] == UserBooleanOption:
237 return [True, False]
238 else:
239 return builtin_options[optname][2]
240 else:
241 raise RuntimeError('Tried to get the supported values for an unknown builtin option \'%s\'.' % optname)
242
243 def get_builtin_option_description(optname):
244 if is_builtin_option(optname):
245 return builtin_options[optname][1]
246 else:
247 raise RuntimeError('Tried to get the description for an unknown builtin option \'%s\'.' % optname)
248
249 def get_builtin_option_default(optname):
250 if is_builtin_option(optname):
251 o = builtin_options[optname]
252 if o[0] == UserComboOption:
253 return o[3]
254 return o[2]
255 else:
256 raise RuntimeError('Tried to get the default value for an unknown builtin option \'%s\'.' % optname)
257
258 builtin_options = {
259 'buildtype': [UserComboOption, 'Build type to use.', ['plain', 'debug', 'debugoptimized', 'release', 'minsize'], 'debug'],
260 'strip': [UserBooleanOption, 'Strip targets on install.', False],
261 'unity': [UserBooleanOption, 'Unity build.', False],
262 'prefix': [UserStringOption, 'Installation prefix.', default_prefix()],
263 'libdir': [UserStringOption, 'Library directory.', default_libdir()],
264 'libexecdir': [UserStringOption, 'Library executable directory.', default_libexecdir()],
265 'bindir': [UserStringOption, 'Executable directory.', 'bin'],
266 'sbindir': [UserStringOption, 'System executable directory.', 'sbin'],
267 'includedir': [UserStringOption, 'Header file directory.', 'include'],
268 'datadir': [UserStringOption, 'Data file directory.', 'share'],
269 'mandir': [UserStringOption, 'Manual page directory.', 'share/man'],
270 'infodir': [UserStringOption, 'Info page directory.', 'share/info'],
271 'localedir': [UserStringOption, 'Locale data directory.', 'share/locale'],
272 # sysconfdir, localstatedir and sharedstatedir are a bit special. These defaults to ${prefix}/etc,
273 # ${prefix}/var and ${prefix}/com but nobody uses that. Instead they always set it
274 # manually to /etc, /var and /var/lib. This default values is thus pointless and not really used
275 # but we set it to this for consistency with other systems.
276 #
277 # Projects installing to sysconfdir, localstatedir or sharedstatedir probably want
278 # to set the following in project():
279 #
280 # default_options : ['sysconfdir=/etc', 'localstatedir=/var', 'sharedstatedir=/var/lib']
281 'sysconfdir': [UserStringOption, 'Sysconf data directory.', 'etc'],
282 'localstatedir': [UserStringOption, 'Localstate data directory.', 'var'],
283 'sharedstatedir': [UserStringOption, 'Architecture-independent data directory.', 'com'],
284 'werror': [UserBooleanOption, 'Treat warnings as errors.', False],
285 'warning_level': [UserComboOption, 'Compiler warning level to use.', ['1', '2', '3'], '1'],
286 'layout': [UserComboOption, 'Build directory layout.', ['mirror', 'flat'], 'mirror'],
287 'default_library': [UserComboOption, 'Default library type.', ['shared', 'static'], 'shared'],
288 'backend': [UserComboOption, 'Backend to use.', backendlist, 'ninja'],
289 'stdsplit': [UserBooleanOption, 'Split stdout and stderr in test logs.', True],
290 'errorlogs': [UserBooleanOption, "Whether to print the logs from failing tests.", True],
291 }
292
293 # Installation directories that can reside in a path outside of the prefix
294 builtin_dir_noprefix_options = {'sysconfdir', 'localstatedir', 'sharedstatedir'}
295
296 forbidden_target_names = {'clean': None,
297 'clean-ctlist': None,
298 'clean-gcno': None,
299 'clean-gcda': None,
300 'coverage-text': None,
301 'coverage-xml': None,
302 'coverage-html': None,
303 'phony': None,
304 'PHONY': None,
305 'all': None,
306 'test': None,
307 'benchmark': None,
308 'install': None,
309 'uninstall': None,
310 'build.ninja': None,
311 'scan-build': None,
312 'reconfigure': None,
313 }
314
[end of mesonbuild/coredata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
fa107031249e7a45c2b7187e85ab5e06ca5e4a9d
|
Need dependency(..., headers_only : true) and so on for use with Valgrind
Similar to #726 but not the same.
Use case: I want to detect valgrind via pkg-config and pass the right includes for valgrind/valgrind.h to my build, but not link to the valgrind libs (which fails because they're static libs and not built with -fPIC, but is also unnecessary because everything we need is in the headers via magic macros).
I know of other use cases where this is needed, e.g. where glib headers are used for portable typedefs but no linking is done.
`cflags = run_command('pkg-config', '--cflags', 'dep').stdout().strip().split()` is not cross-compile friendly, so doesn't seem like a good solution.
|
I'm wondering if we actually need these getters, or if something like this wouldn't be better:
`valgrind_dep = dependency('valgrind', headers_only : true)`
I actually really like the idea of a `headers_only` kwarg here, and it's quite easy to implement too.
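Purely as an illustration of that suggestion (and not the approach taken in the patch below, which adds a dedicated `ValgrindDependency` subclass instead), here is a hypothetical sketch of how a `headers_only` kwarg could be honoured on top of `PkgConfigDependency`; the import path and keyword name are assumptions based on the mesonbuild/dependencies.py of that era.
```python
# Hypothetical sketch only -- not the approach taken in the patch below,
# which adds a dedicated ValgrindDependency subclass instead.
# Assumes PkgConfigDependency from mesonbuild/dependencies.py (Meson ~0.40).
from mesonbuild.dependencies import PkgConfigDependency

class HeadersOnlyPkgConfigDependency(PkgConfigDependency):
    """Expose a pkg-config dependency's compile flags but drop its link flags."""

    def __init__(self, name, environment, kwargs):
        super().__init__(name, environment, kwargs)
        self.headers_only = kwargs.get('headers_only', False)

    def get_link_args(self):
        # With headers_only, behave like a header-only dependency: compile
        # args (e.g. -I for valgrind/valgrind.h) still apply, link args don't.
        if self.headers_only:
            return []
        return super().get_link_args()
```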
|
2017-03-29T18:25:04Z
|
<patch>
diff --git a/mesonbuild/dependencies.py b/mesonbuild/dependencies.py
--- a/mesonbuild/dependencies.py
+++ b/mesonbuild/dependencies.py
@@ -1483,6 +1483,14 @@ def get_link_args(self):
def get_version(self):
return self.version
+class ValgrindDependency(PkgConfigDependency):
+
+ def __init__(self, environment, kwargs):
+ PkgConfigDependency.__init__(self, 'valgrind', environment, kwargs)
+
+ def get_link_args(self):
+ return []
+
def get_dep_identifier(name, kwargs):
elements = [name]
modlist = kwargs.get('modules', [])
@@ -1544,4 +1552,5 @@ def find_external_dependency(name, environment, kwargs):
'gl': GLDependency,
'threads': ThreadDependency,
'python3': Python3Dependency,
+ 'valgrind': ValgrindDependency,
}
</patch>
|
[]
|
[]
| |||
mesonbuild__meson-5974
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for clang-tidy (or other linters)
CMake has support for additional linting by clang-tidy: http://www.mariobadr.com/using-clang-tidy-with-cmake-36.html
I was wondering if something like this could be done for Meson, so that header dependencies etc. still work.
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="https://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13 [](https://lgtm.com/projects/g/mesonbuild/meson/context:python)
14 [](https://lgtm.com/projects/g/mesonbuild/meson/alerts)
15
16 #### Dependencies
17
18 - [Python](https://python.org) (version 3.5 or newer)
19 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
20
21 #### Installing from source
22
23 You can run Meson directly from a revision control checkout or an
24 extracted tarball. If you wish you can install it locally with the
25 standard Python distutils command `python3 setup.py install <your
26 options here>`.
27
28 Meson is also available from
29 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
30 with `pip3 install meson` (this does not require a source checkout,
31 pip will download the package automatically). The exact command to
32 type to install with Pip can vary between systems, be sure to use the
33 Python 3 version of Pip.
34
35 #### Running
36
37 Meson requires that you have a source directory and a build directory
38 and that these two are different. In your source root must exist a
39 file called `meson.build`. To generate the build system run this
40 command:
41
42 `meson <source directory> <build directory>`
43
44 Depending on how you obtained Meson the command might also be called
45 `meson.py` instead of plain `meson`. In the rest of this document we
46 are going to use the latter form.
47
48 You can omit either of the two directories, and Meson will substitute
49 the current directory and autodetect what you mean. This allows you to
50 do things like this:
51
52 `cd source_root; mkdir builddir; cd builddir; meson ..`
53
54 or
55
56 `cd source_root; mkdir builddir; meson builddir`
57
58 To compile, cd into your build directory and type `ninja`. To run unit
59 tests, type `ninja test`.
60
61 Install is the same but it can take an extra argument:
62
63 `DESTDIR=/destdir/path ninja install`
64
65 `DESTDIR` can be omitted. If you are installing to system directories,
66 you may need to run this command with sudo.
67
68
69 #### Contributing
70
71 We love code contributions. See the [contribution
72 page](https://mesonbuild.com/Contributing.html) on the web site for
73 details.
74
75
76 #### IRC
77
78 The irc channel for Meson is `#mesonbuild` over at Freenode.
79
80 You can use [FreeNode's official webchat][meson_irc]
81 to connect to this channel.
82
83 [meson_irc]: https://webchat.freenode.net/?channels=%23mesonbuild
84
85 #### Further info
86
87 More information about the Meson build system can be found at the
88 [project's home page](https://mesonbuild.com).
89
90 Meson is a registered trademark of Jussi Pakkanen.
91
[end of README.md]
[start of mesonbuild/compilers/c.py]
1 # Copyright 2012-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os.path
16 import typing
17
18 from .. import coredata
19 from ..mesonlib import MachineChoice, MesonException, mlog, version_compare
20 from .c_function_attributes import C_FUNC_ATTRIBUTES
21 from .mixins.clike import CLikeCompiler
22 from .mixins.ccrx import CcrxCompiler
23 from .mixins.arm import ArmCompiler, ArmclangCompiler
24 from .mixins.visualstudio import VisualStudioLikeCompiler
25 from .mixins.gnu import GnuCompiler
26 from .mixins.intel import IntelGnuLikeCompiler, IntelVisualStudioLikeCompiler
27 from .mixins.clang import ClangCompiler
28 from .mixins.elbrus import ElbrusCompiler
29 from .mixins.pgi import PGICompiler
30 from .mixins.islinker import BasicLinkerIsCompilerMixin, LinkerEnvVarsMixin
31 from .compilers import (
32 gnu_winlibs,
33 msvc_winlibs,
34 Compiler,
35 CompilerType,
36 )
37
38
39 class CCompiler(CLikeCompiler, Compiler):
40
41 @staticmethod
42 def attribute_check_func(name):
43 try:
44 return C_FUNC_ATTRIBUTES[name]
45 except KeyError:
46 raise MesonException('Unknown function attribute "{}"'.format(name))
47
48 def __init__(self, exelist, version, for_machine: MachineChoice, is_cross: bool,
49 exe_wrapper: typing.Optional[str] = None, **kwargs):
50 # If a child ObjC or CPP class has already set it, don't set it ourselves
51 self.language = 'c'
52 Compiler.__init__(self, exelist, version, for_machine, **kwargs)
53 CLikeCompiler.__init__(self, is_cross, exe_wrapper)
54
55 def get_no_stdinc_args(self):
56 return ['-nostdinc']
57
58 def sanity_check(self, work_dir, environment):
59 code = 'int main() { int class=0; return class; }\n'
60 return self.sanity_check_impl(work_dir, environment, 'sanitycheckc.c', code)
61
62 def has_header_symbol(self, hname, symbol, prefix, env, *, extra_args=None, dependencies=None):
63 fargs = {'prefix': prefix, 'header': hname, 'symbol': symbol}
64 t = '''{prefix}
65 #include <{header}>
66 int main () {{
67 /* If it's not defined as a macro, try to use as a symbol */
68 #ifndef {symbol}
69 {symbol};
70 #endif
71 return 0;
72 }}'''
73 return self.compiles(t.format(**fargs), env, extra_args=extra_args,
74 dependencies=dependencies)
75
76
77 class ClangCCompiler(ClangCompiler, CCompiler):
78 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, **kwargs):
79 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrapper, **kwargs)
80 ClangCompiler.__init__(self, compiler_type)
81 default_warn_args = ['-Wall', '-Winvalid-pch']
82 self.warn_args = {'0': [],
83 '1': default_warn_args,
84 '2': default_warn_args + ['-Wextra'],
85 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
86
87 def get_options(self):
88 opts = CCompiler.get_options(self)
89 c_stds = ['c89', 'c99', 'c11']
90 g_stds = ['gnu89', 'gnu99', 'gnu11']
91 # https://releases.llvm.org/6.0.0/tools/clang/docs/ReleaseNotes.html
92 # https://en.wikipedia.org/wiki/Xcode#Latest_versions
93 v = '>=10.0.0' if self.compiler_type is CompilerType.CLANG_OSX else '>=6.0.0'
94 if version_compare(self.version, v):
95 c_stds += ['c17']
96 g_stds += ['gnu17']
97 v = '>=11.0.0' if self.compiler_type is CompilerType.CLANG_OSX else '>=8.0.0'
98 if version_compare(self.version, v):
99 c_stds += ['c18']
100 g_stds += ['gnu18']
101 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
102 ['none'] + c_stds + g_stds,
103 'none')})
104 return opts
105
106 def get_option_compile_args(self, options):
107 args = []
108 std = options['c_std']
109 if std.value != 'none':
110 args.append('-std=' + std.value)
111 return args
112
113 def get_option_link_args(self, options):
114 return []
115
116
117 class EmscriptenCCompiler(LinkerEnvVarsMixin, BasicLinkerIsCompilerMixin, ClangCCompiler):
118 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, **kwargs):
119 if not is_cross:
120 raise MesonException('Emscripten compiler can only be used for cross compilation.')
121 ClangCCompiler.__init__(self, exelist, version, compiler_type, for_machine, is_cross, exe_wrapper, **kwargs)
122 self.id = 'emscripten'
123
124 def get_option_link_args(self, options):
125 return []
126
127 def get_soname_args(self, *args, **kwargs):
128 raise MesonException('Emscripten does not support shared libraries.')
129
130 class ArmclangCCompiler(ArmclangCompiler, CCompiler):
131 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, **kwargs):
132 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrapper, **kwargs)
133 ArmclangCompiler.__init__(self, compiler_type)
134 default_warn_args = ['-Wall', '-Winvalid-pch']
135 self.warn_args = {'0': [],
136 '1': default_warn_args,
137 '2': default_warn_args + ['-Wextra'],
138 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
139
140 def get_options(self):
141 opts = CCompiler.get_options(self)
142 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
143 ['none', 'c90', 'c99', 'c11',
144 'gnu90', 'gnu99', 'gnu11'],
145 'none')})
146 return opts
147
148 def get_option_compile_args(self, options):
149 args = []
150 std = options['c_std']
151 if std.value != 'none':
152 args.append('-std=' + std.value)
153 return args
154
155 def get_option_link_args(self, options):
156 return []
157
158
159 class GnuCCompiler(GnuCompiler, CCompiler):
160 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, defines=None, **kwargs):
161 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrapper, **kwargs)
162 GnuCompiler.__init__(self, compiler_type, defines)
163 default_warn_args = ['-Wall', '-Winvalid-pch']
164 self.warn_args = {'0': [],
165 '1': default_warn_args,
166 '2': default_warn_args + ['-Wextra'],
167 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
168
169 def get_options(self):
170 opts = CCompiler.get_options(self)
171 c_stds = ['c89', 'c99', 'c11']
172 g_stds = ['gnu89', 'gnu99', 'gnu11']
173 v = '>=8.0.0'
174 if version_compare(self.version, v):
175 c_stds += ['c17', 'c18']
176 g_stds += ['gnu17', 'gnu18']
177 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
178 ['none'] + c_stds + g_stds,
179 'none')})
180 if self.compiler_type.is_windows_compiler:
181 opts.update({
182 'c_winlibs': coredata.UserArrayOption('Standard Win libraries to link against',
183 gnu_winlibs), })
184 return opts
185
186 def get_option_compile_args(self, options):
187 args = []
188 std = options['c_std']
189 if std.value != 'none':
190 args.append('-std=' + std.value)
191 return args
192
193 def get_option_link_args(self, options):
194 if self.compiler_type.is_windows_compiler:
195 return options['c_winlibs'].value[:]
196 return []
197
198 def get_pch_use_args(self, pch_dir, header):
199 return ['-fpch-preprocess', '-include', os.path.basename(header)]
200
201
202 class PGICCompiler(PGICompiler, CCompiler):
203 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, **kwargs):
204 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrapper, **kwargs)
205 PGICompiler.__init__(self, compiler_type)
206
207
208 class ElbrusCCompiler(GnuCCompiler, ElbrusCompiler):
209 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, defines=None, **kwargs):
210 GnuCCompiler.__init__(self, exelist, version, compiler_type, for_machine, is_cross, exe_wrapper, defines, **kwargs)
211 ElbrusCompiler.__init__(self, compiler_type, defines)
212
213 # It supports various ISO standards and c/gnu 90, 9x, 1x in addition to those which GNU CC supports.
214 def get_options(self):
215 opts = CCompiler.get_options(self)
216 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
217 ['none', 'c89', 'c90', 'c9x', 'c99', 'c1x', 'c11',
218 'gnu89', 'gnu90', 'gnu9x', 'gnu99', 'gnu1x', 'gnu11',
219 'iso9899:2011', 'iso9899:1990', 'iso9899:199409', 'iso9899:1999'],
220 'none')})
221 return opts
222
223 # The Elbrus C compiler does not have lchmod, but there is only a linker warning, not a compiler error.
224 # So we should explicitly fail in this case.
225 def has_function(self, funcname, prefix, env, *, extra_args=None, dependencies=None):
226 if funcname == 'lchmod':
227 return False, False
228 else:
229 return super().has_function(funcname, prefix, env,
230 extra_args=extra_args,
231 dependencies=dependencies)
232
233
234 class IntelCCompiler(IntelGnuLikeCompiler, CCompiler):
235 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, **kwargs):
236 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrapper, **kwargs)
237 IntelGnuLikeCompiler.__init__(self, compiler_type)
238 self.lang_header = 'c-header'
239 default_warn_args = ['-Wall', '-w3', '-diag-disable:remark']
240 self.warn_args = {'0': [],
241 '1': default_warn_args,
242 '2': default_warn_args + ['-Wextra'],
243 '3': default_warn_args + ['-Wextra']}
244
245 def get_options(self):
246 opts = CCompiler.get_options(self)
247 c_stds = ['c89', 'c99']
248 g_stds = ['gnu89', 'gnu99']
249 if version_compare(self.version, '>=16.0.0'):
250 c_stds += ['c11']
251 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
252 ['none'] + c_stds + g_stds,
253 'none')})
254 return opts
255
256 def get_option_compile_args(self, options):
257 args = []
258 std = options['c_std']
259 if std.value != 'none':
260 args.append('-std=' + std.value)
261 return args
262
263
264 class VisualStudioLikeCCompilerMixin:
265
266 """Shared methods that apply to MSVC-like C compilers."""
267
268 def get_options(self):
269 opts = super().get_options()
270 opts.update({'c_winlibs': coredata.UserArrayOption('Windows libs to link against.',
271 msvc_winlibs)})
272 return opts
273
274 def get_option_link_args(self, options):
275 return options['c_winlibs'].value[:]
276
277 class VisualStudioCCompiler(VisualStudioLikeCompiler, VisualStudioLikeCCompilerMixin, CCompiler):
278
279 def __init__(self, exelist, version, for_machine: MachineChoice, is_cross, exe_wrap, target: str, **kwargs):
280 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrap, **kwargs)
281 VisualStudioLikeCompiler.__init__(self, target)
282 self.id = 'msvc'
283
284 class ClangClCCompiler(VisualStudioLikeCompiler, VisualStudioLikeCCompilerMixin, CCompiler):
285 def __init__(self, exelist, version, for_machine: MachineChoice, is_cross, exe_wrap, target, **kwargs):
286 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrap, **kwargs)
287 VisualStudioLikeCompiler.__init__(self, target)
288 self.id = 'clang-cl'
289
290
291 class IntelClCCompiler(IntelVisualStudioLikeCompiler, VisualStudioLikeCCompilerMixin, CCompiler):
292
293 """Intel "ICL" compiler abstraction."""
294
295 __have_warned = False
296
297 def __init__(self, exelist, version, for_machine: MachineChoice, is_cross, exe_wrap, target, **kwargs):
298 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrap, **kwargs)
299 IntelVisualStudioLikeCompiler.__init__(self, target)
300
301 def get_options(self):
302 opts = super().get_options()
303 c_stds = ['none', 'c89', 'c99', 'c11']
304 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
305 c_stds,
306 'none')})
307 return opts
308
309 def get_option_compile_args(self, options):
310 args = []
311 std = options['c_std']
312 if std.value == 'c89':
313 if not self.__have_warned:
314 self.__have_warned = True
315 mlog.warning("ICL doesn't explicitly implement c89, setting the standard to 'none', which is close.")
316 elif std.value != 'none':
317 args.append('/Qstd:' + std.value)
318 return args
319
320
321 class ArmCCompiler(ArmCompiler, CCompiler):
322 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, **kwargs):
323 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrapper, **kwargs)
324 ArmCompiler.__init__(self, compiler_type)
325
326 def get_options(self):
327 opts = CCompiler.get_options(self)
328 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
329 ['none', 'c90', 'c99'],
330 'none')})
331 return opts
332
333 def get_option_compile_args(self, options):
334 args = []
335 std = options['c_std']
336 if std.value != 'none':
337 args.append('--' + std.value)
338 return args
339
340 class CcrxCCompiler(CcrxCompiler, CCompiler):
341 def __init__(self, exelist, version, compiler_type, for_machine: MachineChoice, is_cross, exe_wrapper=None, **kwargs):
342 CCompiler.__init__(self, exelist, version, for_machine, is_cross, exe_wrapper, **kwargs)
343 CcrxCompiler.__init__(self, compiler_type)
344
345 # Override CCompiler.get_always_args
346 def get_always_args(self):
347 return ['-nologo']
348
349 def get_options(self):
350 opts = CCompiler.get_options(self)
351 opts.update({'c_std': coredata.UserComboOption('C language standard to use',
352 ['none', 'c89', 'c99'],
353 'none')})
354 return opts
355
356 def get_option_compile_args(self, options):
357 args = []
358 std = options['c_std']
359 if std.value == 'c89':
360 args.append('-lang=c')
361 elif std.value == 'c99':
362 args.append('-lang=c99')
363 return args
364
365 def get_compile_only_args(self):
366 return []
367
368 def get_no_optimization_args(self):
369 return ['-optimize=0']
370
371 def get_output_args(self, target):
372 return ['-output=obj=%s' % target]
373
374 def get_werror_args(self):
375 return ['-change_message=error']
376
377 def get_include_args(self, path, is_system):
378 if path == '':
379 path = '.'
380 return ['-include=' + path]
381
[end of mesonbuild/compilers/c.py]
[start of mesonbuild/dependencies/boost.py]
1 # Copyright 2013-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # This file contains the detection logic for miscellaneous external dependencies.
16
17 import glob
18 import os
19
20 from .. import mlog
21 from .. import mesonlib
22 from ..environment import detect_cpu_family
23
24 from .base import (DependencyException, ExternalDependency)
25 from .misc import ThreadDependency
26
27 # On windows 3 directory layouts are supported:
28 # * The default layout (versioned) installed:
29 # - $BOOST_ROOT/include/boost-x_x/boost/*.hpp
30 # - $BOOST_ROOT/lib/*.lib
31 # * The non-default layout (system) installed:
32 # - $BOOST_ROOT/include/boost/*.hpp
33 # - $BOOST_ROOT/lib/*.lib
34 # * The pre-built binaries from sf.net:
35 # - $BOOST_ROOT/boost/*.hpp
36 # - $BOOST_ROOT/lib<arch>-<compiler>/*.lib where arch=32/64 and compiler=msvc-14.1
37 #
38 # Note that we should also try to support:
39 # mingw-w64 / Windows : libboost_<module>-mt.a (location = <prefix>/mingw64/lib/)
40 # libboost_<module>-mt.dll.a
41 #
42 # Library names supported:
43 # - libboost_<module>-<compiler>-mt-gd-x_x.lib (static)
44 # - boost_<module>-<compiler>-mt-gd-x_x.lib|.dll (shared)
45 # - libboost_<module>.lib (static)
46 # - boost_<module>.lib|.dll (shared)
47 # where compiler is vc141 for example.
48 #
49 # NOTE: -gd means runtime and build time debugging is on
50 # -mt means threading=multi
51 #
52 # The `modules` argument accepts library names. This is because every module that
53 # has libraries to link against also has multiple options regarding how to
54 # link. See for example:
55 # * http://www.boost.org/doc/libs/1_65_1/libs/test/doc/html/boost_test/usage_variants.html
56 # * http://www.boost.org/doc/libs/1_65_1/doc/html/stacktrace/configuration_and_build.html
57 # * http://www.boost.org/doc/libs/1_65_1/libs/math/doc/html/math_toolkit/main_tr1.html
58
59 # **On Unix**, official packaged versions of boost libraries follow the following schemes:
60 #
61 # Linux / Debian: libboost_<module>.so -> libboost_<module>.so.1.66.0
62 # Linux / Red Hat: libboost_<module>.so -> libboost_<module>.so.1.66.0
63 # Linux / OpenSuse: libboost_<module>.so -> libboost_<module>.so.1.66.0
64 # Win / Cygwin: libboost_<module>.dll.a (location = /usr/lib)
65 # libboost_<module>.a
66 # cygboost_<module>_1_64.dll (location = /usr/bin)
67 # Win / VS: boost_<module>-vc<ver>-mt[-gd]-<arch>-1_67.dll (location = C:/local/boost_1_67_0)
68 # Mac / homebrew: libboost_<module>.dylib + libboost_<module>-mt.dylib (location = /usr/local/lib)
69 # Mac / macports: libboost_<module>.dylib + libboost_<module>-mt.dylib (location = /opt/local/lib)
70 #
71 # It's not clear that any other abi tags (e.g. -gd) are used in official packages.
72 #
73 # On Linux systems, boost libs have multithreading support enabled, but without the -mt tag.
74 #
75 # Boost documentation recommends using complex abi tags like "-lboost_regex-gcc34-mt-d-1_36".
76 # (See http://www.boost.org/doc/libs/1_66_0/more/getting_started/unix-variants.html#library-naming)
77 # However, it's not clear that any Unix distribution follows this scheme.
78 # Furthermore, the boost documentation for unix above uses examples from windows like
79 # "libboost_regex-vc71-mt-d-x86-1_34.lib", so apparently the abi tags may be more aimed at windows.
80 #
81 # Probably we should use the linker search path to decide which libraries to use. This will
82 # make it possible to find the macports boost libraries without setting BOOST_ROOT, and will
83 # also mean that it would be possible to use user-installed boost libraries when official
84 # packages are installed.
85 #
86 # We thus follow the following strategy:
87 # 1. Look for libraries using compiler.find_library( )
88 # 1.1 On Linux, just look for boost_<module>
89 # 1.2 On other systems (e.g. Mac) look for boost_<module>-mt if multithreading.
90 # 1.3 Otherwise look for boost_<module>
91 # 2. Fall back to previous approach
92 # 2.1. Search particular directories.
93 # 2.2. Find boost libraries with unknown suffixes using file-name globbing.
94
95 # TODO: Unix: Don't assume we know where the boost dir is, rely on -Idir and -Ldir being set.
96 # TODO: Allow user to specify suffix in BOOST_SUFFIX, or add specific options like BOOST_DEBUG for 'd' for debug.
97
98 class BoostDependency(ExternalDependency):
99 def __init__(self, environment, kwargs):
100 super().__init__('boost', environment, 'cpp', kwargs)
101 self.need_static_link = ['boost_exception', 'boost_test_exec_monitor']
102 self.is_debug = environment.coredata.get_builtin_option('buildtype').startswith('debug')
103 threading = kwargs.get("threading", "multi")
104 self.is_multithreading = threading == "multi"
105
106 self.requested_modules = self.get_requested(kwargs)
107 if 'thread' in self.requested_modules:
108 self._add_sub_dependency(ThreadDependency, environment, kwargs)
109
110 self.boost_root = None
111 self.boost_roots = []
112 self.incdir = None
113 self.libdir = None
114
115 if 'BOOST_ROOT' in os.environ:
116 self.boost_root = os.environ['BOOST_ROOT']
117 self.boost_roots = [self.boost_root]
118 if not os.path.isabs(self.boost_root):
119 raise DependencyException('BOOST_ROOT must be an absolute path.')
120 if 'BOOST_INCLUDEDIR' in os.environ:
121 self.incdir = os.environ['BOOST_INCLUDEDIR']
122 if 'BOOST_LIBRARYDIR' in os.environ:
123 self.libdir = os.environ['BOOST_LIBRARYDIR']
124
125 if self.boost_root is None:
126 if self.env.machines[self.for_machine].is_windows():
127 self.boost_roots = self.detect_win_roots()
128 else:
129 self.boost_roots = self.detect_nix_roots()
130
131 if self.incdir is None:
132 if self.env.machines[self.for_machine].is_windows():
133 self.incdir = self.detect_win_incdir()
134 else:
135 self.incdir = self.detect_nix_incdir()
136
137 mlog.debug('Boost library root dir is', mlog.bold(self.boost_root))
138 mlog.debug('Boost include directory is', mlog.bold(self.incdir))
139
140 # 1. check if we can find BOOST headers.
141 self.detect_headers_and_version()
142
143 if not self.is_found:
144 return # if we can not find 'boost/version.hpp'
145
146 # 2. check if we can find BOOST libraries.
147 self.detect_lib_modules()
148 mlog.debug('Boost library directory is', mlog.bold(self.libdir))
149
150 mlog.debug('Installed Boost libraries: ')
151 for key in sorted(self.lib_modules.keys()):
152 mlog.debug(key, self.lib_modules[key])
153
154 # 3. check if requested modules are valid, that is, either found or in the list of known boost libraries
155 self.check_invalid_modules()
156
157 # 4. final check whether or not we find all requested and valid modules
158 self.check_find_requested_modules()
159
160 def check_invalid_modules(self):
161 invalid_modules = [c for c in self.requested_modules if 'boost_' + c not in self.lib_modules and 'boost_' + c not in BOOST_LIBS]
162
163 # previous versions of meson allowed include dirs as modules
164 remove = []
165 for m in invalid_modules:
166 if m in BOOST_DIRS:
167 mlog.warning('Requested boost library', mlog.bold(m), 'that doesn\'t exist. '
168 'This will be an error in the future')
169 remove.append(m)
170
171 self.requested_modules = [x for x in self.requested_modules if x not in remove]
172 invalid_modules = [x for x in invalid_modules if x not in remove]
173
174 if invalid_modules:
175 mlog.error('Invalid Boost modules: ' + ', '.join(invalid_modules))
176 return True
177 else:
178 return False
179
180 def log_details(self):
181 module_str = ', '.join(self.requested_modules)
182 return module_str
183
184 def log_info(self):
185 if self.boost_root:
186 return self.boost_root
187 return ''
188
189 def detect_nix_roots(self):
190 return [os.path.abspath(os.path.join(x, '..'))
191 for x in self.clib_compiler.get_default_include_dirs()]
192
193 def detect_win_roots(self):
194 res = []
195 # Where boost documentation says it should be
196 globtext = 'C:\\Program Files\\boost\\boost_*'
197 files = glob.glob(globtext)
198 res.extend(files)
199
200 # Where boost built from source actually installs it
201 if os.path.isdir('C:\\Boost'):
202 res.append('C:\\Boost')
203
204 # Where boost prebuilt binaries are
205 globtext = 'C:\\local\\boost_*'
206 files = glob.glob(globtext)
207 res.extend(files)
208 return res
209
210 def detect_nix_incdir(self):
211 if self.boost_root:
212 return os.path.join(self.boost_root, 'include')
213 return None
214
215 # FIXME: Should pick a version that matches the requested version
216 # Returns the folder that contains the boost folder.
217 def detect_win_incdir(self):
218 for root in self.boost_roots:
219 globtext = os.path.join(root, 'include', 'boost-*')
220 incdirs = glob.glob(globtext)
221 if incdirs:
222 return incdirs[0]
223 incboostdir = os.path.join(root, 'include', 'boost')
224 if os.path.isdir(incboostdir):
225 return os.path.join(root, 'include')
226 incboostdir = os.path.join(root, 'boost')
227 if os.path.isdir(incboostdir):
228 return root
229 return None
230
231 def get_compile_args(self):
232 args = []
233 include_dir = self.incdir
234
235 # Use "-isystem" when including boost headers instead of "-I"
236 # to avoid compiler warnings/failures when "-Werror" is used
237
238 # Careful not to use "-isystem" on default include dirs as it
239 # breaks some of the headers for certain gcc versions
240
241 # For example, doing g++ -isystem /usr/include on a simple
242 # "int main()" source results in the error:
243 # "/usr/include/c++/6.3.1/cstdlib:75:25: fatal error: stdlib.h: No such file or directory"
244
245 # See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129
246 # and http://stackoverflow.com/questions/37218953/isystem-on-a-system-include-directory-causes-errors
247 # for more details
248
249 if include_dir and include_dir not in self.clib_compiler.get_default_include_dirs():
250 args.append("".join(self.clib_compiler.get_include_args(include_dir, True)))
251 return args
252
253 def get_requested(self, kwargs):
254 candidates = mesonlib.extract_as_list(kwargs, 'modules')
255 for c in candidates:
256 if not isinstance(c, str):
257 raise DependencyException('Boost module argument is not a string.')
258 return candidates
259
260 def detect_headers_and_version(self):
261 try:
262 version = self.clib_compiler.get_define('BOOST_LIB_VERSION', '#include <boost/version.hpp>', self.env, self.get_compile_args(), [], disable_cache=True)[0]
263 except mesonlib.EnvironmentException:
264 return
265 except TypeError:
266 return
267 # Remove quotes
268 version = version[1:-1]
269 # Fix version string
270 self.version = version.replace('_', '.')
271 self.is_found = True
272
273 def detect_lib_modules(self):
274 self.lib_modules = {}
275 # 1. Try to find modules using compiler.find_library( )
276 if self.find_libraries_with_abi_tags(self.abi_tags()):
277 pass
278 # 2. Fall back to the old method
279 else:
280 if self.env.machines[self.for_machine].is_windows():
281 self.detect_lib_modules_win()
282 else:
283 self.detect_lib_modules_nix()
284
285 def check_find_requested_modules(self):
286 # 3. Check if we can find the modules
287 for m in self.requested_modules:
288 if 'boost_' + m not in self.lib_modules:
289 mlog.debug('Requested Boost library {!r} not found'.format(m))
290 self.is_found = False
291
292 def modname_from_filename(self, filename):
293 modname = os.path.basename(filename)
294 modname = modname.split('.', 1)[0]
295 modname = modname.split('-', 1)[0]
296 if modname.startswith('libboost'):
297 modname = modname[3:]
298 return modname
299
300 def compiler_tag(self):
301 tag = None
302 compiler = self.env.detect_cpp_compiler(self.for_machine)
303 if self.env.machines[self.for_machine].is_windows():
304 if compiler.get_id() in ['msvc', 'clang-cl']:
305 comp_ts_version = compiler.get_toolset_version()
306 compiler_ts = comp_ts_version.split('.')
307 # FIXME - what about other compilers?
308 tag = '-vc{}{}'.format(compiler_ts[0], compiler_ts[1])
309 else:
310 tag = ''
311 return tag
312
313 def threading_tag(self):
314 if not self.is_multithreading:
315 return ''
316
317 if self.env.machines[self.for_machine].is_darwin():
318 # - Mac: requires -mt for multithreading, so should not fall back to non-mt libraries.
319 return '-mt'
320 elif self.env.machines[self.for_machine].is_windows():
321 # - Windows: requires -mt for multithreading, so should not fall back to non-mt libraries.
322 return '-mt'
323 else:
324 # - Linux: leaves off -mt but libraries are multithreading-aware.
325 # - Cygwin: leaves off -mt but libraries are multithreading-aware.
326 return ''
327
328 def version_tag(self):
329 return '-' + self.version.replace('.', '_')
330
331 def debug_tag(self):
332 return '-gd' if self.is_debug else ''
333
334 def arch_tag(self):
335 # currently only applies to windows msvc installed binaries
336 if self.env.detect_cpp_compiler(self.for_machine).get_id() not in ['msvc', 'clang-cl']:
337 return ''
338 # pre-compiled binaries only added arch tag for versions > 1.64
339 if float(self.version) < 1.65:
340 return ''
341 arch = detect_cpu_family(self.env.coredata.compilers.host)
342 if arch == 'x86':
343 return '-x32'
344 elif arch == 'x86_64':
345 return '-x64'
346 return ''
347
348 def versioned_abi_tag(self):
349 return self.compiler_tag() + self.threading_tag() + self.debug_tag() + self.arch_tag() + self.version_tag()
350
351 # FIXME - how to handle different distributions, e.g. for Mac? Currently we handle homebrew and macports, but not fink.
352 def abi_tags(self):
353 if self.env.machines[self.for_machine].is_windows():
354 return [self.versioned_abi_tag(), self.threading_tag()]
355 else:
356 return [self.threading_tag()]
357
358 def sourceforge_dir(self):
359 if self.env.detect_cpp_compiler(self.for_machine).get_id() != 'msvc':
360 return None
361 comp_ts_version = self.env.detect_cpp_compiler(self.for_machine).get_toolset_version()
362 arch = detect_cpu_family(self.env.coredata.compilers.host)
363 if arch == 'x86':
364 return 'lib32-msvc-{}'.format(comp_ts_version)
365 elif arch == 'x86_64':
366 return 'lib64-msvc-{}'.format(comp_ts_version)
367 else:
368 # Does anyone do Boost cross-compiling to other archs on Windows?
369 return None
370
371 def find_libraries_with_abi_tag(self, tag):
372
373 # All modules should have the same tag
374 self.lib_modules = {}
375
376 all_found = True
377
378 for module in self.requested_modules:
379 libname = 'boost_' + module + tag
380
381 args = self.clib_compiler.find_library(libname, self.env, self.extra_lib_dirs())
382 if args is None:
383 mlog.debug("Couldn\'t find library '{}' for boost module '{}' (ABI tag = '{}')".format(libname, module, tag))
384 all_found = False
385 else:
386 mlog.debug('Link args for boost module "{}" are {}'.format(module, args))
387 self.lib_modules['boost_' + module] = args
388
389 return all_found
390
391 def find_libraries_with_abi_tags(self, tags):
392 for tag in tags:
393 if self.find_libraries_with_abi_tag(tag):
394 return True
395 return False
396
397 def detect_lib_modules_win(self):
398 if not self.libdir:
399 # The libdirs in the distributed binaries (from sf)
400 lib_sf = self.sourceforge_dir()
401
402 if self.boost_root:
403 roots = [self.boost_root]
404 else:
405 roots = self.boost_roots
406 for root in roots:
407 # The default libdir when building
408 libdir = os.path.join(root, 'lib')
409 if os.path.isdir(libdir):
410 self.libdir = libdir
411 break
412 if lib_sf:
413 full_path = os.path.join(root, lib_sf)
414 if os.path.isdir(full_path):
415 self.libdir = full_path
416 break
417
418 if not self.libdir:
419 return
420
421 for name in self.need_static_link:
422 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a
423 libname = 'lib' + name + self.versioned_abi_tag() + '.lib'
424 if os.path.isfile(os.path.join(self.libdir, libname)):
425 self.lib_modules[self.modname_from_filename(libname)] = [libname]
426 else:
427 libname = "lib{}.lib".format(name)
428 if os.path.isfile(os.path.join(self.libdir, libname)):
429 self.lib_modules[name[3:]] = [libname]
430
431 # globber1 applies to a layout=system installation
432 # globber2 applies to a layout=versioned installation
433 globber1 = 'libboost_*' if self.static else 'boost_*'
434 globber2 = globber1 + self.versioned_abi_tag()
435 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a
436 globber2_matches = glob.glob(os.path.join(self.libdir, globber2 + '.lib'))
437 for entry in globber2_matches:
438 fname = os.path.basename(entry)
439 self.lib_modules[self.modname_from_filename(fname)] = [fname]
440 if not globber2_matches:
441 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a
442 for entry in glob.glob(os.path.join(self.libdir, globber1 + '.lib')):
443 if self.static:
444 fname = os.path.basename(entry)
445 self.lib_modules[self.modname_from_filename(fname)] = [fname]
446
447 def detect_lib_modules_nix(self):
448 if self.static:
449 libsuffix = 'a'
450 elif self.env.machines[self.for_machine].is_darwin():
451 libsuffix = 'dylib'
452 else:
453 libsuffix = 'so'
454
455 globber = 'libboost_*.{}'.format(libsuffix)
456 if self.libdir:
457 libdirs = [self.libdir]
458 elif self.boost_root is None:
459 libdirs = mesonlib.get_library_dirs()
460 else:
461 libdirs = [os.path.join(self.boost_root, 'lib')]
462 for libdir in libdirs:
463 for name in self.need_static_link:
464 libname = 'lib{}.a'.format(name)
465 if os.path.isfile(os.path.join(libdir, libname)):
466 self.lib_modules[name] = [libname]
467 for entry in glob.glob(os.path.join(libdir, globber)):
468 # I'm not 100% sure what to do here. Some distros
469 # have modules such as thread only as -mt versions.
470 # On debian all packages are built threading=multi
471 # but not suffixed with -mt.
472 # FIXME: implement detect_lib_modules_{debian, redhat, ...}
473 # FIXME: this wouldn't work with -mt-gd either. -BDR
474 if self.is_multithreading and mesonlib.is_debianlike():
475 pass
476 elif self.is_multithreading and entry.endswith('-mt.{}'.format(libsuffix)):
477 pass
478 elif not entry.endswith('-mt.{}'.format(libsuffix)):
479 pass
480 else:
481 continue
482 modname = self.modname_from_filename(entry)
483 if modname not in self.lib_modules:
484 self.lib_modules[modname] = [entry]
485
486 def extra_lib_dirs(self):
487 if self.libdir:
488 return [self.libdir]
489 elif self.boost_root:
490 return [os.path.join(self.boost_root, 'lib')]
491 return []
492
493 def get_link_args(self, **kwargs):
494 args = []
495 for d in self.extra_lib_dirs():
496 args += self.clib_compiler.get_linker_search_args(d)
497 for lib in self.requested_modules:
498 args += self.lib_modules['boost_' + lib]
499 return args
500
501 def get_sources(self):
502 return []
503
504 # Generated with boost_names.py
505 BOOST_LIBS = [
506 'boost_atomic',
507 'boost_chrono',
508 'boost_chrono',
509 'boost_container',
510 'boost_context',
511 'boost_coroutine',
512 'boost_date_time',
513 'boost_exception',
514 'boost_fiber',
515 'boost_filesystem',
516 'boost_graph',
517 'boost_iostreams',
518 'boost_locale',
519 'boost_log',
520 'boost_log_setup',
521 'boost_math_tr1',
522 'boost_math_tr1f',
523 'boost_math_tr1l',
524 'boost_math_c99',
525 'boost_math_c99f',
526 'boost_math_c99l',
527 'boost_math_tr1',
528 'boost_math_tr1f',
529 'boost_math_tr1l',
530 'boost_math_c99',
531 'boost_math_c99f',
532 'boost_math_c99l',
533 'boost_math_tr1',
534 'boost_math_tr1f',
535 'boost_math_tr1l',
536 'boost_math_c99',
537 'boost_math_c99f',
538 'boost_math_c99l',
539 'boost_math_tr1',
540 'boost_math_tr1f',
541 'boost_math_tr1l',
542 'boost_math_c99',
543 'boost_math_c99f',
544 'boost_math_c99l',
545 'boost_math_tr1',
546 'boost_math_tr1f',
547 'boost_math_tr1l',
548 'boost_math_c99',
549 'boost_math_c99f',
550 'boost_math_c99l',
551 'boost_math_tr1',
552 'boost_math_tr1f',
553 'boost_math_tr1l',
554 'boost_math_c99',
555 'boost_math_c99f',
556 'boost_math_c99l',
557 'boost_mpi',
558 'boost_program_options',
559 'boost_random',
560 'boost_regex',
561 'boost_serialization',
562 'boost_wserialization',
563 'boost_signals',
564 'boost_stacktrace_noop',
565 'boost_stacktrace_backtrace',
566 'boost_stacktrace_addr2line',
567 'boost_stacktrace_basic',
568 'boost_stacktrace_windbg',
569 'boost_stacktrace_windbg_cached',
570 'boost_system',
571 'boost_prg_exec_monitor',
572 'boost_test_exec_monitor',
573 'boost_unit_test_framework',
574 'boost_thread',
575 'boost_timer',
576 'boost_type_erasure',
577 'boost_wave'
578 ]
579
580 BOOST_DIRS = [
581 'lambda',
582 'optional',
583 'convert',
584 'system',
585 'uuid',
586 'archive',
587 'align',
588 'timer',
589 'chrono',
590 'gil',
591 'logic',
592 'signals',
593 'predef',
594 'tr1',
595 'multi_index',
596 'property_map',
597 'multi_array',
598 'context',
599 'random',
600 'endian',
601 'circular_buffer',
602 'proto',
603 'assign',
604 'format',
605 'math',
606 'phoenix',
607 'graph',
608 'locale',
609 'mpl',
610 'pool',
611 'unordered',
612 'core',
613 'exception',
614 'ptr_container',
615 'flyweight',
616 'range',
617 'typeof',
618 'thread',
619 'move',
620 'spirit',
621 'dll',
622 'compute',
623 'serialization',
624 'ratio',
625 'msm',
626 'config',
627 'metaparse',
628 'coroutine2',
629 'qvm',
630 'program_options',
631 'concept',
632 'detail',
633 'hana',
634 'concept_check',
635 'compatibility',
636 'variant',
637 'type_erasure',
638 'mpi',
639 'test',
640 'fusion',
641 'log',
642 'sort',
643 'local_function',
644 'units',
645 'functional',
646 'preprocessor',
647 'integer',
648 'container',
649 'polygon',
650 'interprocess',
651 'numeric',
652 'iterator',
653 'wave',
654 'lexical_cast',
655 'multiprecision',
656 'utility',
657 'tti',
658 'asio',
659 'dynamic_bitset',
660 'algorithm',
661 'xpressive',
662 'bimap',
663 'signals2',
664 'type_traits',
665 'regex',
666 'statechart',
667 'parameter',
668 'icl',
669 'python',
670 'lockfree',
671 'intrusive',
672 'io',
673 'pending',
674 'geometry',
675 'tuple',
676 'iostreams',
677 'heap',
678 'atomic',
679 'filesystem',
680 'smart_ptr',
681 'function',
682 'fiber',
683 'type_index',
684 'accumulators',
685 'function_types',
686 'coroutine',
687 'vmd',
688 'date_time',
689 'property_tree',
690 'bind'
691 ]
692
[end of mesonbuild/dependencies/boost.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
1473fbc3f6c412ef83a9a96c8b6df5f60571fc3c
|
Support for clang-tidy (or other linters)
CMake has support for additional linting by clang-tidy: http://www.mariobadr.com/using-clang-tidy-with-cmake-36.html
I was wondering if something like this could be done for Meson, so that header dependencies etc. still work.
|
Meson already produces the `compile_commands.json` that is required for `clang-tidy` but sadly there are too many command line options in there.
I don't know why exactly, but `clang-tidy` chokes on `-pipe`, various `-W` diagnostic flags and more.
Actually if you remove all `-pipe` from the `compile_commands.json` then you can run `clang-tidy` as long as you pass `-clang-diagnostic-unused-command-line-argument` to the list of checks.
@FSMaxB how did you make it work? Something like -
```
run_target('clang-tidy', command : ['clang-tidy', '-checks="*" -p=' +
join_paths(meson.build_root())])
```
doesn't work for me 😕
@agauniyal https://github.com/1984not-GmbH/molch/blob/f06980fa7e65bcc6ac06b72caec39e8771ed2ffc/ci/clang-tidy.sh I'm not proud of it, but it works for me.
I'd second that request and add iwyu as well.
It would be really good to have easy support for automatic fixes being applied as well. Setting up the python script is a real pain in my experience.
🙏🏼 thanks
Using `clang-tidy-6.0` the following works for me:
```meson
src_files = files('asourcefile.cpp')
clangtidy = find_program('clang-tidy', required: false)
if clangtidy.found()
run_target(
'tidy',
command: [
clangtidy,
'-checks=*',
'-p', meson.build_root()
] + src_files)
endif
```
@NickeZ thanks! This kinda works for me too.
I'm just curious if clang throws these errors for you too; `error: unknown argument: '-pipe' [clang-diagnostic-error]`
<strike>I don't think I specify this option anywhere, but maybe meson does?</strike> This seems to still have the original problem from above.
@kaimast Actually I don't get that issue. I'm using llvm 6.0, which version are you using? (But even with 3.9 it works for me)
If you have to manually provide the src_files this somewhat defeats the benefit of a build system.
It's kind of necessary though. For example, I use google test and the macros generate code that is not very modern, so I need to ignore those tests when linting.
Usually you already have a variable that holds all your sourcefiles anyways, so supporting this in meson is still pretty useful.
@NickeZ strange. I'm using 6.0 too. Are you using "ninja tidy" or some other command to run the linter?
So your tests presumably are in a dedicated folder or follow a certain naming convention, based on which they would be excluded; or the other way around, where source directories would be included. (But not on a file-by-file basis, otherwise it's pretty much just a bash script.)
Any progress on this? Or any way I could help? I still have the same problem on 0.48.2
The problem with this is not the actual target but the fact that you almost always want to specify a specific command line argument (specifically what tests you want to run and which header directories to use). This is in contrast to scan-build, which consists of only one and the same command that you always run. Enabling all checks just drowns you in warnings on almost all code bases. Doing this right would take possibly two new options just for this. Unfortunately we already have too many options and cramming even more in makes the whole thing unwieldy.
If you just want to replicate what CMake (as linked above) does, then NickeZ's snippet is pretty much the same thing.
clang-tidy reads the file .clang-tidy for options, so there doesn't have to be a way to pass them.
You always need to pass the files you want to process, though. Sadly clang-tidy does not seem to have a flag for "process all files in `compile_commands.json`". Even if it did, passing all files becomes very slow even with medium sized projects. Then you'd want to use `run-clang-tidy` instead to get the parallelism.
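For reference, a minimal `.clang-tidy` dropped into the source root could look something like the following; the checks and the header filter shown here are purely illustrative, not a recommendation:
```
# Read automatically by clang-tidy; no command-line flags are needed for these settings.
Checks: '-*,bugprone-*,modernize-*,readability-*'
WarningsAsErrors: ''
# Also report findings from headers (tune this regex to the project's own include paths).
HeaderFilterRegex: '.*'
```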
Well, for a Meson project I would expect that *all* C/C++ files are passed that are also being compiled.
@jhasse: More likely, all C/C++ files that are not part of a subproject.
@NickeZ's solution works well, except for the fact that meson is inserting the `-pipe` flag, which does in fact cause an error. Is there a way to disable that flag?
```
error: unknown argument: '-pipe' [clang-diagnostic-error]
```
Yeah the -pipe is indeed the major issue for me too.
I think we need a proper Meson module that invokes the linter and cleans up the compile commands beforehand.
That is essentially what I am doing now except I use a bash script. See below what I do
Meson part
```python
clangtidy = find_program('clang-tidy', required: false)
if clangtidy.found()
run_target(
'tidy',
command: [
'scripts/clang-tidy.sh',
clangtidy.path(),
meson.source_root(),
meson.build_root()
] + <CPP FILES>,
depends: <PROJECT NAME>)
endif
```
Bash script
```bash
#! /bin/bash
# Pick any flags you like here
CHECKS='-hicpp-*,-readability-implicit-bool-conversion,-cppcoreguidelines-*,-clang-diagnostic*,-llvm-*,-bugprone-*,-modernize-*,-misc-*'
BIN=$1 && shift
PROJECT_ROOT=$1 && shift
MESON_ROOT=$1 && shift
# Execute in a different directory to ensure we don't mess with the meson config
TIDY_DIR=${PROJECT_ROOT}/build-tidy
mkdir -p ${TIDY_DIR}
cp ${MESON_ROOT}/compile_commands.json ${TIDY_DIR}
# Replace meson commands clang does not understand
sed -i 's/-pipe//g' ${TIDY_DIR}/compile_commands.json
echo "Running clang checks: ${CHECKS}"
$BIN -checks=${CHECKS} -warnings-as-errors=* -p ${TIDY_DIR} $@
```
Ideally the meson code should look something like this imho
```python
clang_tidy = import('clang_tidy')
if clang_tidy.found():
clang_tidy.run_on(<PROJECT NAME>)
endif
```
Adding some observations. https://bugs.llvm.org/show_bug.cgi?id=37315 seems to be related to the `-pipe` issues since actually removing `-Xclang` works as well. Furthermore setting the option `b_colorout=never` which still sets `-Xclang` works as well. Is `-Xclang` really necessary here?
@kaimast - thank you for the inspiration. I wound up doing things a bit differently. I wanted everything in the build directory, I needed to be able to pass other flags to clang-tidy, and I wanted something fairly generic. So, the bash script I came up with is:
```bash
#!/usr/bin/bash
BIN=$1 && shift
BUILD_ROOT=$1 && shift
# Execute in a different directory to ensure we don't mess with the meson config
TIDY_DIR="${BUILD_ROOT}/:clang-tidy"
mkdir -p ${TIDY_DIR}
cp ${BUILD_ROOT}/compile_commands.json ${TIDY_DIR}
# Replace meson commands clang does not understand
sed -i 's/-pipe//g' ${TIDY_DIR}/compile_commands.json
$BIN -p ${TIDY_DIR} $*
```
There is also https://bugs.llvm.org/show_bug.cgi?id=37281 to consider.
Also, none of the `run_target()` examples above parallelizes. Would be nice if Meson took care of that.
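One way to get that parallelism today is to drive `run-clang-tidy` (the helper script shipped with clang-tools), which runs clang-tidy over every entry in `compile_commands.json` with a pool of workers. A rough, untested sketch of wiring it up as a run target; the target name and the fallback program names are only illustrative, and the `-pipe`/`-Xclang` problems discussed above still apply:
```meson
run_clang_tidy = find_program('run-clang-tidy', 'run-clang-tidy.py', required: false)
if run_clang_tidy.found()
  run_target(
    'tidy-all',
    command: [run_clang_tidy, '-quiet', '-p', meson.build_root()])
endif
```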
FYI... forgot that I filed a clang-tidy bug for the -pipe and clang-diagnostic error but found it now again when searching for the -pipe problem in bugzilla. Can be found here:
https://bugs.llvm.org/show_bug.cgi?id=37315
Seems there is exactly zero reaction from Clang developers on this bug. Maybe it makes sense to write a message to their mailing list instead? I think they ignore most of the bugzilla bugs (judging from my own experience).
> The problem with this is not the actual target but the fact that you almost always want to specify a specific command line argument (specifically what tests you want to run and which header directories to use). This is in contrast to scan-build, which consists of only one and the same command that you always run. Enabling all checks just drowns you in warnings on almost all code bases. Doing this right would take possibly two new options just for this. Unfortunately we already have too many options and cramming even more in makes the whole thing unwieldy
@jpakkane What two options are you thinking about?
I think everything should be possible to configure in `.clang-tidy`. What checks to enable definitely is. The header filter is a bit problematic due to https://bugs.llvm.org/show_bug.cgi?id=37281, but we could say "set the header filter in .clang-tidy like this for it to work with Meson" and/or extract and modify it if it looks like it has paths relative to the source root (e.g. `^src/` -> `^../src/`).
If a `clang-tidy` target should be available or not could be handled just like you do in 1fca654055d3502d2db9c5aad66a522beaa1df19 . Only add it if there is a `.clang-tidy` in the root.
The only thing I can come to think of that I would want configurable in Meson itself is some way to say "exclude these sources". Not running clang-tidy on source in the build directory and subprojects by default would make it so that is not even needed in most cases. A regex option that defaults to match source in the subprojects and build folder? (In CMake they work around this by placing a mostly empty dummy .clang-tidy in folder of source that should not be checked. See e.g. https://gitlab.kitware.com/cmake/cmake/commit/b13bc8659f87567b1b091806d42f5023b2a6b48b . Ugh!)
> If a clang-tidy target should be available or not could be handled just like you do in 1fca654 . Only add it if there is a .clang-tidy in the root.
I did not know clang-tidy had this functionality (it's not particularly prominent in the docs). Adding a clang-tidy target that behaves just like the clang-format target (i.e. run it on all sources using the given `.clang-tidy` file) is something we can do fairly easily.
Just a +1 on having a clang-tidy target as well - would be really neat if it worked like clang-format and you can just throw it at your whole project.
With the script/run_target approach above you kind of want a global source-file list of your project, which would be against meson principles I guess. :)
|
2019-09-29T20:18:02Z
|
<patch>
diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py
--- a/mesonbuild/backend/ninjabackend.py
+++ b/mesonbuild/backend/ninjabackend.py
@@ -2656,27 +2656,37 @@ def generate_scanbuild(self):
# Alias that runs the target defined above
self.create_target_alias('meson-scan-build')
- def generate_clangformat(self):
- target_name = 'clang-format'
- if not environment.detect_clangformat():
- return
- if not os.path.exists(os.path.join(self.environment.source_dir, '.clang-format')) and \
- not os.path.exists(os.path.join(self.environment.source_dir, '_clang-format')):
+ def generate_clangtool(self, name):
+ target_name = 'clang-' + name
+ if not os.path.exists(os.path.join(self.environment.source_dir, '.clang-' + name)) and \
+ not os.path.exists(os.path.join(self.environment.source_dir, '_clang-' + name)):
return
- if 'target_name' in self.all_outputs:
+ if target_name in self.all_outputs:
return
cmd = self.environment.get_build_command() + \
- ['--internal', 'clangformat', self.environment.source_dir, self.environment.build_dir]
+ ['--internal', 'clang' + name, self.environment.source_dir, self.environment.build_dir]
elem = NinjaBuildElement(self.all_outputs, 'meson-' + target_name, 'CUSTOM_COMMAND', 'PHONY')
elem.add_item('COMMAND', cmd)
elem.add_item('pool', 'console')
self.add_build(elem)
self.create_target_alias('meson-' + target_name)
+ def generate_clangformat(self):
+ if not environment.detect_clangformat():
+ return
+ self.generate_clangtool('format')
+
+ def generate_clangtidy(self):
+ import shutil
+ if not shutil.which('clang-tidy'):
+ return
+ self.generate_clangtool('tidy')
+
# For things like scan-build and other helper tools we might have.
def generate_utils(self):
self.generate_scanbuild()
self.generate_clangformat()
+ self.generate_clangtidy()
cmd = self.environment.get_build_command() + ['--internal', 'uninstall']
elem = NinjaBuildElement(self.all_outputs, 'meson-uninstall', 'CUSTOM_COMMAND', 'PHONY')
elem.add_item('COMMAND', cmd)
diff --git a/mesonbuild/scripts/clangtidy.py b/mesonbuild/scripts/clangtidy.py
new file mode 100644
--- /dev/null
+++ b/mesonbuild/scripts/clangtidy.py
@@ -0,0 +1,38 @@
+# Copyright 2019 The Meson development team
+
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+
+# http://www.apache.org/licenses/LICENSE-2.0
+
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pathlib
+import subprocess
+from concurrent.futures import ThreadPoolExecutor
+
+from ..compilers import lang_suffixes
+
+def clangformat(srcdir_name, builddir_name):
+ srcdir = pathlib.Path(srcdir_name)
+ suffixes = set(lang_suffixes['c']).union(set(lang_suffixes['cpp']))
+ suffixes.add('h')
+ futures = []
+ with ThreadPoolExecutor() as e:
+ for f in (x for suff in suffixes for x in srcdir.glob('**/*.' + suff)):
+ strf = str(f)
+ if strf.startswith(builddir_name):
+ continue
+ futures.append(e.submit(subprocess.check_call, ['clang-tidy', '-p', builddir_name, strf]))
+ [x.result() for x in futures]
+ return 0
+
+def run(args):
+ srcdir_name = args[0]
+ builddir_name = args[1]
+ return clangformat(srcdir_name, builddir_name)
diff --git a/run_unittests.py b/run_unittests.py
--- a/run_unittests.py
+++ b/run_unittests.py
@@ -3630,6 +3630,19 @@ def test_clang_format(self):
if os.path.exists(testheader):
os.unlink(testheader)
+ @skipIfNoExecutable('clang-tidy')
+ def test_clang_tidy(self):
+ if self.backend is not Backend.ninja:
+ raise unittest.SkipTest('Clang-tidy is for now only supported on Ninja, not {}'.format(self.backend.name))
+ if shutil.which('c++') is None:
+ raise unittest.SkipTest('Clang-tidy breaks when ccache is used and "c++" not in path.')
+ if is_osx():
+ raise unittest.SkipTest('Apple ships a broken clang-tidy that chokes on -pipe.')
+ testdir = os.path.join(self.unit_test_dir, '70 clang-tidy')
+ self.init(testdir, override_envvars={'CXX': 'c++'})
+ out = self.run_target('clang-tidy')
+ self.assertIn('cttest.cpp:4:20', out)
+
def test_introspect_buildoptions_without_configured_build(self):
testdir = os.path.join(self.unit_test_dir, '59 introspect buildoptions')
testfile = os.path.join(testdir, 'meson.build')
</patch>
|
[]
|
[]
| |||
huggingface__transformers-18907
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug in transformers notebook (training from scratch)?
Hello there!
First of all, I cannot thank @Rocketknight1 enough for the amazing work he has been doing to create `tensorflow` versions of the notebooks. On my side, I have spent some time and money (colab pro) trying to tie the notebooks together to create a full classifier from scratch with the following steps:
1. train the tokenizer
2. train the language model
3. train the classification head.
Unfortunately, I run into two issues. You can use the fully working notebook pasted below.
First issue: by training my own tokenizer I actually get a `perplexity` (225) that is way worse than the example shown https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/language_modeling-tf.ipynb when using
```
model_checkpoint = "bert-base-uncased"
datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
```
This is puzzling as the tokenizer should be fine-tuned to the data used in the original tf2 notebook!
Second, there seems to be some **Python issue** when I try to fine-tune the language model I obtained above with a text classification head.
Granted, the `tokenizer` and the underlying `language model` have been trained on another dataset (the wikipedia dataset from the previous two tf2 notebooks, that is). See https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/text_classification-tf.ipynb . However, I should at least get some valid output! Here the model is complaining about some collate function.
Could you please have a look @sgugger @LysandreJik @Rocketknight1 when you can? I would be very happy to contribute this notebook to the Hugging Face community (although most of the credits go to @Rocketknight1). There is a great demand for building language models and NLP tasks from scratch.
Thanks!!!!
Code below
---
get the most recent versions
```
!pip install git+https://github.com/huggingface/datasets.git
!pip install transformers
```
train tokenizer from scratch
```
from datasets import load_dataset
dataset = load_dataset("wikitext", name="wikitext-2-raw-v1", split="train")
batch_size = 1000
def batch_iterator():
for i in range(0, len(dataset), batch_size):
yield dataset[i : i + batch_size]["text"]
all_texts = [dataset[i : i + batch_size]["text"] for i in range(0, len(dataset), batch_size)]
from tokenizers import decoders, models, normalizers, pre_tokenizers, processors, trainers, Tokenizer
tokenizer = Tokenizer(models.WordPiece(unl_token="[UNK]"))
tokenizer.normalizer = normalizers.BertNormalizer(lowercase=True)
tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()
special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
trainer = trainers.WordPieceTrainer(vocab_size=25000, special_tokens=special_tokens)
tokenizer.train_from_iterator(batch_iterator(), trainer=trainer)
cls_token_id = tokenizer.token_to_id("[CLS]")
sep_token_id = tokenizer.token_to_id("[SEP]")
print(cls_token_id, sep_token_id)
tokenizer.post_processor = processors.TemplateProcessing(
single=f"[CLS]:0 $A:0 [SEP]:0",
pair=f"[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1",
special_tokens=[
("[CLS]", cls_token_id),
("[SEP]", sep_token_id),
],
)
tokenizer.decoder = decoders.WordPiece(prefix="##")
from transformers import BertTokenizerFast
mytokenizer = BertTokenizerFast(tokenizer_object=tokenizer)
```
masked language model from scratch using my own tokenizer `mytokenizer`
```
model_checkpoint = "bert-base-uncased"
datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
def tokenize_function(examples):
return mytokenizer(examples["text"], truncation=True)
tokenized_datasets = datasets.map(
tokenize_function, batched=True, num_proc=4, remove_columns=["text"]
)
block_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
batch_size=1000,
num_proc=4,
)
from transformers import TFAutoModelForMaskedLM
model = TFAutoModelForMaskedLM.from_pretrained(model_checkpoint)
from transformers import create_optimizer, AdamWeightDecay
import tensorflow as tf
optimizer = AdamWeightDecay(lr=2e-5, weight_decay_rate=0.01)
def dummy_loss(y_true, y_pred):
return tf.reduce_mean(y_pred)
model.compile(optimizer=optimizer, loss={"loss": dummy_loss})
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=mytokenizer, mlm_probability=0.15, return_tensors="tf"
)
train_set = lm_datasets["train"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
validation_set = lm_datasets["validation"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=False,
batch_size=16,
collate_fn=data_collator,
)
model.fit(train_set, validation_data=validation_set, epochs=1)
import math
eval_results = model.evaluate(validation_set)[0]
print(f"Perplexity: {math.exp(eval_results):.2f}")
```
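A small sketch of what I imagine the hand-off to the fine-tuning section could look like (the directory name is just a placeholder I chose): save the trained weights and tokenizer so the classification part can reload them.
```python
# Hypothetical hand-off step; "./my-bert-from-scratch" is a placeholder path.
model.save_pretrained("./my-bert-from-scratch")
mytokenizer.save_pretrained("./my-bert-from-scratch")
# later, in the classification section, something like:
# model = TFAutoModelForSequenceClassification.from_pretrained("./my-bert-from-scratch", num_labels=num_labels)
```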
and fine-tune on a classification task
```
GLUE_TASKS = [
"cola",
"mnli",
"mnli-mm",
"mrpc",
"qnli",
"qqp",
"rte",
"sst2",
"stsb",
"wnli",
]
task = "sst2"
batch_size = 16
from datasets import load_dataset, load_metric
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset("glue", actual_task)
metric = load_metric("glue", actual_task)
```
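To confirm what was loaded, a quick peek at the splits and columns (a minimal sketch, nothing specific to this issue):
```python
# Inspect the loaded GLUE task (sst2 here): split names, sizes and columns.
print(dataset)
print(dataset["train"][0])
```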
and now try to classify text
```
import numpy as np
from transformers import AutoTokenizer
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mnli-mm": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
sentence1_key, sentence2_key = task_to_keys[task]
if sentence2_key is None:
print(f"Sentence: {dataset['train'][0][sentence1_key]}")
else:
print(f"Sentence 1: {dataset['train'][0][sentence1_key]}")
print(f"Sentence 2: {dataset['train'][0][sentence2_key]}")
def preprocess_function(examples):
if sentence2_key is None:
return mytokenizer(examples[sentence1_key], truncation=True)
return mytokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)
pre_tokenizer_columns = set(dataset["train"].features)
encoded_dataset = dataset.map(preprocess_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
print("Columns added by tokenizer:", tokenizer_columns)
validation_key = (
"validation_mismatched"
if task == "mnli-mm"
else "validation_matched"
if task == "mnli"
else "validation"
)
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=True,
batch_size=16,
collate_fn=mytokenizer.pad,
)
tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=False,
batch_size=16,
collate_fn=mytokenizer.pad,
)
from transformers import TFAutoModelForSequenceClassification
import tensorflow as tf
num_labels = 3 if task.startswith("mnli") else 1 if task == "stsb" else 2
if task == "stsb":
loss = tf.keras.losses.MeanSquaredError()
num_labels = 1
elif task.startswith("mnli"):
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_labels = 3
else:
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_labels = 2
# note: from_pretrained expects a checkpoint name or path, not a model object
model = TFAutoModelForSequenceClassification.from_pretrained(
    model_checkpoint, num_labels=num_labels
)
from transformers import create_optimizer
num_epochs = 5
batches_per_epoch = len(encoded_dataset["train"]) // batch_size
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps
)
model.compile(optimizer=optimizer, loss=loss)
metric_name = (
"pearson"
if task == "stsb"
else "matthews_correlation"
if task == "cola"
else "accuracy"
)
def compute_metrics(predictions, labels):
if task != "stsb":
predictions = np.argmax(predictions, axis=1)
else:
predictions = predictions[:, 0]
return metric.compute(predictions=predictions, references=labels)
model.fit(
tf_train_dataset,
validation_data=tf_validation_dataset,
epochs=5,
callbacks=tf.keras.callbacks.EarlyStopping(patience=2),
)
predictions = model.predict(tf_validation_dataset)["logits"]
compute_metrics(predictions, np.array(encoded_dataset[validation_key]["label"]))
```

Output and error:

```
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d01ad7112f932f9c.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-de5efda680a1f856.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f3c1e00b7f03ba8.arrow
Sentence: hide new secretions from the parental units
Columns added by tokenizer: ['attention_mask', 'input_ids', 'token_type_ids']
---------------------------------------------------------------------------
VisibleDeprecationWarning Traceback (most recent call last)
<ipython-input-42-6eba4122302c> in <module>()
44 shuffle=True,
45 batch_size=16,
---> 46 collate_fn=mytokenizer.pad,
47 )
48 tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in _arrow_array_to_numpy(self, pa_array)
165 # cast to list of arrays or we end up with a np.array with dtype object
166 array: List[np.ndarray] = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
--> 167 return np.array(array, copy=False, **self.np_array_kwargs)
168
169
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```
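In case it is useful, here is a minimal sketch of a direction I have not verified: using `DataCollatorWithPadding` with `return_tensors="tf"` instead of passing `mytokenizer.pad` directly, on the assumption that padding the batches in the collator sidesteps the ragged-array conversion in the `datasets` formatting code. `tf_collator` is just a name I made up for this sketch.
```python
# Untested sketch: swap collate_fn=mytokenizer.pad for an explicit TF collator.
from transformers import DataCollatorWithPadding

tf_collator = DataCollatorWithPadding(tokenizer=mytokenizer, return_tensors="tf")

tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
    columns=tokenizer_columns,
    label_cols=["label"],
    shuffle=True,
    batch_size=16,
    collate_fn=tf_collator,
)
```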
What do you think? Happy to help if I can.
Thanks!!
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <b>English</b> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
59
60 These models can be applied on:
61
62 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages.
63 * 🖼️ Images, for tasks like image classification, object detection, and segmentation.
64 * 🗣️ Audio, for tasks like speech recognition and audio classification.
65
66 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
67
68 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
69
70 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
71
72 ## Online demos
73
74 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
75
76 Here are a few examples:
77
78 In Natural Language Processing:
79 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
80 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
81 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
82 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
83 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
84 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
85 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
86
87 In Computer Vision:
88 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224)
89 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50)
90 - [Semantic Segmentation with SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
91 - [Panoptic Segmentation with DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic)
92
93 In Audio:
94 - [Automatic Speech Recognition with Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
95 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
96
97 In Multimodal tasks:
98 - [Visual Question Answering with ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
99
100 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
101
102 ## If you are looking for custom support from the Hugging Face team
103
104 <a target="_blank" href="https://huggingface.co/support">
105 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
106 </a><br>
107
108 ## Quick tour
109
110 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
111
112 ```python
113 >>> from transformers import pipeline
114
115 # Allocate a pipeline for sentiment-analysis
116 >>> classifier = pipeline('sentiment-analysis')
117 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
118 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
119 ```
120
121 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
122
123 Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
124
125 ``` python
126 >>> import requests
127 >>> from PIL import Image
128 >>> from transformers import pipeline
129
130 # Download an image with cute cats
131 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
132 >>> image_data = requests.get(url, stream=True).raw
133 >>> image = Image.open(image_data)
134
135 # Allocate a pipeline for object detection
136 >>> object_detector = pipeline('object_detection')
137 >>> object_detector(image)
138 [{'score': 0.9982201457023621,
139 'label': 'remote',
140 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
141 {'score': 0.9960021376609802,
142 'label': 'remote',
143 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
144 {'score': 0.9954745173454285,
145 'label': 'couch',
146 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
147 {'score': 0.9988006353378296,
148 'label': 'cat',
149 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
150 {'score': 0.9986783862113953,
151 'label': 'cat',
152 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
153 ```
154
155 Here we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the right, with the predictions displayed on the left:
156
157 <h3 align="center">
158 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
159 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
160 </h3>
161
162 You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
163
164 In addition to `pipeline`, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
165 ```python
166 >>> from transformers import AutoTokenizer, AutoModel
167
168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
169 >>> model = AutoModel.from_pretrained("bert-base-uncased")
170
171 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
172 >>> outputs = model(**inputs)
173 ```
174
175 And here is the equivalent code for TensorFlow:
176 ```python
177 >>> from transformers import AutoTokenizer, TFAutoModel
178
179 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
180 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
181
182 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
183 >>> outputs = model(**inputs)
184 ```
185
186 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator.
187
188 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
189
190 ## Why should I use transformers?
191
192 1. Easy-to-use state-of-the-art models:
193 - High performance on natural language understanding & generation, computer vision, and audio tasks.
194 - Low barrier to entry for educators and practitioners.
195 - Few user-facing abstractions with just three classes to learn.
196 - A unified API for using all our pretrained models.
197
198 1. Lower compute costs, smaller carbon footprint:
199 - Researchers can share trained models instead of always retraining.
200 - Practitioners can reduce compute time and production costs.
201 - Dozens of architectures with over 60,000 pretrained models across all modalities.
202
203 1. Choose the right framework for every part of a model's lifetime:
204 - Train state-of-the-art models in 3 lines of code.
205 - Move a single model between TF2.0/PyTorch/JAX frameworks at will.
206 - Seamlessly pick the right framework for training, evaluation and production.
207
208 1. Easily customize a model or an example to your needs:
209 - We provide examples for each architecture to reproduce the results published by its original authors.
210 - Model internals are exposed as consistently as possible.
211 - Model files can be used independently of the library for quick experiments.
212
213 ## Why shouldn't I use transformers?
214
215 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
216 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly, [Accelerate](https://huggingface.co/docs/accelerate)).
217 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
218
219 ## Installation
220
221 ### With pip
222
223 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
224
225 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
226
227 First, create a virtual environment with the version of Python you're going to use and activate it.
228
229 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
230 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific install command for your platform.
231
232 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
233
234 ```bash
235 pip install transformers
236 ```
237
238 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
239
240 ### With conda
241
242 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
243
244 🤗 Transformers can be installed using conda as follows:
245
246 ```shell script
247 conda install -c huggingface transformers
248 ```
249
250 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
251
252 ## Model architectures
253
254 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
255
256 Current number of checkpoints: 
257
258 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
259
260 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
261 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
262 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
263 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
264 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
265 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
266 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
267 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
268 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
269 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
270 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
271 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
272 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
273 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
274 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
275 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
276 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
277 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
278 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
279 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
280 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
281 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
282 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
283 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
284 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
285 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
286 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
287 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
288 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
289 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
290 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
291 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
292 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
293 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
294 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
295 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
296 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
297 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
298 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
299 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
300 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
301 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
302 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
303 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
304 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
305 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
306 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
307 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
308 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
309 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
310 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
311 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
312 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
313 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
314 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
315 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
316 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
317 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
318 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
319 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
320 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
321 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
322 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
323 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
324 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
325 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
326 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
327 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
328 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
329 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
330 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
331 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
332 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
333 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
334 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
335 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
336 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
337 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
338 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
339 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
340 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
341 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
342 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/main/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
343 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
344 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
345 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
346 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
347 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
348 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
349 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
350 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
351 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
352 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
353 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
354 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
355 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
356 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
357 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
358 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
359 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
360 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
361 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
362 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
363 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
364 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
365 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
366 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
367 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
368 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
369 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
370 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
371 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
372 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
373 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
374 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
375 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
376 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
377 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
378 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
379 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
380 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
381 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
382 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
383 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
384 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
385 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
386 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
387 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
388 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
389 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
390 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
391 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
392 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
393 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
394 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
395 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
396 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
397
398 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
399
400 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
401
402
403 ## Learn more
404
405 | Section | Description |
406 |-|-|
407 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
408 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
409 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
410 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
411 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
412 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
413 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
414
415 ## Citation
416
417 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
418 ```bibtex
419 @inproceedings{wolf-etal-2020-transformers,
420 title = "Transformers: State-of-the-Art Natural Language Processing",
421 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
422 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
423 month = oct,
424 year = "2020",
425 address = "Online",
426 publisher = "Association for Computational Linguistics",
427 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
428 pages = "38--45"
429 }
430 ```
431
[end of README.md]
[start of README_ko.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <a href="https://github.com/huggingface/transformers/">English</a> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <b>한국어</b>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p>State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models to perform tasks such as classification, information extraction, question answering, summarization, translation and text generation in over 100 languages. Our goal is to make state-of-the-art NLP easy for everyone to use.
59 
60 🤗 Transformers provides APIs to quickly download those pretrained models, use them on your own text, fine-tune them on your own data, and share them with the community or on our [model hub](https://huggingface.co/models). At the same time, each Python module defining an architecture is fully standalone, so it can easily be modified for research experiments.
61 
62 🤗 Transformers is backed by the three most popular deep learning libraries, [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with seamless integration between them. It's straightforward to train your models with one and then load them for inference with another.
63
64 ## Online demos
65 
66 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
67 
68 Here are a few examples:
69 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
70 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
71 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
72 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
73 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
74 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
75 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
76
77 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repository's text generation capabilities.
78
79 ## If you are looking for custom support from the Hugging Face team
80
81 <a target="_blank" href="https://huggingface.co/support">
82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
83 </a><br>
84
85 ## Quick tour
86 
87 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is a quick example of using a pipeline to classify positive versus negative texts:
88
89 ```python
90 >>> from transformers import pipeline
91
92 # Allocate a pipeline for sentiment-analysis
93 >>> classifier = pipeline('sentiment-analysis')
94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
96 ```
97
98 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the model judged the text to be positive with a confidence of 99.97%.
99 
100 Many NLP tasks can be performed with a ready-to-use `pipeline`. For example, given a question and a context, we can easily extract the answer:
101
102 ``` python
103 >>> from transformers import pipeline
104
105 # Allocate a pipeline for question-answering
106 >>> question_answerer = pipeline('question-answering')
107 >>> question_answerer({
108 ... 'question': 'What is the name of the repository ?',
109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
110 ... })
111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
112
113 ```
114
115 On top of the answer, the pretrained model used here also returned its confidence score and the start and end positions of the answer in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
116
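Switching tasks is just a matter of passing a different task name to `pipeline`. As a hedged illustration (the generated text and scores depend on the default checkpoint downloaded at run time, so the output shown is only a placeholder), here is a text generation pipeline used in exactly the same way:

```python
>>> from transformers import pipeline

# Allocate a pipeline for text generation; the default model is downloaded on first use
>>> generator = pipeline('text-generation')
>>> generator("In this course, we will teach you how to", max_length=30)
[{'generated_text': '...'}]  # illustrative output only; the actual text varies between runs
```
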
117 To download and use any of the pretrained models on your given task, it only takes three lines of code. Here is the PyTorch version:
118 ```python
119 >>> from transformers import AutoTokenizer, AutoModel
120
121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
122 >>> model = AutoModel.from_pretrained("bert-base-uncased")
123
124 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
125 >>> outputs = model(**inputs)
126 ```
127 And here is the equivalent code for TensorFlow:
128 ```python
129 >>> from transformers import AutoTokenizer, TFAutoModel
130
131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
133
134 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
135 >>> outputs = model(**inputs)
136 ```
137
138 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the examples above) or a list. It outputs a dictionary that you can use in downstream code or simply pass directly to your model using the ** argument unpacking operator.
139
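As a minimal sketch of that batching behaviour (assuming PyTorch is installed; the checkpoint name and sentences are only examples), the tokenizer can pad a list of sentences to the same length, and the resulting dictionary can be unpacked straight into the model call:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# A list of strings is tokenized in one call; padding=True aligns the sequences
>>> batch = tokenizer(["Hello world!", "A slightly longer sentence."], padding=True, return_tensors="pt")
>>> outputs = model(**batch)  # ** unpacks the dictionary into keyword arguments
```
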
140 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
141
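As a minimal sketch of the `Trainer` route (assuming PyTorch and 🤗 Transformers are installed; the two-sentence dataset, checkpoint name, output directory and hyperparameters are placeholders chosen only for illustration, not a recommended setup):

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# A classification head is added on top of the pretrained encoder
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Two hand-written sentences stand in for a real training corpus
texts, labels = ["I love this!", "This is terrible."], [1, 0]
encodings = tokenizer(texts, truncation=True, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    """Wraps the tokenized sentences in the per-example dict format the Trainer expects."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# Illustrative hyperparameters only; tune them for a real task
training_args = TrainingArguments(output_dir="my_finetuned_model", num_train_epochs=1)
trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(encodings, labels))
trainer.train()
```
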
142 ## Why should I use transformers?
143
144 1. Easy-to-use state-of-the-art models:
145   - High performance on NLU and NLG tasks.
146   - Low barrier to entry for educators and practitioners.
147   - Few user-facing abstractions with just three classes to learn.
148   - A unified API for using all our pretrained models.
149 
150 1. Lower compute costs, smaller carbon footprint:
151   - Researchers can share trained models instead of always retraining them.
152   - Practitioners can reduce compute time and production costs.
153   - Dozens of architectures, more than 2,000 pretrained models, some trained in more than 100 languages.
154 
155 1. Choose the right framework for every part of a model's lifetime:
156   - Train state-of-the-art models in 3 lines of code.
157   - Freely convert a model between the TF2.0 and PyTorch frameworks.
158   - Pick the framework you want for training, evaluation and production.
159 
160 1. Customize a model or an example to your needs:
161   - We provide examples for each architecture to reproduce the results published by its original authors.
162   - Model internals are exposed as consistently as possible.
163   - Model files can be used independently of the library for quick experiments.
164
165 ## Why shouldn't I use transformers?
166 
167 - This library is not a modular toolbox of building blocks for neural networks. The code in the model files is deliberately kept at a moderate level of abstraction, so that researchers can quickly iterate on each model without having to dig through additional files.
168 - The training API is not intended to work on any model; it is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library.
169 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. They may not work out of the box on your specific problem, and you may need to change a few lines of code to adapt them to your needs.
170
171 ## Installation
172 
173 ### With pip
174 
175 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
176 
177 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
178 
179 First, create a virtual environment with the version of Python you're going to use and activate it.
180 
181 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
182 Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or the [Flax installation page](https://github.com/google/flax#quick-install) for the install command specific to your platform.
183 
184 When at least one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
185
186 ```bash
187 pip install transformers
188 ```
189
190 If you'd like to play with the examples, need the bleeding edge of the code, or can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
191
192 ### With conda
193 
194 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
195 
196 🤗 Transformers can be installed using conda as follows:
197
198 ```bash
199 conda install -c huggingface transformers
200 ```
201
202 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
203
204 ## Model architectures
205 
206 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated with the huggingface.co [model hub](https://huggingface.co), where they can be uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
207 
208 Current number of checkpoints: 
209 
210 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
211
212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
216 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
218 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
219 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
220 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
221 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
224 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
225 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
226 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
227 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
228 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
229 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
230 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
231 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
232 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
233 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
234 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
235 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
236 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
237 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
238 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
239 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
240 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
241 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
242 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
243 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
244 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
245 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
246 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
247 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
248 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
249 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
250 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
251 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
252 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
253 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
254 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
255 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
256 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
257 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
258 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
259 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
260 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
261 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
262 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
263 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
264 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
265 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
266 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
267 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
268 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
269 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
270 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
271 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
272 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
273 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
274 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
275 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
276 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
277 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
278 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
279 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
280 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
281 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
282 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
283 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
284 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
285 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
286 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
287 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
288 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
289 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
290 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
291 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
292 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
293 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
294 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/main/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
295 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
296 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
297 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
298 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
299 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
300 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
301 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
302 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
303 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
304 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
305 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
306 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
307 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
308 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
309 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
310 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
311 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
312 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
313 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
314 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
315 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
316 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
317 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
318 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
319 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
320 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
321 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
322 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
323 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
324 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
325 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
326 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
327 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
328 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
329 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
330 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
331 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
332 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
333 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
334 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
335 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
336 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
337 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
338 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
339 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
340 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
341 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
342 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
343 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
344 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
345 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
346 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
347 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
348 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다.
349
350 각 모델이 Flax, PyTorch, TensorFlow으로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요.
351
352 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. [도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다.
353
354 ## 더 알아보기
355
356 | 섹션 | 설명 |
357 |-|-|
358 | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 |
359 | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 |
360 | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 |
361 | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델 PyTorch/TensorFlow 학습 과정과 `Trainer` API에서 사용하기 |
362 | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/main/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 |
363 | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 |
364 | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기|
365
366 ## 인용
367
368 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요:
369 ```bibtex
370 @inproceedings{wolf-etal-2020-transformers,
371 title = "Transformers: State-of-the-Art Natural Language Processing",
372 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
373 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
374 month = oct,
375 year = "2020",
376 address = "Online",
377 publisher = "Association for Computational Linguistics",
378 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
379 pages = "38--45"
380 }
381 ```
382
[end of README_ko.md]
[start of README_zh-hans.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多种语言; 使用 transformers 库。
20 - Use square quotes, e.g.,「引用」
21
22 Dictionary
23
24 Hugging Face: 抱抱脸
25 token: 词符(并用括号标注原英文)
26 tokenize: 词符化(并用括号标注原英文)
27 tokenizer: 词符化器(并用括号标注原英文)
28 transformer: transformer(不翻译)
29 pipeline: 流水线
30 API: API (不翻译)
31 inference: 推理
32 Trainer: 训练器。当作为类名出现时不翻译。
33 pretrained/pretrain: 预训练
34 finetune: 微调
35 community: 社区
36 example: 当特指仓库中 example 目录时翻译为「用例」
37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文
38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理
39 checkpoint: 检查点
40 -->
41
42 <p align="center">
43 <br>
44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
45 <br>
46 <p>
47 <p align="center">
48 <a href="https://circleci.com/gh/huggingface/transformers">
49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
50 </a>
51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
53 </a>
54 <a href="https://huggingface.co/docs/transformers/index">
55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
56 </a>
57 <a href="https://github.com/huggingface/transformers/releases">
58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
59 </a>
60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
62 </a>
63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
64 </p>
65
66 <h4 align="center">
67 <p>
68 <a href="https://github.com/huggingface/transformers/">English</a> |
69 <b>简体中文</b> |
70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
72 <p>
73 </h4>
74
75 <h3 align="center">
76 <p>为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理</p>
77 </h3>
78
79 <h3 align="center">
80 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
81 </h3>
82
83 🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨是让最先进的 NLP 技术人人易用。
84
85 🤗 Transformers 提供了便于快速下载和使用的 API,让你可以把预训练模型用在给定文本上、在你的数据集上微调,然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。
86
87 🤗 Transformers 支持三个最热门的深度学习库: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 和 [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型然后用另一个加载和推理。
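
下面是一个最小的示意(假设本地已同时安装 PyTorch 和 TensorFlow,目录名 `./my-bert` 只是示例):先在 PyTorch 侧保存检查点,再在 TensorFlow 侧用 `from_pt=True` 加载同一份权重。

```python
>>> from transformers import AutoModel, TFAutoModel

>>> pt_model = AutoModel.from_pretrained("bert-base-uncased")           # PyTorch 模型
>>> pt_model.save_pretrained("./my-bert")                               # 保存为本地检查点
>>> tf_model = TFAutoModel.from_pretrained("./my-bert", from_pt=True)   # 用 TensorFlow 加载同一份权重
```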
88
89 ## 在线演示
90
91 你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。
92
93 这里是一些例子:
94 - [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
95 - [用 Electra 做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
96 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
97 - [用 RoBERTa 做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
98 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
99 - [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
100 - [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
101
102 **[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。
103
104 ## 如果你在寻找由抱抱脸团队提供的定制化支持服务
105
106 <a target="_blank" href="https://huggingface.co/support">
107 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
108 </a><br>
109
110 ## 快速上手
111
112 我们为快速使用模型提供了 `pipeline` (流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子:
113
114 ```python
115 >>> from transformers import pipeline
116
117 # 使用情绪分析流水线
118 >>> classifier = pipeline('sentiment-analysis')
119 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
120 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
121 ```
122
123 第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有 99.97% 的置信度。
124
125 许多的 NLP 任务都有开箱即用的预训练流水线。比如说,我们可以轻松的从给定文本中抽取问题答案:
126
127 ``` python
128 >>> from transformers import pipeline
129
130 # 使用问答流水线
131 >>> question_answerer = pipeline('question-answering')
132 >>> question_answerer({
133 ... 'question': 'What is the name of the repository ?',
134 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
135 ... })
136 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
137
138 ```
139
140 除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线 API 支持的任务。
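
例如,沿用上面的 `question_answerer` 流水线,可以按返回的 `start`/`end` 位置回到原文核对答案;从上面的输出可以看到,这两个位置正好框出原文中的 `huggingface/transformers`(仅作示意):

```python
>>> context = 'Pipeline has been included in the huggingface/transformers repository'
>>> result = question_answerer(question='What is the name of the repository ?', context=context)
>>> context[result['start']:result['end']]   # 用 start/end 在原文中定位答案
'huggingface/transformers'
```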
141
142 要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例:
143 ```python
144 >>> from transformers import AutoTokenizer, AutoModel
145
146 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
147 >>> model = AutoModel.from_pretrained("bert-base-uncased")
148
149 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
150 >>> outputs = model(**inputs)
151 ```
152 这里是等效的 TensorFlow 代码:
153 ```python
154 >>> from transformers import AutoTokenizer, TFAutoModel
155
156 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
157 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
158
159 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
160 >>> outputs = model(**inputs)
161 ```
162
163 词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。
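
下面是一个对列表调用的小示意(沿用上面 PyTorch 例子中加载的 `tokenizer` 和 `model`,句子内容只是示例):开启填充与截断后,词符化器返回的词典可以直接用 `**` 解包传给模型。

```python
>>> batch = tokenizer(
...     ["Hello world!", "Using transformers is easy."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> outputs = model(**batch)            # 词典被 ** 解包为模型的关键字参数
>>> outputs.last_hidden_state.shape     # (批大小, 序列长度, 隐藏维度)
```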
164
165 模型本身是一个常规的 [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以按常规方式使用。[这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer`(训练器)API 来在一个新的数据集上快速微调。
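
作为示意(不是完整的训练脚本,模型名与数据只是示例),下面用常规的 PyTorch 写法对一个序列分类模型做一步梯度更新;实际使用时请换成你自己的数据集并循环多个批次,或者直接改用 `Trainer`:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

>>> inputs = tokenizer(["I love this!", "This is awful."], padding=True, return_tensors="pt")
>>> labels = torch.tensor([1, 0])

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
>>> loss = model(**inputs, labels=labels).loss            # 序列分类模型会自动计算交叉熵损失
>>> loss.backward()
>>> optimizer.step()
```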
166
167 ## 为什么要用 transformers?
168
169 1. 便于使用的先进模型:
170 - NLU 和 NLG 上表现优越
171 - 对教学和实践友好且低门槛
172 - 高级抽象,只需了解三个类
173 - 对所有模型统一的API
174
175 1. 更低计算开销,更少的碳排放:
176 - 研究人员可以分享已训练的模型而非每次从头开始训练
177 - 工程师可以减少计算用时和生产环境开销
178 - 数十种模型架构、两千多个预训练模型、100多种语言支持
179
180 1. 对于模型生命周期的每一个部分都面面俱到:
181 - 训练先进的模型,只需 3 行代码
182 - 模型在不同深度学习框架间任意转移,随你心意
183 - 为训练、评估和生产选择最适合的框架,衔接无缝
184
185 1. 为你的需求轻松定制专属模型和用例:
186 - 我们为每种模型架构提供了多个用例来复现原论文结果
187 - 模型内部结构保持透明一致
188 - 模型文件可单独使用,方便魔改和快速实验
189
190 ## 什么情况下我不该用 transformers?
191
192 - 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。
193 - `Trainer` API 并非兼容任何模型,只为本库之模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。
194 - 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/main/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。
195
196 ## 安装
197
198 ### 使用 pip
199
200 这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。
201
202 你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
203
204 首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。
205
206 然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。
207
208 当这些后端之一安装成功后, 🤗 Transformers 可依此安装:
209
210 ```bash
211 pip install transformers
212 ```
213
214 如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。
215
216 ### 使用 conda
217
218 自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。
219
220 🤗 Transformers 可以通过 conda 依此安装:
221
222 ```bash
223 conda install -c huggingface transformers
224 ```
225
226 要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。
227
228 ## 模型架构
229
230 🤗 Transformers 支持的[**所有的模型检查点**](https://huggingface.co/models)由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。
231
232 目前的检查点数量: 
233
234 🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)):
235
236 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。
237 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。
238 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。
239 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。
240 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。
241 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。
242 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
243 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。
244 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
245 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
246 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
247 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
248 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
249 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。
250 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。
251 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。
252 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。
253 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。
254 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。
255 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。
256 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。
257 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。
258 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。
259 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。
260 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。
261 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
262 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
263 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。
264 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。
265 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。
266 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。
267 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。
268 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。
269 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (来自 NAVER) 伴随论文 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 由 Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 发布。
270 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。
271 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。
272 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。
273 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
274 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。
275 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。
276 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。
277 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。
278 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。
279 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。
280 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布,作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy。
281 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
282 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。
283 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 随仓库 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。
284 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。
285 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
286 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。
287 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。
288 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。
289 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。
290 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。
291 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。
292 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
293 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。
294 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
295 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。
296 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。
297 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。
298 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。
299 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。
300 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。
301 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
302 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。
303 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。
304 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
305 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
306 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。
307 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。
308 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。
309 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。
310 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。
311 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。
312 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。
313 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。
314 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。
315 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。
316 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。
317 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。
318 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/main/model_doc/pegasus_x)** (来自 Google) 伴随论文 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 由 Jason Phang, Yao Zhao, Peter J. Liu 发布。
319 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。
320 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。
321 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。
322 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。
323 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
324 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。
325 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。
326 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。
327 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。
328 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
329 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。
330 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
331 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。
332 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。
333 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。
334 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
335 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
336 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。
337 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。
338 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。
339 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。
340 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。
341 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (来自 Microsoft) 伴随论文 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 由 Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 发布。
342 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
343 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
344 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。
345 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。
346 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
347 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。
348 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。
349 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
350 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。
351 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。
352 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。
353 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (来自 Multimedia Computing Group, Nanjing University) 伴随论文 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 由 Zhan Tong, Yibing Song, Jue Wang, Limin Wang 发布。
354 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。
355 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。
356 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。
357 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。
358 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。
359 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。
360 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。
361 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
362 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
363 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。
364 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
365 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。
366 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。
367 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。
368 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。
369 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。
370 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。
371 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。
372 1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到他们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。
373
374 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。
375
376 这些实现均已于多个数据集测试(请参看用例脚本)并应与原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。
377
378
379 ## 了解更多
380
381 | 章节 | 描述 |
382 |-|-|
383 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 |
384 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 |
385 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 |
386 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 |
387 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 |
388 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 上传你微调的模型并与社区分享 |
389 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers |
390
391 ## 引用
392
393 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用:
394 ```bibtex
395 @inproceedings{wolf-etal-2020-transformers,
396 title = "Transformers: State-of-the-Art Natural Language Processing",
397 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
398 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
399 month = oct,
400 year = "2020",
401 address = "Online",
402 publisher = "Association for Computational Linguistics",
403 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
404 pages = "38--45"
405 }
406 ```
407
[end of README_zh-hans.md]
[start of README_zh-hant.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。
20 - Use square quotes, e.g.,「引用」
21 - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
22
23 Dictionary
24
25 API: API (不翻譯)
26 add: 加入
27 checkpoint: 檢查點
28 code: 程式碼
29 community: 社群
30 confidence: 信賴度
31 dataset: 資料集
32 documentation: 文件
33 example: 基本翻譯為「範例」,或依語意翻為「例子」
34 finetune: 微調
35 Hugging Face: Hugging Face(不翻譯)
36 implementation: 實作
37 inference: 推論
38 library: 函式庫
39 module: 模組
40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理
41 online demos: 線上Demo
42 pipeline: pipeline(不翻譯)
43 pretrained/pretrain: 預訓練
44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文
45 repository: repository(不翻譯)
46 summary: 概覽
47 token-: token-(不翻譯)
48 Trainer: Trainer(不翻譯)
49 transformer: transformer(不翻譯)
50 tutorial: 教學
51 user: 使用者
52 -->
53
54 <p align="center">
55 <br>
56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
57 <br>
58 <p>
59 <p align="center">
60 <a href="https://circleci.com/gh/huggingface/transformers">
61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
62 </a>
63 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
65 </a>
66 <a href="https://huggingface.co/docs/transformers/index">
67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
68 </a>
69 <a href="https://github.com/huggingface/transformers/releases">
70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
71 </a>
72 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
74 </a>
75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
76 </p>
77
78 <h4 align="center">
79 <p>
80 <a href="https://github.com/huggingface/transformers/">English</a> |
81 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
82 <b>繁體中文</b> |
83 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
84 <p>
85 </h4>
86
87 <h3 align="center">
88 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p>
89 </h3>
90
91 <h3 align="center">
92 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
93 </h3>
94
95 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。
96
97 🤗 Transformers 提供了便於快速下載和使用的 API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。
98
99 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。
100
101 ## 線上Demo
102
103 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。
104
105 這裡是一些範例:
106 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
107 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
108 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
109 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
110 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
111 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
112 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
113
114 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。
115
116 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務
117
118 <a target="_blank" href="https://huggingface.co/support">
119 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
120 </a><br>
121
122 ## 快速上手
123
124 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子:
125
126 ```python
127 >>> from transformers import pipeline
128
129 # 使用情緒分析 pipeline
130 >>> classifier = pipeline('sentiment-analysis')
131 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
132 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
133 ```
134
135 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案「正面」 (positive) 具有 99.97% 的信賴度。
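
順帶一提,`pipeline` 也可以一次接收多個句子:傳入一個串列 (list) 時,會回傳每個句子各自的標籤與信賴度分數。下面是一個簡單的示意(僅供說明,沿用上面建立的情緒分析 `classifier`):

```python
>>> # 傳入串列 (list) 會得到逐句的結果串列
>>> classifier([
...     "We are very happy to introduce pipeline to the transformers repository.",
...     "We hope you don't hate it.",
... ])
```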
136
137 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案:
138
139 ```python
140 >>> from transformers import pipeline
141
142 # 使用問答 pipeline
143 >>> question_answerer = pipeline('question-answering')
144 >>> question_answerer({
145 ... 'question': 'What is the name of the repository ?',
146 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
147 ... })
148 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
149
150 ```
151
152 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API 支援的任務。
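
除了使用各任務的預設模型,你也可以在建立 `pipeline` 時指定任務名稱與特定的模型檢查點。下面是一個簡單的示意(以文本生成任務和 `gpt2` 檢查點為例,參數僅供參考;可用的模型請到 [model hub](https://huggingface.co/models) 查詢):

```python
>>> from transformers import pipeline

>>> # 指定任務名稱與模型檢查點(此處以 gpt2 為例,僅為示意)
>>> generator = pipeline("text-generation", model="gpt2")
>>> generator("A long time ago,", max_length=20)
```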
153
154 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例:
155 ```python
156 >>> from transformers import AutoTokenizer, AutoModel
157
158 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
159 >>> model = AutoModel.from_pretrained("bert-base-uncased")
160
161 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
162 >>> outputs = model(**inputs)
163 ```
164 這裡是對應的 TensorFlow 程式碼:
165 ```python
166 >>> from transformers import AutoTokenizer, TFAutoModel
167
168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
169 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
170
171 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
172 >>> outputs = model(**inputs)
173 ```
174
175 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個字典 (dict),讓你可以在下游程式碼裡使用,或直接藉由 `**` 運算式傳給模型。
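
舉例來說,下面這段示意程式碼沿用上面 PyTorch 範例中的 `tokenizer` 與 `model`,示範如何一次處理一個字串串列 (list);其中的 `padding` 與 `truncation` 參數(此處僅作示意)會把長短不一的句子補齊或截斷成相同長度,方便組成批次:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

>>> # 直接傳入串列 (list);回傳的字典 (dict) 可藉由 ** 運算式直接傳給模型
>>> batch = tokenizer(
...     ["Hello world!", "Transformers is great."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> outputs = model(**batch)
```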
176
177 模型本身是一個常規的 [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。
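
下面是一個使用 `Trainer` API 微調的最小示意(假設另外安裝了 🤗 Datasets 函式庫,並以公開的 imdb 資料集與 bert-base-uncased 檢查點為例;其中的資料集、超參數與輸出路徑皆僅供參考,完整用法請以上述教學為準):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# 只取一小部分資料作為示意
raw_dataset = load_dataset("imdb", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")


def tokenize(batch):
    # padding="max_length" 讓每個樣本長度一致,方便直接組成批次
    return tokenizer(batch["text"], padding="max_length", truncation=True)


tokenized_dataset = raw_dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
training_args = TrainingArguments(
    output_dir="test_trainer",
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_dataset)
trainer.train()
```

這裡的 `TrainingArguments` 集中管理輸出路徑與訓練超參數,是 `Trainer` API 的慣用寫法;若偏好 TensorFlow,對應的 TF 模型也可以直接搭配 Keras 的訓練流程使用。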
178
179 ## 為什麼要用 transformers?
180
181 1. 便於使用的先進模型:
182 - NLU 和 NLG 上性能卓越
183 - 對教學和實作友好且低門檻
184 - 高度抽象,使用者只須學習 3 個類別
185 - 對所有模型使用的制式化API
186
187 1. 更低的運算成本,更少的碳排放:
188 - 研究人員可以分享已訓練的模型而非每次從頭開始訓練
189 - 工程師可以減少計算時間以及生產成本
190 - 數十種模型架構、兩千多個預訓練模型、100多種語言支援
191
192 1. 對於模型生命週期的每一個部分都面面俱到:
193 - 訓練先進的模型,只需 3 行程式碼
194 - 模型可以在不同深度學習框架之間任意轉換
195 - 為訓練、評估和生產選擇最適合的框架,並完美銜接
196
197 1. 為你的需求輕鬆客製化專屬模型和範例:
198 - 我們為每種模型架構提供了多個範例來重現原論文結果
199 - 一致的模型內部架構
200 - 模型檔案可單獨使用,便於修改和快速實驗
201
202 ## 什麼情況下我不該用 transformers?
203
204 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。
205 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。
206 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/main/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。
207
208 ## 安裝
209
210 ### 使用 pip
211
212 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。
213
214 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
215
216 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。
217
218 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。
219
220 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝:
221
222 ```bash
223 pip install transformers
224 ```
225
226 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。
227
228 ### 使用 conda
229
230 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。
231
232 🤗 Transformers 可以藉由 conda 依此安裝:
233
234 ```shell script
235 conda install -c huggingface transformers
236 ```
237
238 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。
239
240 ## 模型架構
241
242 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。
243
244 目前的檢查點數量: 
245
246 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)):
247
248 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
249 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
250 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
251 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
252 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
253 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
254 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
255 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
256 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
257 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
258 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
259 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
260 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
261 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
262 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
263 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
264 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
265 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
266 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
267 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
268 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
269 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
270 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
271 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
272 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
273 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
274 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
275 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
276 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
277 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
278 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
279 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
280 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
281 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
282 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
283 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
284 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
285 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
286 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
287 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
288 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
289 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
290 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
291 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
292 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
293 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
294 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
295 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
296 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
297 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
298 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
299 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
300 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
301 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
302 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
303 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
304 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
305 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
306 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
307 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
308 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
309 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
310 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
311 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
312 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
313 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
314 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
315 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
316 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
317 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
318 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
319 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
320 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
321 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
322 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
323 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
324 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
325 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
326 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
327 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
328 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
329 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
330 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/main/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
331 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
332 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
333 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
334 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
335 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
336 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
337 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
338 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
339 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
340 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
341 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
342 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
343 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
344 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
345 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
346 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
347 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
348 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
349 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
350 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
351 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
352 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
353 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
354 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
355 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released with the paper [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
356 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
357 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
358 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
359 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
360 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
361 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
362 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
363 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
364 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
365 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
366 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
367 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
368 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
369 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
370 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
371 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
372 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
373 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
374 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
375 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
376 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
377 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
378 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
379 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
380 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
381 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
382 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
383 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
384 1. Want to contribute a new model? We have a **detailed guide and templates** to walk you through adding a new model. You can find them in the [`templates`](./templates) directory. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and reach out to the maintainers or open an issue to collect feedback before starting your PR.
385
386 To check whether a model already has an implementation in Flax, PyTorch or TensorFlow, or whether it has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
387
388 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the [Examples](https://huggingface.co/docs/transformers/examples) section of the documentation.
389
390
391 ## Learn more
392
393 | Section | Description |
394 |-|-|
395 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
396 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | The tasks supported by 🤗 Transformers |
397 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
398 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
399 | [Quick tour: Fine-tuning and example scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
400 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
401 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
402
403 ## Citation
404
405 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) that you can cite for the 🤗 Transformers library. If you use 🤗 Transformers, please cite:
406 ```bibtex
407 @inproceedings{wolf-etal-2020-transformers,
408 title = "Transformers: State-of-the-Art Natural Language Processing",
409 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
410 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
411 month = oct,
412 year = "2020",
413 address = "Online",
414 publisher = "Association for Computational Linguistics",
415 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
416 pages = "38--45"
417 }
418 ```
419
[end of README_zh-hant.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
7d5fde991d598370d961be8cb7add6541e2b59ce
|
bug in transformers notebook (training from scratch)?
Hello there!
First of all, I cannot thank @Rocketknight1 enough for the amazing work he has been doing to create `tensorflow` versions of the notebooks. On my side, I have spent some time and money (colab pro) trying to tie the notebooks together to create a full classifier from scratch with the following steps:
1. train the tokenizer
2. train the language model
3. train de classification head.
Unfortunately, I run into two issues. You can use the fully working notebook pasted below.
First issue: by training my own tokenizer I actually get a `perplexity` (225) that is way worse than in the example shown at https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/language_modeling-tf.ipynb when using
```
model_checkpoint = "bert-base-uncased"
datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
```
This is puzzling as the tokenizer should be fine-tuned to the data used in the original tf2 notebook!
Second, there seems to be some **Python issue** when I try to fine-tune the language model I obtained above with a text classification head.
Granted, the `tokenizer` and the underlying `language model` have been trained on another dataset (the Wikipedia dataset from the previous two tf2 notebooks, that is). See https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/text_classification-tf.ipynb. However, I should at least get some valid output! Here the model is complaining about some collate function.
Could you please have a look @sgugger @LysandreJik @Rocketknight1 when you can? I would be very happy to contribute this notebook to the Hugging Face community (although most of the credit goes to @Rocketknight1). There is a great demand for building language models and NLP tasks from scratch.
Thanks!!!!
Code below
---
get the most recent versions
```
!pip install git+https://github.com/huggingface/datasets.git
!pip install transformers
```
train tokenizer from scratch
```
from datasets import load_dataset
dataset = load_dataset("wikitext", name="wikitext-2-raw-v1", split="train")
batch_size = 1000
def batch_iterator():
for i in range(0, len(dataset), batch_size):
yield dataset[i : i + batch_size]["text"]
all_texts = [dataset[i : i + batch_size]["text"] for i in range(0, len(dataset), batch_size)]
from tokenizers import decoders, models, normalizers, pre_tokenizers, processors, trainers, Tokenizer
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.normalizer = normalizers.BertNormalizer(lowercase=True)
tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()
special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
trainer = trainers.WordPieceTrainer(vocab_size=25000, special_tokens=special_tokens)
tokenizer.train_from_iterator(batch_iterator(), trainer=trainer)
cls_token_id = tokenizer.token_to_id("[CLS]")
sep_token_id = tokenizer.token_to_id("[SEP]")
print(cls_token_id, sep_token_id)
tokenizer.post_processor = processors.TemplateProcessing(
single=f"[CLS]:0 $A:0 [SEP]:0",
pair=f"[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1",
special_tokens=[
("[CLS]", cls_token_id),
("[SEP]", sep_token_id),
],
)
tokenizer.decoder = decoders.WordPiece(prefix="##")
from transformers import BertTokenizerFast
mytokenizer = BertTokenizerFast(tokenizer_object=tokenizer)
```
masked language model from scratch using my own tokenizer `mytokenizer`
```
model_checkpoint = "bert-base-uncased"
datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
def tokenize_function(examples):
return mytokenizer(examples["text"], truncation=True)
tokenized_datasets = datasets.map(
tokenize_function, batched=True, num_proc=4, remove_columns=["text"]
)
block_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding if the model supported it
    # instead of this drop. You can customize this part to your needs.
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
batch_size=1000,
num_proc=4,
)
from transformers import TFAutoModelForMaskedLM
model = TFAutoModelForMaskedLM.from_pretrained(model_checkpoint)
from transformers import create_optimizer, AdamWeightDecay
import tensorflow as tf
optimizer = AdamWeightDecay(lr=2e-5, weight_decay_rate=0.01)
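# When "labels" are present in the batch, the model computes its masked-LM loss
# internally and returns it as the "loss" output; the dummy loss below simply
# forwards that precomputed value to the optimizer.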
def dummy_loss(y_true, y_pred):
return tf.reduce_mean(y_pred)
model.compile(optimizer=optimizer, loss={"loss": dummy_loss})
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=mytokenizer, mlm_probability=0.15, return_tensors="tf"
)
train_set = lm_datasets["train"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
validation_set = lm_datasets["validation"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=False,
batch_size=16,
collate_fn=data_collator,
)
model.fit(train_set, validation_data=validation_set, epochs=1)
import math
eval_results = model.evaluate(validation_set)[0]
print(f"Perplexity: {math.exp(eval_results):.2f}")
```
and fine-tune a classification task
```
GLUE_TASKS = [
"cola",
"mnli",
"mnli-mm",
"mrpc",
"qnli",
"qqp",
"rte",
"sst2",
"stsb",
"wnli",
]
task = "sst2"
batch_size = 16
from datasets import load_dataset, load_metric
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset("glue", actual_task)
metric = load_metric("glue", actual_task)
```
and now try to classify text
```
from transformers import AutoTokenizer
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mnli-mm": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
sentence1_key, sentence2_key = task_to_keys[task]
if sentence2_key is None:
print(f"Sentence: {dataset['train'][0][sentence1_key]}")
else:
print(f"Sentence 1: {dataset['train'][0][sentence1_key]}")
print(f"Sentence 2: {dataset['train'][0][sentence2_key]}")
def preprocess_function(examples):
if sentence2_key is None:
return mytokenizer(examples[sentence1_key], truncation=True)
return mytokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)
pre_tokenizer_columns = set(dataset["train"].features)
encoded_dataset = dataset.map(preprocess_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
print("Columns added by tokenizer:", tokenizer_columns)
validation_key = (
"validation_mismatched"
if task == "mnli-mm"
else "validation_matched"
if task == "mnli"
else "validation"
)
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=True,
batch_size=16,
collate_fn=mytokenizer.pad,
)
tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=False,
batch_size=16,
collate_fn=mytokenizer.pad,
)
from transformers import TFAutoModelForSequenceClassification
import tensorflow as tf
num_labels = 3 if task.startswith("mnli") else 1 if task == "stsb" else 2
if task == "stsb":
loss = tf.keras.losses.MeanSquaredError()
num_labels = 1
elif task.startswith("mnli"):
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_labels = 3
else:
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_labels = 2
model = TFAutoModelForSequenceClassification.from_pretrained(
model, num_labels=num_labels
)
from transformers import create_optimizer
num_epochs = 5
batches_per_epoch = len(encoded_dataset["train"]) // batch_size
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps
)
model.compile(optimizer=optimizer, loss=loss)
metric_name = (
"pearson"
if task == "stsb"
else "matthews_correlation"
if task == "cola"
else "accuracy"
)
import numpy as np
def compute_metrics(predictions, labels):
if task != "stsb":
predictions = np.argmax(predictions, axis=1)
else:
predictions = predictions[:, 0]
return metric.compute(predictions=predictions, references=labels)
model.fit(
tf_train_dataset,
validation_data=tf_validation_dataset,
epochs=5,
callbacks=tf.keras.callbacks.EarlyStopping(patience=2),
)
predictions = model.predict(tf_validation_dataset)["logits"]
compute_metrics(predictions, np.array(encoded_dataset[validation_key]["label"]))
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d01ad7112f932f9c.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-de5efda680a1f856.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f3c1e00b7f03ba8.arrow
Sentence: hide new secretions from the parental units
Columns added by tokenizer: ['attention_mask', 'input_ids', 'token_type_ids']
---------------------------------------------------------------------------
VisibleDeprecationWarning Traceback (most recent call last)
<ipython-input-42-6eba4122302c> in <module>()
44 shuffle=True,
45 batch_size=16,
---> 46 collate_fn=mytokenizer.pad,
47 )
48 tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in _arrow_array_to_numpy(self, pa_array)
165 # cast to list of arrays or we end up with a np.array with dtype object
166 array: List[np.ndarray] = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
--> 167 return np.array(array, copy=False, **self.np_array_kwargs)
168
169
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```
What do you think? Happy to help if I can
Thanks!!
|
For the first issue: you are training a new model from scratch versus fine-tuning one that has been pretrained on way more data, so it's completely normal that the latter wins. As for the second one, I'm not sure you can directly use the `tokenizer.pad` method as a collation function.
Note that since you are copying the error messages, you should expand the intermediate frames so we can see where the error comes from.
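For reference, here is a minimal sketch of that suggestion, assuming the same `mytokenizer`, `encoded_dataset` and `tokenizer_columns` objects defined in the notebook above: instead of passing `mytokenizer.pad` directly, wrap the tokenizer in a `DataCollatorWithPadding` (analogous to the `DataCollatorForLanguageModeling` used in the language-modeling step) and pass that collator as `collate_fn`.

```python
from transformers import DataCollatorWithPadding

# Pads every batch to a common length and returns TensorFlow tensors,
# mirroring the MLM collator used earlier in the notebook.
data_collator = DataCollatorWithPadding(tokenizer=mytokenizer, return_tensors="tf")

tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
    columns=tokenizer_columns,
    label_cols=["label"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,  # instead of collate_fn=mytokenizer.pad
)
```

The same collator can be reused for the validation split as well.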
Thanks @sgugger, could you please clarify what you mean by
> As for the second one, I'm not sure you can directly use the tokenizer.pad method as a collation function.
The call
```
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=True,
batch_size=16,
collate_fn=mytokenizer.pad,
```
comes directly from the official tf2 notebook https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/text_classification-tf.ipynb
expanded error here, thanks!
```
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d01ad7112f932f9c.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-de5efda680a1f856.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f3c1e00b7f03ba8.arrow
Sentence: hide new secretions from the parental units
{'input_ids': [[2, 11384, 1363, 3215, 1325, 1218, 1125, 10341, 1139, 3464, 3], [2, 4023, 1491, 15755, 16, 1520, 4610, 1128, 13221, 802, 3], [2, 1187, 13755, 1327, 2845, 1142, 18920, 802, 4245, 3168, 7806, 1542, 2569, 3796, 3], [2, 3419, 22353, 13782, 1145, 3802, 1125, 1913, 2493, 3], [2, 1161, 1125, 6802, 11823, 17, 1137, 17, 1125, 17, 1233, 3765, 802, 1305, 18029, 802, 1125, 21157, 1843, 14645, 1280, 1427, 3]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
Columns added by tokenizer: ['attention_mask', 'input_ids', 'token_type_ids']
ClassLabel(num_classes=2, names=['negative', 'positive'], names_file=None, id=None)
---------------------------------------------------------------------------
VisibleDeprecationWarning Traceback (most recent call last)
<ipython-input-56-ddb32272e3ba> in <module>()
47 shuffle=True,
48 batch_size=16,
---> 49 collate_fn=mytokenizer.pad,
50 )
51 tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in to_tf_dataset(self, columns, batch_size, shuffle, drop_remainder, collate_fn, collate_fn_args, label_cols, dummy_labels, prefetch)
349 return [tf.convert_to_tensor(arr) for arr in out_batch]
350
--> 351 test_batch = np_get_batch(np.arange(batch_size))
352
353 @tf.function(input_signature=[tf.TensorSpec(None, tf.int64)])
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in np_get_batch(indices)
323
324 def np_get_batch(indices):
--> 325 batch = dataset[indices]
326 out_batch = []
327 if collate_fn is not None:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1780 format_columns=self._format_columns,
1781 output_all_columns=self._output_all_columns,
-> 1782 format_kwargs=self._format_kwargs,
1783 )
1784
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1769 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1770 formatted_output = format_table(
-> 1771 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1772 )
1773 return formatted_output
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
420 else:
421 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns)
--> 422 formatted_output = formatter(pa_table_to_format, query_type=query_type)
423 if output_all_columns:
424 if isinstance(formatted_output, MutableMapping):
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
196 return self.format_column(pa_table)
197 elif query_type == "batch":
--> 198 return self.format_batch(pa_table)
199
200 def format_row(self, pa_table: pa.Table) -> RowFormat:
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in format_batch(self, pa_table)
241
242 def format_batch(self, pa_table: pa.Table) -> dict:
--> 243 return self.numpy_arrow_extractor(**self.np_array_kwargs).extract_batch(pa_table)
244
245
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in extract_batch(self, pa_table)
152
153 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 154 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
155
156 def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in <dictcomp>(.0)
152
153 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 154 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
155
156 def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in _arrow_array_to_numpy(self, pa_array)
165 # cast to list of arrays or we end up with a np.array with dtype object
166 array: List[np.ndarray] = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
--> 167 return np.array(array, copy=False, **self.np_array_kwargs)
168
169
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```
I'm sure @Rocketknight1 will know what's going on here :-)
Waiting for @Rocketknight1 then! Thanks
@Rocketknight1 @sgugger Interestingly, running the same notebook today (with the new pip install, that is) returns another error
Not sure what the issue is this time... Any ideas?
Thanks!
```
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
Sentence: hide new secretions from the parental units
{'input_ids': [[2, 11384, 1363, 3215, 1325, 1218, 1125, 10341, 1139, 3464, 3], [2, 4023, 1491, 15755, 16, 1520, 4610, 1128, 13221, 798, 3], [2, 1187, 13755, 1327, 2845, 1142, 18920, 798, 4245, 3168, 7806, 1542, 2569, 3796, 3], [2, 3419, 22351, 13782, 1145, 3802, 1125, 1913, 2493, 3], [2, 1161, 1125, 6802, 11823, 17, 1137, 17, 1125, 17, 1233, 3765, 798, 1305, 18030, 798, 1125, 21156, 1843, 14645, 1280, 1427, 3]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
100%
68/68 [00:04<00:00, 20.16ba/s]
100%
1/1 [00:00<00:00, 10.70ba/s]
100%
2/2 [00:00<00:00, 13.42ba/s]
Columns added by tokenizer: ['token_type_ids', 'input_ids', 'attention_mask']
ClassLabel(num_classes=2, names=['negative', 'positive'], names_file=None, id=None)
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(array, copy=False, **self.np_array_kwargs)
404 Client Error: Not Found for url: https://huggingface.co/%3Ctransformers.models.bert.modeling_tf_bert.TFBertForMaskedLM%20object%20at%200x7f1f29039850%3E/resolve/main/config.json
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
553 use_auth_token=use_auth_token,
--> 554 user_agent=user_agent,
555 )
6 frames
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1409 use_auth_token=use_auth_token,
-> 1410 local_files_only=local_files_only,
1411 )
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1573 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1574 r.raise_for_status()
1575 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
/usr/local/lib/python3.7/dist-packages/requests/models.py in raise_for_status(self)
940 if http_error_msg:
--> 941 raise HTTPError(http_error_msg, response=self)
942
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/%3Ctransformers.models.bert.modeling_tf_bert.TFBertForMaskedLM%20object%20at%200x7f1f29039850%3E/resolve/main/config.json
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-6-ddb32272e3ba> in <module>()
73
74 model = TFAutoModelForSequenceClassification.from_pretrained(
---> 75 model, num_labels=num_labels
76 )
77
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
395 if not isinstance(config, PretrainedConfig):
396 config, kwargs = AutoConfig.from_pretrained(
--> 397 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
398 )
399 if hasattr(config, "auto_map") and cls.__name__ in config.auto_map:
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
525 """
526 kwargs["_from_auto"] = True
--> 527 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
528 if "model_type" in config_dict:
529 config_class = CONFIG_MAPPING[config_dict["model_type"]]
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
568 msg += f"- or '{revision}' is a valid git identifier (branch name, a tag name, or a commit id) that exists for this model name as listed on its model page on 'https://huggingface.co/models'\n\n"
569
--> 570 raise EnvironmentError(msg)
571
572 except json.JSONDecodeError:
OSError: Can't load config for '<transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM object at 0x7f1f29039850>'. Make sure that:
- '<transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM object at 0x7f1f29039850>' is a correct model identifier listed on 'https://huggingface.co/models'
- or '<transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM object at 0x7f1f29039850>' is the correct path to a directory containing a config.json file
```
Hi @randomgambit, sorry for the lengthy delay in replying again! I'm still making changes to some of the lower-level parts of the library, so these notebooks haven't been fully finalized yet.
The `VisibleDeprecationWarning` in your first post is something that will hopefully be fixed by upcoming changes to `datasets`, but for now you can just ignore it.
The error you're getting in your final post is, I think, caused by you overwriting the variable `model` in your code. The `from_pretrained()` method expects a string like `bert-base-cased`, but it seems like you've created an actual TF model with that variable name. If you pass an actual model object to `from_pretrained()` it'll get very confused - so make sure that whatever argument you're passing there is a string and not something else!
Thanks @Rocketknight1, super useful as usual. So what you are saying is that I should have saved my tokenizer `mytokenizer` and my language model `model` using `save_pretrained()`, and then I need to load the model with a classification head using `TFAutoModelForSequenceClassification`, right?
```
model.save_pretrained('mymodel')
mytokenizer.save_pretrained('mytokenizer')
model = TFAutoModelForSequenceClassification.from_pretrained(
'mymodel', num_labels=num_labels
)
```
This seems to work. I will try to adapt the code so that both the tokenization and the language model are performed on the dataset actually used in the classification task (`dataset = load_dataset("glue", "sst2")`). Do you mind having a look when I'm done? This will be a super useful notebook for everyone.
Thanks!
@Rocketknight1 @sgugger I can confirm the new TF notebook works beautifully! Thanks! Just a follow-up though: I tried to fine-tune a `longformer` model and everything works smoothly until the `model.fit` call, where I get a cryptic error message.
This is the model I use:
```
task = "sst2"
model_checkpoint = "allenai/longformer-large-4096"
batch_size = 16
```
and then you can run the default notebook https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb until you reach the end
```
model.fit(
tf_train_dataset,
validation_data=tf_validation_dataset,
epochs=3)
Epoch 1/3
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-4075d9d9fb81> in <module>()
3 tf_train_dataset,
4 validation_data=tf_validation_dataset,
----> 5 epochs=3)
9 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1182 _r=1):
1183 callbacks.on_train_batch_begin(step)
-> 1184 tmp_logs = self.train_function(iterator)
1185 if data_handler.should_sync:
1186 context.async_wait()
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
883
884 with OptionalXlaContext(self._jit_compile):
--> 885 result = self._call(*args, **kwds)
886
887 new_tracing_count = self.experimental_get_tracing_count()
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
922 # In this case we have not created variables on the first call. So we can
923 # run the first trace but we should fail if variables are created.
--> 924 results = self._stateful_fn(*args, **kwds)
925 if self._created_variables and not ALLOW_DYNAMIC_VARIABLE_CREATION:
926 raise ValueError("Creating variables on a non-first call to a function"
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
3036 with self._lock:
3037 (graph_function,
-> 3038 filtered_flat_args) = self._maybe_define_function(args, kwargs)
3039 return graph_function._call_flat(
3040 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3458 call_context_key in self._function_cache.missed):
3459 return self._define_function_with_shape_relaxation(
-> 3460 args, kwargs, flat_args, filtered_flat_args, cache_key_context)
3461
3462 self._function_cache.missed.add(call_context_key)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _define_function_with_shape_relaxation(self, args, kwargs, flat_args, filtered_flat_args, cache_key_context)
3380
3381 graph_function = self._create_graph_function(
-> 3382 args, kwargs, override_flat_arg_shapes=relaxed_arg_shapes)
3383 self._function_cache.arg_relaxed[rank_only_cache_key] = graph_function
3384
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3306 arg_names=arg_names,
3307 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3308 capture_by_value=self._capture_by_value),
3309 self._function_attributes,
3310 function_spec=self.function_spec,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1005 _, original_func = tf_decorator.unwrap(python_func)
1006
-> 1007 func_outputs = python_func(*func_args, **func_kwargs)
1008
1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
666 # the function a weak reference to itself to avoid a reference cycle.
667 with OptionalXlaContext(compile_with_xla):
--> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
669 return out
670
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
992 except Exception as e: # pylint:disable=broad-except
993 if hasattr(e, "ag_error_metadata"):
--> 994 raise e.ag_error_metadata.to_exception(e)
995 else:
996 raise
TypeError: in user code:
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:853 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_tf_longformer.py:2408 call *
inputs["global_attention_mask"] = tf.tensor_scatter_nd_update(
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper **
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/array_ops.py:5755 tensor_scatter_nd_update
tensor=tensor, indices=indices, updates=updates, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_array_ops.py:11311 tensor_scatter_update
updates=updates, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:558 _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'updates' of 'TensorScatterUpdate' Op has type int32 that does not match type int64 of argument 'tensor'.
```
Maybe there is something specific to `longformer` that does not work well with the current notebook? What do you all think?
Thanks!
@Rocketknight1 I know you are busy (and I cannot thank you enough for the magnificent TF notebooks!) but I wanted to let you know that I also have tried with `allenai/longformer-base-4096` and I am getting the same `int64` error. Please let me know if I can do anything to help you out.
Thanks!
Hi @Rocketknight1 I hope all is well!
I now wonder if `longformer` can be trained at all with this notebook. Indeed, I read that
`This notebook is built to run on any of the tasks in the list above, with any model checkpoint from the Model Hub as long as that model has a version with a classification head.`
If so, could you please tell me which TF notebook I need to adapt to make it work?
Thanks!!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
Have you found any solution @randomgambit? Running into this myself.
i'll try passing in zeros cast to `int32` to the `global_attention_mask` param to `fit` and see if that helps. the `tf.zeros_like` used by `transformers` to generate the mask (when none are passed in by the user) must default to `int64`?
@randomgambit try the opposite of what I said above. You need to cast your `input_ids` to `tf.int32`. something like this should work:
```
input_ids = tf.convert_to_tensor([tf.convert_to_tensor(row, dtype=tf.int32)
for row in input_ids], dtype=tf.int32)
```
It would probably work via equivalent `numpy` methods, but I haven't tried that yet. `tf.zeros_like` inherits the dtype of its input, and transformers builds `global_attention_mask` with `tf.zeros_like(input_ids)` for you if you don't pass one in, so casting `input_ids` to `tf.int32` also makes the generated mask `tf.int32`.
You could probably also create the `global_attention_mask` yourself as dtype `tf.int64`. Point being, I think they all just need to be the same type.
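To illustrate that last suggestion, here is a minimal sketch (mine, not from the thread's notebook) of building `global_attention_mask` yourself in `tf.int64` so it matches the `int64` `input_ids` that `to_tf_dataset()` produces; the toy `input_ids` and the CLS-only global attention are placeholder assumptions:
```python
import tensorflow as tf

# Toy input_ids standing in for real tokenizer output; only the dtypes matter here.
input_ids = tf.constant([[0, 100, 200, 2, 1, 1],
                         [0, 300, 400, 500, 2, 1]], dtype=tf.int64)

# Global attention on the CLS token (position 0), built with the same dtype as input_ids.
batch = tf.shape(input_ids, out_type=tf.int64)[0]
global_attention_mask = tf.zeros_like(input_ids)                    # int64, like input_ids
updates = tf.ones((batch,), dtype=global_attention_mask.dtype)      # int64, matches the mask
indices = tf.stack([tf.range(batch),
                    tf.zeros((batch,), dtype=tf.int64)], axis=1)
global_attention_mask = tf.tensor_scatter_nd_update(global_attention_mask, indices, updates)

# Pass it explicitly, e.g.:
# model(input_ids=input_ids, attention_mask=..., global_attention_mask=global_attention_mask)
```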
we can probably close this @Rocketknight1
thanks @jmwoloso, I initially didn't see your message. I am hoping @Rocketknight1 can just confirm all is good before closing... Thanks!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
Ran into the same problem. I am totally lost.
Here is what I did
```python
import numpy as np
import tensorflow as tf
from datasets import Dataset
from transformers import (DefaultDataCollator, LongformerTokenizer,
                          TFLongformerForSequenceClassification)

my_dict = {'text': ["random text 1", "random text 2", "random text 3"],
           'label': [0, 0, 1]}
dataset = Dataset.from_dict(my_dict)

tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
# classification head with two labels (the labels above are 0/1)
model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=2)

def tokenize_function(examples):
    r = tokenizer(examples["text"], padding="max_length", truncation=True)
    r['input_ids'] = [tf.convert_to_tensor(row, dtype=tf.int32)
                      for row in r['input_ids']]
    r['attention_mask'] = [tf.convert_to_tensor(row, dtype=tf.int32)
                           for row in r['attention_mask']]
    return r

tokenized_datasets = dataset.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets.shuffle(seed=42)

data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = small_train_dataset.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=["labels"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8,
)
model.fit(tf_train_dataset, batch_size=1)
```
@randomgambit and @jmwoloso any ideas?
@ichenjia There were a few errors mentioned throughout this thread. Which one are you seeing?
Thank you. It’s the last error related to int32 and int64
@ichenjia Did you try my solution of casting your `input_ids` to `tf.int32`?
> @ichenjia Did you try my solution of casting your `input_ids` to `tf.int32`?
Thank you. Here is what I did per the earlier tip from this thread
```python
r['input_ids'] = [tf.convert_to_tensor(row, dtype=tf.int32)
                  for row in r['input_ids']]
r['attention_mask'] = [tf.convert_to_tensor(row, dtype=tf.int32)
                       for row in r['attention_mask']]
```
This is in the tokenizer function mapped to the dataset, but I still got that int32 error. Did I do something wrong?
@jmwoloso
After reading the source code of Dataset, I think the problem is in the to_tf_dataset function, which calls `_get_output_signature` (LN 290-303):
```
if np.issubdtype(np_arrays[0].dtype, np.integer) or np_arrays[0].dtype == bool:
    tf_dtype = tf.int64
    np_dtype = np.int64
elif np.issubdtype(np_arrays[0].dtype, np.number):
    tf_dtype = tf.float32
    np_dtype = np.float32
elif np_arrays[0].dtype.kind == "U":  # Unicode strings
    np_dtype = np.unicode_
    tf_dtype = tf.string
else:
    raise RuntimeError(
        f"Unrecognized array dtype {np_arrays[0].dtype}. \n"
        "Nested types and image/audio types are not supported yet."
    )
```
It forces a tf.int64 instead of tf.int32. It doesn't look like we have any control over it outside the API
There are always more layers, it seems @ichenjia :) I think we definitely have some control, or at least a way to hack it to prove the theory (thanks Python!). Could you try something like below as a temporary work around to see if it solves it?
I haven't looked at the source extensively, but maybe as a permanent fix we could add some dtype checking in `_get_output_signature` of the dataset in order to preserve what is passed in; I'd defer to the HF crew on what, if anything, could/should be done, assuming this hack works.
But until then, maybe this will help. We can try overriding that private method. (Also, to get the markdown formatting to show as a script, enclose your code with 3 backticks instead of 1).
*Edit was to fix formatting
```python
import types
from random import sample
from typing import Callable, List, Optional

import numpy as np
import tensorflow as tf

from datasets import config


def _get_output_signature(
    dataset: "Dataset",
    collate_fn: Callable,
    collate_fn_args: dict,
    cols_to_retain: Optional[List[str]] = None,
    batch_size: Optional[int] = None,
    num_test_batches: int = 10,
):
    """Private method used by `to_tf_dataset()` to find the shapes and dtypes of samples from this dataset
    after being passed through the collate_fn. Tensorflow needs an exact signature for tf.numpy_function, so
    the only way to do this is to run test batches - the collator may add or rename columns, so we can't figure
    it out just by inspecting the dataset.

    Args:
        dataset (:obj:`Dataset`): Dataset to load samples from.
        collate_fn (:obj:`Callable`): A function or callable object (such as a `DataCollator`) that will collate
            lists of samples into a batch.
        collate_fn_args (:obj:`Dict`): A `dict` of keyword arguments to be passed to the
            `collate_fn`.
        batch_size (:obj:`int`, optional): The size of batches loaded from the dataset. Used for shape inference.
            Can be None, which indicates that batch sizes can be variable.

    Returns:
        :obj:`dict`: Dict mapping column names to tf.TensorSpec objects
        :obj:`dict`: Dict mapping column names to np.dtype objects
    """
    if config.TF_AVAILABLE:
        import tensorflow as tf
    else:
        raise ImportError("Called a Tensorflow-specific function but Tensorflow is not installed.")

    if len(dataset) == 0:
        raise ValueError("Unable to get the output signature because the dataset is empty.")
    if batch_size is None:
        test_batch_size = min(len(dataset), 8)
    else:
        batch_size = min(len(dataset), batch_size)
        test_batch_size = batch_size

    test_batches = []
    for _ in range(num_test_batches):
        indices = sample(range(len(dataset)), test_batch_size)
        test_batch = dataset[indices]
        if cols_to_retain is not None:
            test_batch = {
                key: value
                for key, value in test_batch.items()
                if key in cols_to_retain or key in ("label_ids", "label")
            }
        test_batch = [{key: value[i] for key, value in test_batch.items()} for i in range(test_batch_size)]
        test_batch = collate_fn(test_batch, **collate_fn_args)
        test_batches.append(test_batch)

    tf_columns_to_signatures = {}
    np_columns_to_dtypes = {}
    for column in test_batches[0].keys():
        raw_arrays = [batch[column] for batch in test_batches]
        # In case the collate_fn returns something strange
        np_arrays = []
        for array in raw_arrays:
            if isinstance(array, np.ndarray):
                np_arrays.append(array)
            elif isinstance(array, tf.Tensor):
                np_arrays.append(array.numpy())
            else:
                np_arrays.append(np.array(array))

        if np.issubdtype(np_arrays[0].dtype, np.integer) or np_arrays[0].dtype == bool:
            tf_dtype = tf.int32  # formerly tf.int64
            np_dtype = np.int32  # formerly np.int64
        elif np.issubdtype(np_arrays[0].dtype, np.number):
            tf_dtype = tf.float32
            np_dtype = np.float32
        elif np_arrays[0].dtype.kind == "U":  # Unicode strings
            np_dtype = np.unicode_
            tf_dtype = tf.string
        else:
            raise RuntimeError(
                f"Unrecognized array dtype {np_arrays[0].dtype}. \n"
                "Nested types and image/audio types are not supported yet."
            )
        shapes = [array.shape for array in np_arrays]
        static_shape = []
        for dim in range(len(shapes[0])):
            sizes = set([shape[dim] for shape in shapes])
            if dim == 0:
                static_shape.append(batch_size)
                continue
            if len(sizes) == 1:  # This dimension looks constant
                static_shape.append(sizes.pop())
            else:  # Use None for variable dimensions
                static_shape.append(None)
        tf_columns_to_signatures[column] = tf.TensorSpec(shape=static_shape, dtype=tf_dtype)
        np_columns_to_dtypes[column] = np_dtype
    return tf_columns_to_signatures, np_columns_to_dtypes


from datasets import Dataset
from transformers import DefaultDataCollator, LongformerTokenizer, TFLongformerForSequenceClassification

my_dict = {'text': ["random text 1", "random text 2", "random text 3"],
           'label': [0, 0, 1]}
dataset = Dataset.from_dict(my_dict)

tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
# classification head with two labels (the labels above are 0/1)
model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=2)


def tokenize_function(examples):
    r = tokenizer(examples["text"], padding="max_length", truncation=True)
    r['input_ids'] = [tf.convert_to_tensor(row, dtype=tf.int32)
                      for row in r['input_ids']]
    r['attention_mask'] = [tf.convert_to_tensor(row, dtype=tf.int32)
                           for row in r['attention_mask']]
    return r


tokenized_datasets = dataset.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets.shuffle(seed=42)

data_collator = DefaultDataCollator(return_tensors="tf")

# override the Dataset's private method before building the tf.data pipeline
small_train_dataset._get_output_signature = types.MethodType(_get_output_signature, small_train_dataset)

tf_train_dataset = small_train_dataset.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=["labels"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8,
)

model.fit(tf_train_dataset, batch_size=1)
```
Hi @jmwoloso @ichenjia, sorry for only seeing this now! Just to clarify, are you encountering difficulties passing `tf.int64` values to `TFLongFormer`? You're correct that the `to_tf_dataset` and `prepare_tf_dataset` methods cast all int outputs to `tf.int64`, but this is because our policy is that our models should always accept `tf.int64` for any integer tensor inputs. If you're encountering issues with that, it's more likely a bug in LongFormer than in `to_tf_dataset`!
Hi @Rocketknight1 thanks for the reply. That all makes sense. This thread has kind of morphed, but I believe you solved the original issue which dealt with trying to pass ragged tensors to the model.
The next issue that came up from that was that the `TensorScatterUpdate` op in TF was being fed mismatched dtypes: `tf.int32` updates against a `tf.int64` tensor (according to the traceback). That originates in the `modeling_tf_longformer.py` module when the `global_attention_mask` is created.
I can take a look and see if there is anything to be done in that longformer file, but this seems like a lower-level TF op issue to me. But you are the TF scape-GOAT around here, so I'll defer to your guidance/wisdom :)
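For reference, a small self-contained sketch (mine, not the transformers source) of the dtype rule the traceback trips over: `tf.tensor_scatter_nd_update` requires `updates` to share the dtype of the tensor being updated, which is what the patch enforces by creating the updates as `tf.int64`:
```python
import tensorflow as tf

# input_ids come out of to_tf_dataset() as int64, so the zeros_like mask is int64 too.
input_ids = tf.zeros((2, 8), dtype=tf.int64)
mask = tf.zeros_like(input_ids)                  # int64
updates = tf.ones((2,), dtype=tf.int32)          # int32, as in the pre-fix model code
indices = tf.constant([[0, 0], [1, 0]])

try:
    tf.tensor_scatter_nd_update(mask, indices, updates)  # int32 updates vs int64 tensor
except Exception as exc:
    print(exc)  # dtype mismatch error, the eager analogue of the TypeError above

# Casting both sides to one dtype (int64 here, as the patch does) resolves it.
print(tf.tensor_scatter_nd_update(mask, indices, tf.cast(updates, tf.int64)))
```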
|
2022-09-06T18:33:51Z
|
<patch>
diff --git a/src/transformers/models/led/modeling_tf_led.py b/src/transformers/models/led/modeling_tf_led.py
--- a/src/transformers/models/led/modeling_tf_led.py
+++ b/src/transformers/models/led/modeling_tf_led.py
@@ -472,7 +472,7 @@ def _sliding_chunks_query_key_matmul(self, query, key, window_overlap):
)
first_chunk_mask = (
tf.tile(
- tf.range(chunks_count + 1)[None, :, None, None],
+ tf.range(chunks_count + 1, dtype=tf.int64)[None, :, None, None],
(batch_size * num_heads, 1, window_overlap, window_overlap),
)
< 1
@@ -1335,10 +1335,10 @@ class TFLEDPreTrainedModel(TFPreTrainedModel):
@property
def dummy_inputs(self):
- input_ids = tf.convert_to_tensor([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0]])
+ input_ids = tf.convert_to_tensor([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0]], dtype=tf.int64)
# make sure global layers are initialized
- attention_mask = tf.convert_to_tensor([[1, 1, 0, 0, 1], [1, 1, 1, 0, 0]])
- global_attention_mask = tf.convert_to_tensor([[0, 0, 0, 0, 1], [0, 0, 1, 0, 0]])
+ attention_mask = tf.convert_to_tensor([[1, 1, 0, 0, 1], [1, 1, 1, 0, 0]], dtype=tf.int64)
+ global_attention_mask = tf.convert_to_tensor([[0, 0, 0, 0, 1], [0, 0, 1, 0, 0]], dtype=tf.int64)
dummy_inputs = {
"input_ids": input_ids,
"attention_mask": attention_mask,
@@ -1350,10 +1350,10 @@ def dummy_inputs(self):
@tf.function(
input_signature=[
{
- "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
- "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
- "decoder_input_ids": tf.TensorSpec((None, None), tf.int32, name="decoder_input_ids"),
- "decoder_attention_mask": tf.TensorSpec((None, None), tf.int32, name="decoder_attention_mask"),
+ "input_ids": tf.TensorSpec((None, None), tf.int64, name="input_ids"),
+ "attention_mask": tf.TensorSpec((None, None), tf.int64, name="attention_mask"),
+ "decoder_input_ids": tf.TensorSpec((None, None), tf.int64, name="decoder_input_ids"),
+ "decoder_attention_mask": tf.TensorSpec((None, None), tf.int64, name="decoder_attention_mask"),
}
]
)
diff --git a/src/transformers/models/longformer/modeling_tf_longformer.py b/src/transformers/models/longformer/modeling_tf_longformer.py
--- a/src/transformers/models/longformer/modeling_tf_longformer.py
+++ b/src/transformers/models/longformer/modeling_tf_longformer.py
@@ -395,11 +395,10 @@ def _compute_global_attention_mask(input_ids_shape, sep_token_indices, before_se
Computes global attention mask by putting attention on all tokens before `sep_token_id` if `before_sep_token is
True` else after `sep_token_id`.
"""
-
assert shape_list(sep_token_indices)[1] == 2, "`input_ids` should have two dimensions"
question_end_index = tf.reshape(sep_token_indices, (input_ids_shape[0], 3, 2))[:, 0, 1][:, None]
# bool attention mask with True in locations of global attention
- attention_mask = tf.expand_dims(tf.range(input_ids_shape[1]), axis=0)
+ attention_mask = tf.expand_dims(tf.range(input_ids_shape[1], dtype=tf.int64), axis=0)
attention_mask = tf.tile(attention_mask, (input_ids_shape[0], 1))
if before_sep_token is True:
question_end_index = tf.tile(question_end_index, (1, input_ids_shape[1]))
@@ -468,10 +467,9 @@ def call(self, hidden_states):
return hidden_states
-# Copied from transformers.models.roberta.modeling_tf_roberta.TFRobertaEmbeddings with Roberta->Longformer
class TFLongformerEmbeddings(tf.keras.layers.Layer):
"""
- Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
+ Same as BertEmbeddings with a tiny tweak for positional embeddings indexing and some extra casting.
"""
def __init__(self, config, **kwargs):
@@ -547,7 +545,7 @@ def call(
input_shape = shape_list(inputs_embeds)[:-1]
if token_type_ids is None:
- token_type_ids = tf.fill(dims=input_shape, value=0)
+ token_type_ids = tf.cast(tf.fill(dims=input_shape, value=0), tf.int64)
if position_ids is None:
if input_ids is not None:
@@ -557,7 +555,8 @@ def call(
)
else:
position_ids = tf.expand_dims(
- tf.range(start=self.padding_idx + 1, limit=input_shape[-1] + self.padding_idx + 1), axis=0
+ tf.range(start=self.padding_idx + 1, limit=input_shape[-1] + self.padding_idx + 1, dtype=tf.int64),
+ axis=0,
)
position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids)
@@ -998,7 +997,7 @@ def _sliding_chunks_query_key_matmul(self, query, key, window_overlap):
)
first_chunk_mask = (
tf.tile(
- tf.range(chunks_count + 1)[None, :, None, None],
+ tf.range(chunks_count + 1, dtype=tf.int64)[None, :, None, None],
(batch_size * num_heads, 1, window_overlap, window_overlap),
)
< 1
@@ -1701,6 +1700,21 @@ def call(
training=False,
):
+ if input_ids is not None and not isinstance(input_ids, tf.Tensor):
+ input_ids = tf.convert_to_tensor(input_ids, dtype=tf.int64)
+ elif input_ids is not None:
+ input_ids = tf.cast(input_ids, tf.int64)
+
+ if attention_mask is not None and not isinstance(attention_mask, tf.Tensor):
+ attention_mask = tf.convert_to_tensor(attention_mask, dtype=tf.int64)
+ elif attention_mask is not None:
+ attention_mask = tf.cast(attention_mask, tf.int64)
+
+ if global_attention_mask is not None and not isinstance(global_attention_mask, tf.Tensor):
+ global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64)
+ elif global_attention_mask is not None:
+ global_attention_mask = tf.cast(global_attention_mask, tf.int64)
+
if input_ids is not None and inputs_embeds is not None:
raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
elif input_ids is not None:
@@ -1711,10 +1725,10 @@ def call(
raise ValueError("You have to specify either input_ids or inputs_embeds")
if attention_mask is None:
- attention_mask = tf.fill(input_shape, 1)
+ attention_mask = tf.cast(tf.fill(input_shape, 1), tf.int64)
if token_type_ids is None:
- token_type_ids = tf.fill(input_shape, 0)
+ token_type_ids = tf.cast(tf.fill(input_shape, 0), tf.int64)
# merge `global_attention_mask` and `attention_mask`
if global_attention_mask is not None:
@@ -1831,7 +1845,7 @@ def _pad_to_window_size(
if inputs_embeds is not None:
def pad_embeddings():
- input_ids_padding = tf.fill((batch_size, padding_len), self.pad_token_id)
+ input_ids_padding = tf.cast(tf.fill((batch_size, padding_len), self.pad_token_id), tf.int64)
inputs_embeds_padding = self.embeddings(input_ids_padding)
return tf.concat([inputs_embeds, inputs_embeds_padding], axis=-2)
@@ -1875,10 +1889,15 @@ class TFLongformerPreTrainedModel(TFPreTrainedModel):
@property
def dummy_inputs(self):
- input_ids = tf.convert_to_tensor([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
+ input_ids = tf.convert_to_tensor([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]], dtype=tf.int64)
# make sure global layers are initialized
- attention_mask = tf.convert_to_tensor([[1, 1, 0, 0, 1], [1, 1, 1, 0, 0], [1, 0, 0, 1, 1]])
- global_attention_mask = tf.convert_to_tensor([[0, 0, 0, 0, 1], [0, 0, 1, 0, 0], [0, 0, 0, 0, 1]])
+ attention_mask = tf.convert_to_tensor([[1, 1, 0, 0, 1], [1, 1, 1, 0, 0], [1, 0, 0, 1, 1]], dtype=tf.int64)
+ global_attention_mask = tf.convert_to_tensor(
+ [[0, 0, 0, 0, 1], [0, 0, 1, 0, 0], [0, 0, 0, 0, 1]], dtype=tf.int64
+ )
+ global_attention_mask = tf.convert_to_tensor(
+ [[0, 0, 0, 0, 1], [0, 0, 1, 0, 0], [0, 0, 0, 0, 1]], dtype=tf.int64
+ )
return {
"input_ids": input_ids,
"attention_mask": attention_mask,
@@ -1888,8 +1907,8 @@ def dummy_inputs(self):
@tf.function(
input_signature=[
{
- "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
- "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
+ "input_ids": tf.TensorSpec((None, None), tf.int64, name="input_ids"),
+ "attention_mask": tf.TensorSpec((None, None), tf.int64, name="attention_mask"),
}
]
)
@@ -2230,6 +2249,21 @@ def call(
are not taken into account for computing the loss.
"""
+ if input_ids is not None and not isinstance(input_ids, tf.Tensor):
+ input_ids = tf.convert_to_tensor(input_ids, dtype=tf.int64)
+ elif input_ids is not None:
+ input_ids = tf.cast(input_ids, tf.int64)
+
+ if attention_mask is not None and not isinstance(attention_mask, tf.Tensor):
+ attention_mask = tf.convert_to_tensor(attention_mask, dtype=tf.int64)
+ elif attention_mask is not None:
+ attention_mask = tf.cast(attention_mask, tf.int64)
+
+ if global_attention_mask is not None and not isinstance(global_attention_mask, tf.Tensor):
+ global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64)
+ elif global_attention_mask is not None:
+ global_attention_mask = tf.cast(global_attention_mask, tf.int64)
+
# set global attention on question tokens
if global_attention_mask is None and input_ids is not None:
if shape_list(tf.where(input_ids == self.config.sep_token_id))[0] != 3 * shape_list(input_ids)[0]:
@@ -2239,12 +2273,12 @@ def call(
" forward function to avoid this. This is most likely an error. The global attention is disabled"
" for this forward pass."
)
- global_attention_mask = tf.fill(shape_list(input_ids), value=0)
+ global_attention_mask = tf.cast(tf.fill(shape_list(input_ids), value=0), tf.int64)
else:
logger.info("Initializing global attention on question tokens...")
# put global attention on all tokens until `config.sep_token_id` is reached
sep_token_indices = tf.where(input_ids == self.config.sep_token_id)
- sep_token_indices = tf.cast(sep_token_indices, dtype=input_ids.dtype)
+ sep_token_indices = tf.cast(sep_token_indices, dtype=tf.int64)
global_attention_mask = _compute_global_attention_mask(shape_list(input_ids), sep_token_indices)
outputs = self.longformer(
@@ -2370,13 +2404,28 @@ def call(
training: Optional[bool] = False,
) -> Union[TFLongformerSequenceClassifierOutput, Tuple[tf.Tensor]]:
+ if input_ids is not None and not isinstance(input_ids, tf.Tensor):
+ input_ids = tf.convert_to_tensor(input_ids, dtype=tf.int64)
+ elif input_ids is not None:
+ input_ids = tf.cast(input_ids, tf.int64)
+
+ if attention_mask is not None and not isinstance(attention_mask, tf.Tensor):
+ attention_mask = tf.convert_to_tensor(attention_mask, dtype=tf.int64)
+ elif attention_mask is not None:
+ attention_mask = tf.cast(attention_mask, tf.int64)
+
+ if global_attention_mask is not None and not isinstance(global_attention_mask, tf.Tensor):
+ global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64)
+ elif global_attention_mask is not None:
+ global_attention_mask = tf.cast(global_attention_mask, tf.int64)
+
if global_attention_mask is None and input_ids is not None:
logger.info("Initializing global attention on CLS token...")
# global attention on cls token
global_attention_mask = tf.zeros_like(input_ids)
- updates = tf.ones(shape_list(input_ids)[0], dtype=tf.int32)
+ updates = tf.ones(shape_list(input_ids)[0], dtype=tf.int64)
indices = tf.pad(
- tensor=tf.expand_dims(tf.range(shape_list(input_ids)[0]), axis=1),
+ tensor=tf.expand_dims(tf.range(shape_list(input_ids)[0], dtype=tf.int64), axis=1),
paddings=[[0, 0], [0, 1]],
constant_values=0,
)
@@ -2448,9 +2497,9 @@ def __init__(self, config, *inputs, **kwargs):
@property
def dummy_inputs(self):
- input_ids = tf.convert_to_tensor(MULTIPLE_CHOICE_DUMMY_INPUTS)
+ input_ids = tf.convert_to_tensor(MULTIPLE_CHOICE_DUMMY_INPUTS, dtype=tf.int64)
# make sure global layers are initialized
- global_attention_mask = tf.convert_to_tensor([[[0, 0, 0, 1], [0, 0, 0, 1]]] * 2)
+ global_attention_mask = tf.convert_to_tensor([[[0, 0, 0, 1], [0, 0, 0, 1]]] * 2, dtype=tf.int64)
return {"input_ids": input_ids, "global_attention_mask": global_attention_mask}
@unpack_inputs
@@ -2542,8 +2591,8 @@ def call(
@tf.function(
input_signature=[
{
- "input_ids": tf.TensorSpec((None, None, None), tf.int32, name="input_ids"),
- "attention_mask": tf.TensorSpec((None, None, None), tf.int32, name="attention_mask"),
+ "input_ids": tf.TensorSpec((None, None, None), tf.int64, name="input_ids"),
+ "attention_mask": tf.TensorSpec((None, None, None), tf.int64, name="attention_mask"),
}
]
)
</patch>
|
[]
|
[]
| |||
mesonbuild__meson-9484
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Throws exception instead of parsing error
meson git c6d74ac7e0890c323bd1190d5f5d3d938fc6d59a
When building this tree, meson throws an exception instead of complaining about the parsing error and where it occurred.
[grilo-wip-hadess-grlnet-disable-fix.zip](https://github.com/mesonbuild/meson/files/7278069/grilo-wip-hadess-grlnet-disable-fix.zip)
```sh
$ ~/Projects/jhbuild/meson/meson.py --prefix /home/hadess/Projects/gnome-install --libdir lib --buildtype=debugoptimized /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix
The Meson build system
Version: 0.59.99
Source dir: /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix
Build dir: /tmp/bug-repro
Build type: native build
Project name: grilo
Project version: 0.3.14
C compiler for the host machine: ccache cc (gcc 11.2.1 "cc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)")
C linker for the host machine: cc ld.bfd 2.37-10
Host machine cpu family: x86_64
Host machine cpu: x86_64
Found pkg-config: /usr/bin/pkg-config (1.8.0)
Run-time dependency gio-2.0 found: YES 2.70.0
Run-time dependency glib-2.0 found: YES 2.70.0
Run-time dependency gmodule-2.0 found: YES 2.70.0
Run-time dependency gobject-2.0 found: YES 2.70.0
Run-time dependency libxml-2.0 found: YES 2.9.12
Run-time dependency libsoup-2.4 found: YES 2.74.0
Run-time dependency totem-plparser found: YES 3.26.6
Program g-ir-scanner found: YES (/usr/bin/g-ir-scanner)
Program vapigen found: YES (/usr/bin/vapigen)
Run-time dependency gtk+-3.0 found: YES 3.24.30
Run-time dependency oauth found: YES 1.0.3
Run-time dependency gobject-introspection-1.0 found: YES 1.70.0
Run-time dependency vapigen found: YES 0.54.1
Found pkg-config: /usr/bin/pkg-config (1.8.0)
Program glib-genmarshal found: YES (/usr/bin/glib-genmarshal)
Traceback (most recent call last):
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/mesonmain.py", line 228, in run
return options.run_func(options)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 290, in run
app.generate()
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 181, in generate
self._generate(env)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 225, in _generate
intr.run()
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 2456, in run
super().run()
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 165, in run
self.evaluate_codeblock(self.ast, start=1)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock
raise e
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock
self.evaluate_statement(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 196, in evaluate_statement
return self.function_call(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 82, in wrapper
res = f(self, node)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 629, in function_call
return func(node, func_args, kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 697, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 275, in wrapper
return f(*nargs, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 1941, in func_subdir
self.evaluate_codeblock(codeblock)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock
raise e
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock
self.evaluate_statement(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 198, in evaluate_statement
self.assignment(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 848, in assignment
value = self.evaluate_statement(node.value)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 200, in evaluate_statement
return self.method_call(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 666, in method_call
return self._holderify(obj.method_call(method_name, args, kwargs))
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreterobjects.py", line 751, in method_call
ret = method(state, args, kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/modules/gnome.py", line 1669, in genmarshal
header = build.CustomTarget(output + '_h', state.subdir, state.subproject, custom_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2317, in __init__
self.process_kwargs(kwargs, backend)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2426, in process_kwargs
if isinstance(kwargs['install_dir'], list):
KeyError: 'install_dir'
```
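For illustration only, a minimal sketch (not meson code, and the dictionary contents are made up) of what the last two frames boil down to: indexing a kwarg that was never supplied raises a bare `KeyError` that propagates as an internal traceback, whereas defensive access would let the interpreter report a located error against the offending `meson.build`:

```python
# Hypothetical stand-in for the kwargs passed to CustomTarget in the traceback above.
custom_kwargs = {"output": "marshal.h", "command": ["glib-genmarshal"]}

try:
    if isinstance(custom_kwargs["install_dir"], list):  # KeyError: 'install_dir'
        pass
except KeyError as exc:
    print(f"unhandled, this becomes the traceback above: missing kwarg {exc}")

# Defensive access keeps control in the caller, which can then raise a proper
# build error pointing at the offending meson.build line.
install_dir = custom_kwargs.get("install_dir")
if isinstance(install_dir, list):
    pass
```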
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="https://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1)
11 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
12 [](https://lgtm.com/projects/g/mesonbuild/meson/context:python)
13 [](https://lgtm.com/projects/g/mesonbuild/meson/alerts)
14
15 #### Dependencies
16
17 - [Python](https://python.org) (version 3.6 or newer)
18 - [Ninja](https://ninja-build.org) (version 1.8.2 or newer)
19
20 #### Installing from source
21
22 Meson is available on [PyPi](https://pypi.python.org/pypi/meson), so
23 it can be installed with `pip3 install meson`. The exact command to
24 type to install with `pip` can vary between systems, be sure to use
25 the Python 3 version of `pip`.
26
27 If you wish you can install it locally with the standard Python command:
28
29 ```console
30 python3 -m pip install meson
31 ```
32
33 For builds using Ninja, Ninja can be downloaded directly from Ninja
34 [GitHub release page](https://github.com/ninja-build/ninja/releases)
35 or via [PyPi](https://pypi.python.org/pypi/ninja)
36
37 ```console
38 python3 -m pip install ninja
39 ```
40
41 More on Installing Meson build can be found at the
42 [getting meson page](https://mesonbuild.com/Getting-meson.html).
43
44 #### Creating a standalone script
45
46 Meson can be run as a [Python zip
47 app](https://docs.python.org/3/library/zipapp.html). To generate the
48 executable run the following command:
49
50 ./packaging/create_zipapp.py --outfile meson.pyz --interpreter '/usr/bin/env python3' <source checkout>
51
52 #### Running
53
54 Meson requires that you have a source directory and a build directory
55 and that these two are different. In your source root must exist a
56 file called `meson.build`. To generate the build system run this
57 command:
58
59 `meson setup <source directory> <build directory>`
60
61 Depending on how you obtained Meson the command might also be called
62 `meson.py` instead of plain `meson`. In the rest of this document we
63 are going to use the latter form.
64
65 You can omit either of the two directories, and Meson will substitute
66 the current directory and autodetect what you mean. This allows you to
67 do things like this:
68
69 ```console
70 cd <source root>
71 meson setup builddir
72 ```
73
74 To compile, cd into your build directory and type `ninja`. To run unit
75 tests, type `ninja test`.
76
77 More on running Meson build system commands can be found at the
78 [running meson page](https://mesonbuild.com/Running-Meson.html)
79 or by typing `meson --help`.
80
81 #### Contributing
82
83 We love code contributions. See the [contribution
84 page](https://mesonbuild.com/Contributing.html) on the website for
85 details.
86
87
88 #### IRC
89
90 The channel to use is `#mesonbuild` either via Matrix ([web
91 interface][matrix_web]) or [OFTC IRC][oftc_irc].
92
93 [matrix_web]: https://app.element.io/#/room/#mesonbuild:matrix.org
94 [oftc_irc]: https://www.oftc.net/
95
96 #### Further info
97
98 More information about the Meson build system can be found at the
99 [project's home page](https://mesonbuild.com).
100
101 Meson is a registered trademark of ***Jussi Pakkanen***.
102
[end of README.md]
[start of mesonbuild/interpreterbase/interpreterbase.py]
1 # Copyright 2016-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # This class contains the basic functionality needed to run any interpreter
16 # or an interpreter-based tool.
17
18 from .. import mparser, mesonlib
19 from .. import environment
20
21 from .baseobjects import (
22 InterpreterObject,
23 MesonInterpreterObject,
24 MutableInterpreterObject,
25 InterpreterObjectTypeVar,
26 ObjectHolder,
27 IterableObject,
28
29 TYPE_var,
30 TYPE_kwargs,
31
32 HoldableTypes,
33 )
34
35 from .exceptions import (
36 InterpreterException,
37 InvalidCode,
38 InvalidArguments,
39 SubdirDoneRequest,
40 ContinueRequest,
41 BreakRequest
42 )
43
44 from .decorators import FeatureNew
45 from .disabler import Disabler, is_disabled
46 from .helpers import default_resolve_key, flatten, resolve_second_level_holders
47 from .operator import MesonOperator
48 from ._unholder import _unholder
49
50 import os, copy, re, pathlib
51 import typing as T
52 import textwrap
53
54 if T.TYPE_CHECKING:
55 from ..interpreter import Interpreter
56
57 HolderMapType = T.Dict[
58 T.Union[
59 T.Type[mesonlib.HoldableObject],
60 T.Type[int],
61 T.Type[bool],
62 T.Type[str],
63 T.Type[list],
64 T.Type[dict],
65 ],
66 # For some reason, this has to be a callable and can't just be ObjectHolder[InterpreterObjectTypeVar]
67 T.Callable[[InterpreterObjectTypeVar, 'Interpreter'], ObjectHolder[InterpreterObjectTypeVar]]
68 ]
69
70 FunctionType = T.Dict[
71 str,
72 T.Callable[[mparser.BaseNode, T.List[TYPE_var], T.Dict[str, TYPE_var]], TYPE_var]
73 ]
74
75 class InterpreterBase:
76 def __init__(self, source_root: str, subdir: str, subproject: str):
77 self.source_root = source_root
78 self.funcs: FunctionType = {}
79 self.builtin: T.Dict[str, InterpreterObject] = {}
80 # Holder maps store a mapping from an HoldableObject to a class ObjectHolder
81 self.holder_map: HolderMapType = {}
82 self.bound_holder_map: HolderMapType = {}
83 self.subdir = subdir
84 self.root_subdir = subdir
85 self.subproject = subproject
86 self.variables: T.Dict[str, InterpreterObject] = {}
87 self.argument_depth = 0
88 self.current_lineno = -1
89 # Current node set during a function call. This can be used as location
90 # when printing a warning message during a method call.
91 self.current_node = None # type: mparser.BaseNode
92 # This is set to `version_string` when this statement is evaluated:
93 # meson.version().compare_version(version_string)
94 # If it was part of a if-clause, it is used to temporally override the
95 # current meson version target within that if-block.
96 self.tmp_meson_version = None # type: T.Optional[str]
97
98 def load_root_meson_file(self) -> None:
99 mesonfile = os.path.join(self.source_root, self.subdir, environment.build_filename)
100 if not os.path.isfile(mesonfile):
101 raise InvalidArguments('Missing Meson file in %s' % mesonfile)
102 with open(mesonfile, encoding='utf-8') as mf:
103 code = mf.read()
104 if code.isspace():
105 raise InvalidCode('Builder file is empty.')
106 assert isinstance(code, str)
107 try:
108 self.ast = mparser.Parser(code, mesonfile).parse()
109 except mesonlib.MesonException as me:
110 me.file = mesonfile
111 raise me
112
113 def parse_project(self) -> None:
114 """
115 Parses project() and initializes languages, compilers etc. Do this
116 early because we need this before we parse the rest of the AST.
117 """
118 self.evaluate_codeblock(self.ast, end=1)
119
120 def sanity_check_ast(self) -> None:
121 if not isinstance(self.ast, mparser.CodeBlockNode):
122 raise InvalidCode('AST is of invalid type. Possibly a bug in the parser.')
123 if not self.ast.lines:
124 raise InvalidCode('No statements in code.')
125 first = self.ast.lines[0]
126 if not isinstance(first, mparser.FunctionNode) or first.func_name != 'project':
127 p = pathlib.Path(self.source_root).resolve()
128 found = p
129 for parent in p.parents:
130 if (parent / 'meson.build').is_file():
131 with open(parent / 'meson.build', encoding='utf-8') as f:
132 if f.readline().startswith('project('):
133 found = parent
134 break
135 else:
136 break
137
138 error = 'first statement must be a call to project()'
139 if found != p:
140 raise InvalidCode(f'Not the project root: {error}\n\nDid you mean to run meson from the directory: "{found}"?')
141 else:
142 raise InvalidCode(f'Invalid source tree: {error}')
143
144 def run(self) -> None:
145 # Evaluate everything after the first line, which is project() because
146 # we already parsed that in self.parse_project()
147 try:
148 self.evaluate_codeblock(self.ast, start=1)
149 except SubdirDoneRequest:
150 pass
151
152 def evaluate_codeblock(self, node: mparser.CodeBlockNode, start: int = 0, end: T.Optional[int] = None) -> None:
153 if node is None:
154 return
155 if not isinstance(node, mparser.CodeBlockNode):
156 e = InvalidCode('Tried to execute a non-codeblock. Possibly a bug in the parser.')
157 e.lineno = node.lineno
158 e.colno = node.colno
159 raise e
160 statements = node.lines[start:end]
161 i = 0
162 while i < len(statements):
163 cur = statements[i]
164 try:
165 self.current_lineno = cur.lineno
166 self.evaluate_statement(cur)
167 except Exception as e:
168 if getattr(e, 'lineno', None) is None:
169 # We are doing the equivalent to setattr here and mypy does not like it
170 e.lineno = cur.lineno # type: ignore
171 e.colno = cur.colno # type: ignore
172 e.file = os.path.join(self.source_root, self.subdir, environment.build_filename) # type: ignore
173 raise e
174 i += 1 # In THE FUTURE jump over blocks and stuff.
175
176 def evaluate_statement(self, cur: mparser.BaseNode) -> T.Optional[InterpreterObject]:
177 self.current_node = cur
178 if isinstance(cur, mparser.FunctionNode):
179 return self.function_call(cur)
180 elif isinstance(cur, mparser.AssignmentNode):
181 self.assignment(cur)
182 elif isinstance(cur, mparser.MethodNode):
183 return self.method_call(cur)
184 elif isinstance(cur, mparser.StringNode):
185 return self._holderify(cur.value)
186 elif isinstance(cur, mparser.BooleanNode):
187 return self._holderify(cur.value)
188 elif isinstance(cur, mparser.IfClauseNode):
189 return self.evaluate_if(cur)
190 elif isinstance(cur, mparser.IdNode):
191 return self.get_variable(cur.value)
192 elif isinstance(cur, mparser.ComparisonNode):
193 return self.evaluate_comparison(cur)
194 elif isinstance(cur, mparser.ArrayNode):
195 return self.evaluate_arraystatement(cur)
196 elif isinstance(cur, mparser.DictNode):
197 return self.evaluate_dictstatement(cur)
198 elif isinstance(cur, mparser.NumberNode):
199 return self._holderify(cur.value)
200 elif isinstance(cur, mparser.AndNode):
201 return self.evaluate_andstatement(cur)
202 elif isinstance(cur, mparser.OrNode):
203 return self.evaluate_orstatement(cur)
204 elif isinstance(cur, mparser.NotNode):
205 return self.evaluate_notstatement(cur)
206 elif isinstance(cur, mparser.UMinusNode):
207 return self.evaluate_uminusstatement(cur)
208 elif isinstance(cur, mparser.ArithmeticNode):
209 return self.evaluate_arithmeticstatement(cur)
210 elif isinstance(cur, mparser.ForeachClauseNode):
211 self.evaluate_foreach(cur)
212 elif isinstance(cur, mparser.PlusAssignmentNode):
213 self.evaluate_plusassign(cur)
214 elif isinstance(cur, mparser.IndexNode):
215 return self.evaluate_indexing(cur)
216 elif isinstance(cur, mparser.TernaryNode):
217 return self.evaluate_ternary(cur)
218 elif isinstance(cur, mparser.FormatStringNode):
219 return self.evaluate_fstring(cur)
220 elif isinstance(cur, mparser.ContinueNode):
221 raise ContinueRequest()
222 elif isinstance(cur, mparser.BreakNode):
223 raise BreakRequest()
224 else:
225 raise InvalidCode("Unknown statement.")
226 return None
227
228 def evaluate_arraystatement(self, cur: mparser.ArrayNode) -> InterpreterObject:
229 (arguments, kwargs) = self.reduce_arguments(cur.args)
230 if len(kwargs) > 0:
231 raise InvalidCode('Keyword arguments are invalid in array construction.')
232 return self._holderify([_unholder(x) for x in arguments])
233
234 @FeatureNew('dict', '0.47.0')
235 def evaluate_dictstatement(self, cur: mparser.DictNode) -> InterpreterObject:
236 def resolve_key(key: mparser.BaseNode) -> str:
237 if not isinstance(key, mparser.StringNode):
238 FeatureNew.single_use('Dictionary entry using non literal key', '0.53.0', self.subproject)
239 str_key = _unholder(self.evaluate_statement(key))
240 if not isinstance(str_key, str):
241 raise InvalidArguments('Key must be a string')
242 return str_key
243 arguments, kwargs = self.reduce_arguments(cur.args, key_resolver=resolve_key, duplicate_key_error='Duplicate dictionary key: {}')
244 assert not arguments
245 return self._holderify({k: _unholder(v) for k, v in kwargs.items()})
246
247 def evaluate_notstatement(self, cur: mparser.NotNode) -> InterpreterObject:
248 v = self.evaluate_statement(cur.value)
249 if isinstance(v, Disabler):
250 return v
251 return self._holderify(v.operator_call(MesonOperator.NOT, None))
252
253 def evaluate_if(self, node: mparser.IfClauseNode) -> T.Optional[Disabler]:
254 assert isinstance(node, mparser.IfClauseNode)
255 for i in node.ifs:
256 # Reset self.tmp_meson_version to know if it gets set during this
257 # statement evaluation.
258 self.tmp_meson_version = None
259 result = self.evaluate_statement(i.condition)
260 if isinstance(result, Disabler):
261 return result
262 if not isinstance(result, InterpreterObject):
263 raise mesonlib.MesonBugException(f'Argument to not ({result}) is not an InterpreterObject but {type(result).__name__}.')
264 res = result.operator_call(MesonOperator.BOOL, None)
265 if not isinstance(res, bool):
266 raise InvalidCode(f'If clause {result!r} does not evaluate to true or false.')
267 if res:
268 prev_meson_version = mesonlib.project_meson_versions[self.subproject]
269 if self.tmp_meson_version:
270 mesonlib.project_meson_versions[self.subproject] = self.tmp_meson_version
271 try:
272 self.evaluate_codeblock(i.block)
273 finally:
274 mesonlib.project_meson_versions[self.subproject] = prev_meson_version
275 return None
276 if not isinstance(node.elseblock, mparser.EmptyNode):
277 self.evaluate_codeblock(node.elseblock)
278 return None
279
280 def evaluate_comparison(self, node: mparser.ComparisonNode) -> InterpreterObject:
281 val1 = self.evaluate_statement(node.left)
282 if isinstance(val1, Disabler):
283 return val1
284 val2 = self.evaluate_statement(node.right)
285 if isinstance(val2, Disabler):
286 return val2
287
288 # New code based on InterpreterObjects
289 operator = {
290 'in': MesonOperator.IN,
291 'notin': MesonOperator.NOT_IN,
292 '==': MesonOperator.EQUALS,
293 '!=': MesonOperator.NOT_EQUALS,
294 '>': MesonOperator.GREATER,
295 '<': MesonOperator.LESS,
296 '>=': MesonOperator.GREATER_EQUALS,
297 '<=': MesonOperator.LESS_EQUALS,
298 }[node.ctype]
299
300 # Check if the arguments should be reversed for simplicity (this essentially converts `in` to `contains`)
301 if operator in (MesonOperator.IN, MesonOperator.NOT_IN):
302 val1, val2 = val2, val1
303
304 val1.current_node = node
305 return self._holderify(val1.operator_call(operator, _unholder(val2)))
306
307 def evaluate_andstatement(self, cur: mparser.AndNode) -> InterpreterObject:
308 l = self.evaluate_statement(cur.left)
309 if isinstance(l, Disabler):
310 return l
311 l_bool = l.operator_call(MesonOperator.BOOL, None)
312 if not l_bool:
313 return self._holderify(l_bool)
314 r = self.evaluate_statement(cur.right)
315 if isinstance(r, Disabler):
316 return r
317 return self._holderify(r.operator_call(MesonOperator.BOOL, None))
318
319 def evaluate_orstatement(self, cur: mparser.OrNode) -> InterpreterObject:
320 l = self.evaluate_statement(cur.left)
321 if isinstance(l, Disabler):
322 return l
323 l_bool = l.operator_call(MesonOperator.BOOL, None)
324 if l_bool:
325 return self._holderify(l_bool)
326 r = self.evaluate_statement(cur.right)
327 if isinstance(r, Disabler):
328 return r
329 return self._holderify(r.operator_call(MesonOperator.BOOL, None))
330
331 def evaluate_uminusstatement(self, cur: mparser.UMinusNode) -> InterpreterObject:
332 v = self.evaluate_statement(cur.value)
333 if isinstance(v, Disabler):
334 return v
335 v.current_node = cur
336 return self._holderify(v.operator_call(MesonOperator.UMINUS, None))
337
338 def evaluate_arithmeticstatement(self, cur: mparser.ArithmeticNode) -> InterpreterObject:
339 l = self.evaluate_statement(cur.left)
340 if isinstance(l, Disabler):
341 return l
342 r = self.evaluate_statement(cur.right)
343 if isinstance(r, Disabler):
344 return r
345
346 mapping: T.Dict[str, MesonOperator] = {
347 'add': MesonOperator.PLUS,
348 'sub': MesonOperator.MINUS,
349 'mul': MesonOperator.TIMES,
350 'div': MesonOperator.DIV,
351 'mod': MesonOperator.MOD,
352 }
353 l.current_node = cur
354 res = l.operator_call(mapping[cur.operation], _unholder(r))
355 return self._holderify(res)
356
357 def evaluate_ternary(self, node: mparser.TernaryNode) -> T.Optional[InterpreterObject]:
358 assert isinstance(node, mparser.TernaryNode)
359 result = self.evaluate_statement(node.condition)
360 if isinstance(result, Disabler):
361 return result
362 result.current_node = node
363 result_bool = result.operator_call(MesonOperator.BOOL, None)
364 if result_bool:
365 return self.evaluate_statement(node.trueblock)
366 else:
367 return self.evaluate_statement(node.falseblock)
368
369 @FeatureNew('format strings', '0.58.0')
370 def evaluate_fstring(self, node: mparser.FormatStringNode) -> InterpreterObject:
371 assert isinstance(node, mparser.FormatStringNode)
372
373 def replace(match: T.Match[str]) -> str:
374 var = str(match.group(1))
375 try:
376 val = _unholder(self.variables[var])
377 if not isinstance(val, (str, int, float, bool)):
378 raise InvalidCode(f'Identifier "{var}" does not name a formattable variable ' +
379 '(has to be an integer, a string, a floating point number or a boolean).')
380
381 return str(val)
382 except KeyError:
383 raise InvalidCode(f'Identifier "{var}" does not name a variable.')
384
385 res = re.sub(r'@([_a-zA-Z][_0-9a-zA-Z]*)@', replace, node.value)
386 return self._holderify(res)
387
388 def evaluate_foreach(self, node: mparser.ForeachClauseNode) -> None:
389 assert isinstance(node, mparser.ForeachClauseNode)
390 items = self.evaluate_statement(node.items)
391 if not isinstance(items, IterableObject):
392 raise InvalidArguments('Items of foreach loop do not support iterating')
393
394 tsize = items.iter_tuple_size()
395 if len(node.varnames) != (tsize or 1):
396 raise InvalidArguments(f'Foreach expects exactly {tsize or 1} variables for iterating over objects of type {items.display_name()}')
397
398 for i in items.iter_self():
399 if tsize is None:
400 if isinstance(i, tuple):
401 raise mesonlib.MesonBugException(f'Iteration of {items} returned a tuple even though iter_tuple_size() is None')
402 self.set_variable(node.varnames[0], self._holderify(i))
403 else:
404 if not isinstance(i, tuple):
405 raise mesonlib.MesonBugException(f'Iteration of {items} did not return a tuple even though iter_tuple_size() is {tsize}')
406 if len(i) != tsize:
407 raise mesonlib.MesonBugException(f'Iteration of {items} did not return a tuple even though iter_tuple_size() is {tsize}')
408 for j in range(tsize):
409 self.set_variable(node.varnames[j], self._holderify(i[j]))
410 try:
411 self.evaluate_codeblock(node.block)
412 except ContinueRequest:
413 continue
414 except BreakRequest:
415 break
416
417 def evaluate_plusassign(self, node: mparser.PlusAssignmentNode) -> None:
418 assert isinstance(node, mparser.PlusAssignmentNode)
419 varname = node.var_name
420 addition = self.evaluate_statement(node.value)
421
422 # Remember that all variables are immutable. We must always create a
423 # full new variable and then assign it.
424 old_variable = self.get_variable(varname)
425 old_variable.current_node = node
426 new_value = self._holderify(old_variable.operator_call(MesonOperator.PLUS, _unholder(addition)))
427 self.set_variable(varname, new_value)
428
429 def evaluate_indexing(self, node: mparser.IndexNode) -> InterpreterObject:
430 assert isinstance(node, mparser.IndexNode)
431 iobject = self.evaluate_statement(node.iobject)
432 if isinstance(iobject, Disabler):
433 return iobject
434 index = _unholder(self.evaluate_statement(node.index))
435
436 if iobject is None:
437 raise InterpreterException('Tried to evaluate indexing on None')
438 iobject.current_node = node
439 return self._holderify(iobject.operator_call(MesonOperator.INDEX, index))
440
441 def function_call(self, node: mparser.FunctionNode) -> T.Optional[InterpreterObject]:
442 func_name = node.func_name
443 (h_posargs, h_kwargs) = self.reduce_arguments(node.args)
444 (posargs, kwargs) = self._unholder_args(h_posargs, h_kwargs)
445 if is_disabled(posargs, kwargs) and func_name not in {'get_variable', 'set_variable', 'unset_variable', 'is_disabler'}:
446 return Disabler()
447 if func_name in self.funcs:
448 func = self.funcs[func_name]
449 func_args = posargs
450 if not getattr(func, 'no-args-flattening', False):
451 func_args = flatten(posargs)
452 if not getattr(func, 'no-second-level-holder-flattening', False):
453 func_args, kwargs = resolve_second_level_holders(func_args, kwargs)
454 res = func(node, func_args, kwargs)
455 return self._holderify(res) if res is not None else None
456 else:
457 self.unknown_function_called(func_name)
458 return None
459
460 def method_call(self, node: mparser.MethodNode) -> T.Optional[InterpreterObject]:
461 invokable = node.source_object
462 obj: T.Optional[InterpreterObject]
463 if isinstance(invokable, mparser.IdNode):
464 object_name = invokable.value
465 obj = self.get_variable(object_name)
466 else:
467 obj = self.evaluate_statement(invokable)
468 method_name = node.name
469 (h_args, h_kwargs) = self.reduce_arguments(node.args)
470 (args, kwargs) = self._unholder_args(h_args, h_kwargs)
471 if is_disabled(args, kwargs):
472 return Disabler()
473 if not isinstance(obj, InterpreterObject):
474 raise InvalidArguments('Variable "%s" is not callable.' % object_name)
475 # TODO: InterpreterBase **really** shouldn't be in charge of checking this
476 if method_name == 'extract_objects':
477 if isinstance(obj, ObjectHolder):
478 self.validate_extraction(obj.held_object)
479 elif not isinstance(obj, Disabler):
480 raise InvalidArguments(f'Invalid operation "extract_objects" on variable "{object_name}" of type {type(obj).__name__}')
481 obj.current_node = node
482 res = obj.method_call(method_name, args, kwargs)
483 return self._holderify(res) if res is not None else None
484
485 def _holderify(self, res: T.Union[TYPE_var, InterpreterObject]) -> InterpreterObject:
486 if isinstance(res, HoldableTypes):
487 # Always check for an exact match first.
488 cls = self.holder_map.get(type(res), None)
489 if cls is not None:
490 # Casts to Interpreter are required here since an assertion would
491 # not work for the `ast` module.
492 return cls(res, T.cast('Interpreter', self))
493 # Try the boundary types next.
494 for typ, cls in self.bound_holder_map.items():
495 if isinstance(res, typ):
496 return cls(res, T.cast('Interpreter', self))
497 raise mesonlib.MesonBugException(f'Object {res} of type {type(res).__name__} is neither in self.holder_map nor self.bound_holder_map.')
498 elif isinstance(res, ObjectHolder):
499 raise mesonlib.MesonBugException(f'Returned object {res} of type {type(res).__name__} is an object holder.')
500 elif isinstance(res, MesonInterpreterObject):
501 return res
502 raise mesonlib.MesonBugException(f'Unknown returned object {res} of type {type(res).__name__} in the parameters.')
503
504 def _unholder_args(self,
505 args: T.List[InterpreterObject],
506 kwargs: T.Dict[str, InterpreterObject]) -> T.Tuple[T.List[TYPE_var], TYPE_kwargs]:
507 return [_unholder(x) for x in args], {k: _unholder(v) for k, v in kwargs.items()}
508
509 def unknown_function_called(self, func_name: str) -> None:
510 raise InvalidCode('Unknown function "%s".' % func_name)
511
512 def reduce_arguments(
513 self,
514 args: mparser.ArgumentNode,
515 key_resolver: T.Callable[[mparser.BaseNode], str] = default_resolve_key,
516 duplicate_key_error: T.Optional[str] = None,
517 ) -> T.Tuple[
518 T.List[InterpreterObject],
519 T.Dict[str, InterpreterObject]
520 ]:
521 assert isinstance(args, mparser.ArgumentNode)
522 if args.incorrect_order():
523 raise InvalidArguments('All keyword arguments must be after positional arguments.')
524 self.argument_depth += 1
525 reduced_pos = [self.evaluate_statement(arg) for arg in args.arguments]
526 if any(x is None for x in reduced_pos):
527 raise InvalidArguments(f'At least one value in the arguments is void.')
528 reduced_kw: T.Dict[str, InterpreterObject] = {}
529 for key, val in args.kwargs.items():
530 reduced_key = key_resolver(key)
531 assert isinstance(val, mparser.BaseNode)
532 reduced_val = self.evaluate_statement(val)
533 if reduced_val is None:
534 raise InvalidArguments(f'Value of key {reduced_key} is void.')
535 if duplicate_key_error and reduced_key in reduced_kw:
536 raise InvalidArguments(duplicate_key_error.format(reduced_key))
537 reduced_kw[reduced_key] = reduced_val
538 self.argument_depth -= 1
539 final_kw = self.expand_default_kwargs(reduced_kw)
540 return reduced_pos, final_kw
541
542 def expand_default_kwargs(self, kwargs: T.Dict[str, T.Optional[InterpreterObject]]) -> T.Dict[str, T.Optional[InterpreterObject]]:
543 if 'kwargs' not in kwargs:
544 return kwargs
545 to_expand = _unholder(kwargs.pop('kwargs'))
546 if not isinstance(to_expand, dict):
547 raise InterpreterException('Value of "kwargs" must be dictionary.')
548 if 'kwargs' in to_expand:
549 raise InterpreterException('Kwargs argument must not contain a "kwargs" entry. Points for thinking meta, though. :P')
550 for k, v in to_expand.items():
551 if k in kwargs:
552 raise InterpreterException(f'Entry "{k}" defined both as a keyword argument and in a "kwarg" entry.')
553 kwargs[k] = self._holderify(v)
554 return kwargs
555
556 def assignment(self, node: mparser.AssignmentNode) -> None:
557 assert isinstance(node, mparser.AssignmentNode)
558 if self.argument_depth != 0:
559 raise InvalidArguments(textwrap.dedent('''\
560 Tried to assign values inside an argument list.
561 To specify a keyword argument, use : instead of =.
562 '''))
563 var_name = node.var_name
564 if not isinstance(var_name, str):
565 raise InvalidArguments('Tried to assign value to a non-variable.')
566 value = self.evaluate_statement(node.value)
567 # For mutable objects we need to make a copy on assignment
568 if isinstance(value, MutableInterpreterObject):
569 value = copy.deepcopy(value)
570 self.set_variable(var_name, value)
571 return None
572
573 def set_variable(self, varname: str, variable: T.Union[TYPE_var, InterpreterObject], *, holderify: bool = False) -> None:
574 if variable is None:
575 raise InvalidCode('Can not assign None to variable.')
576 if holderify:
577 variable = self._holderify(variable)
578 else:
579 # Ensure that we are always storing ObjectHolders
580 if not isinstance(variable, InterpreterObject):
581 raise mesonlib.MesonBugException(f'set_variable in InterpreterBase called with a non InterpreterObject {variable} of type {type(variable).__name__}')
582 if not isinstance(varname, str):
583 raise InvalidCode('First argument to set_variable must be a string.')
584 if re.match('[_a-zA-Z][_0-9a-zA-Z]*$', varname) is None:
585 raise InvalidCode('Invalid variable name: ' + varname)
586 if varname in self.builtin:
587 raise InvalidCode('Tried to overwrite internal variable "%s"' % varname)
588 self.variables[varname] = variable
589
590 def get_variable(self, varname: str) -> InterpreterObject:
591 if varname in self.builtin:
592 return self.builtin[varname]
593 if varname in self.variables:
594 return self.variables[varname]
595 raise InvalidCode('Unknown variable "%s".' % varname)
596
597 def validate_extraction(self, buildtarget: mesonlib.HoldableObject) -> None:
598 raise InterpreterException('validate_extraction is not implemented in this context (please file a bug)')
599
[end of mesonbuild/interpreterbase/interpreterbase.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
ae35b1f45ac5850547f2db52b7b50a54789fcca1
|
Throws exception instead of parsing error
meson git c6d74ac7e0890c323bd1190d5f5d3d938fc6d59a
When building this tree, meson throws an exception instead of complaining about the parsing error and where it occurred.
[grilo-wip-hadess-grlnet-disable-fix.zip](https://github.com/mesonbuild/meson/files/7278069/grilo-wip-hadess-grlnet-disable-fix.zip)
```sh
$ ~/Projects/jhbuild/meson/meson.py --prefix /home/hadess/Projects/gnome-install --libdir lib --buildtype=debugoptimized /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix
The Meson build system
Version: 0.59.99
Source dir: /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix
Build dir: /tmp/bug-repro
Build type: native build
Project name: grilo
Project version: 0.3.14
C compiler for the host machine: ccache cc (gcc 11.2.1 "cc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)")
C linker for the host machine: cc ld.bfd 2.37-10
Host machine cpu family: x86_64
Host machine cpu: x86_64
Found pkg-config: /usr/bin/pkg-config (1.8.0)
Run-time dependency gio-2.0 found: YES 2.70.0
Run-time dependency glib-2.0 found: YES 2.70.0
Run-time dependency gmodule-2.0 found: YES 2.70.0
Run-time dependency gobject-2.0 found: YES 2.70.0
Run-time dependency libxml-2.0 found: YES 2.9.12
Run-time dependency libsoup-2.4 found: YES 2.74.0
Run-time dependency totem-plparser found: YES 3.26.6
Program g-ir-scanner found: YES (/usr/bin/g-ir-scanner)
Program vapigen found: YES (/usr/bin/vapigen)
Run-time dependency gtk+-3.0 found: YES 3.24.30
Run-time dependency oauth found: YES 1.0.3
Run-time dependency gobject-introspection-1.0 found: YES 1.70.0
Run-time dependency vapigen found: YES 0.54.1
Found pkg-config: /usr/bin/pkg-config (1.8.0)
Program glib-genmarshal found: YES (/usr/bin/glib-genmarshal)
Traceback (most recent call last):
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/mesonmain.py", line 228, in run
return options.run_func(options)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 290, in run
app.generate()
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 181, in generate
self._generate(env)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 225, in _generate
intr.run()
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 2456, in run
super().run()
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 165, in run
self.evaluate_codeblock(self.ast, start=1)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock
raise e
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock
self.evaluate_statement(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 196, in evaluate_statement
return self.function_call(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 82, in wrapper
res = f(self, node)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 629, in function_call
return func(node, func_args, kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 697, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 275, in wrapper
return f(*nargs, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 1941, in func_subdir
self.evaluate_codeblock(codeblock)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock
raise e
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock
self.evaluate_statement(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 198, in evaluate_statement
self.assignment(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 848, in assignment
value = self.evaluate_statement(node.value)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 200, in evaluate_statement
return self.method_call(cur)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 666, in method_call
return self._holderify(obj.method_call(method_name, args, kwargs))
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreterobjects.py", line 751, in method_call
ret = method(state, args, kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/modules/gnome.py", line 1669, in genmarshal
header = build.CustomTarget(output + '_h', state.subdir, state.subproject, custom_kwargs)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2317, in __init__
self.process_kwargs(kwargs, backend)
File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2426, in process_kwargs
if isinstance(kwargs['install_dir'], list):
KeyError: 'install_dir'
```
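The last few frames show the immediate failure mode: gnome.py's genmarshal() builds a plain kwargs dict and hands it straight to build.CustomTarget, whose process_kwargs() indexes keys such as 'install_dir' without checking that they are present. A minimal, self-contained illustration of that access pattern follows; process_kwargs_like() and the sample dict are stand-ins written for this report, not the real CustomTarget code:

```python
# Stand-in for CustomTarget.process_kwargs(): it indexes keywords it assumes
# the interpreter's typed_kwargs() layer has already defaulted.
def process_kwargs_like(kwargs):
    if isinstance(kwargs['install_dir'], list):  # raises KeyError when the key is missing
        return kwargs['install_dir']
    return [kwargs['install_dir']]

# A module building the dict by hand (as gnome.genmarshal() does) can easily
# omit keys that the builtin custom_target() function would have filled in.
custom_kwargs = {'output': 'marshal.h', 'command': ['glib-genmarshal', '@INPUT@']}
try:
    process_kwargs_like(custom_kwargs)
except KeyError as exc:
    print(f'KeyError: {exc}')  # prints: KeyError: 'install_dir', matching the traceback
```

That is why the build ends in an uncaught Python traceback instead of a normal Meson error message pointing at the offending meson.build line.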
|
The fix is here below, but I would have expected an error about the type mismatches before the ninja file generation.
```patch
diff --git a/bindings/vala/meson.build b/bindings/vala/meson.build
index f5723b3..493634c 100644
--- a/bindings/vala/meson.build
+++ b/bindings/vala/meson.build
@@ -10,7 +10,7 @@ vala_sources = [ # LIBRARY, GIR, DEPS
]
if enable_grlnet
- vala_sources += ['grilo-net-@0@'.format(grl_majorminor), grlnet_gir[0], ['gio-2.0']]
+ vala_sources += [['grilo-net-@0@'.format(grl_majorminor), grlnet_gir[0], ['gio-2.0']]]
endif
foreach s: vala_sources
```
Sounds like a regression since custom_target() got ported to typed_kwargs(). Modules are creating CustomTargets directly without going through those decorators.
@dcbaker I think the fix is to add a wrapper on ModuleState() object to create custom targets. Modules should stop using internal APIs like that.
See ModuleState.test(), we did the same thing there.
@xclaesse The plan is to make CustomTarget itself useful; we shouldn't have to add wrappers around the initializers of internal classes. They should stop doing the interpreter's job, stop taking a `kwargs` dict, and use keyword arguments instead. I'll get it fixed.
True, that's why state.test() takes Python arguments instead of a kwargs dict, even if internally it converts back to a kwargs dict for now. I think we should still wrap that with a ModuleState method for CustomTarget too. One reason is that, in the long term, I want to get rid of ModuleReturnValue: ModuleState should be responsible for adding targets into the build list instead of process_new_values().
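To make that suggestion concrete, here is a rough sketch of the kind of module-facing helper being discussed. The helper name, its signature and the default values are hypothetical illustrations, not existing meson API:

```python
# Hypothetical sketch only: neither the helper name nor the defaults are real meson API.
DEFAULT_CUSTOM_TARGET_KWARGS = {
    'install': False,
    'install_dir': [],        # the key whose absence triggers the KeyError above
    'build_by_default': False,
}

def create_custom_target(state, name, kwargs, custom_target_cls):
    """Fill in every keyword the target's process_kwargs() step expects, so a
    module that bypasses typed_kwargs() cannot crash on a missing entry."""
    full_kwargs = {**DEFAULT_CUSTOM_TARGET_KWARGS, **kwargs}
    return custom_target_cls(name, state.subdir, state.subproject, full_kwargs)
```

The longer-term direction mentioned above, having CustomTarget accept real keyword arguments instead of a kwargs dict, would make even a shim like this unnecessary.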
Not sure if that's the same issue, but I just saw this trying to build gtkmm 3.24.4 with meson 0.60:
```
0.0 seconds
starting phase `build'
make all-recursive
make[1]: Entering directory '/tmp/guix-build-gtkspell3-3.0.10.drv-0/gtkspell3-3.0.10'
Making all in gtkspell
make[2]: Entering directory '/tmp/guix-build-gtkspell3-3.0.10.drv-0/gtkspell3-3.0.10/gtkspell'
CC libgtkspell3_3_la-gtkspell.lo
CC libgtkspell3_3_la-gtkspell-codetable.lo
GEN gtkspell3-3.0.deps
autoreconf: running: aclocal --force --warnings=no-portability -I m4
aclocal: warning: autoconf input should be named 'configure.ac', not 'configure.in'
gtkspell.c: In function ‘gtk_spell_checker_class_init’:
gtkspell.c:860:3: warning: ‘g_type_class_add_private’ is deprecated [-Wdeprecated-declarations]
860 | g_type_class_add_private (klass, sizeof (GtkSpellCheckerPrivate));
| ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from /gnu/store/l4zh3hgnykfrm0mlqzhf5bg2g5minkqs-glib-2.70.0/include/glib-2.0/gobject/gobject.h:24,
from /gnu/store/l4zh3hgnykfrm0mlqzhf5bg2g5minkqs-glib-2.70.0/include/glib-2.0/gobject/gbinding.h:29,
from /gnu/store/l4zh3hgnykfrm0mlqzhf5bg2g5minkqs-glib-2.70.0/include/glib-2.0/glib-object.h:22,
from gtkspell.h:25,
from gtkspell.c:23:
/gnu/store/l4zh3hgnykfrm0mlqzhf5bg2g5minkqs-glib-2.70.0/include/glib-2.0/gobject/gtype.h:1346:10: note: declared here
1346 | void g_type_class_add_private (gpointer g_class,
| ^~~~~~~~~~~~~~~~~~~~~~~~
gtkspell.c: In function ‘gtk_spell_checker_init’:
gtkspell.c:904:13: warning: Deprecated pre-processor symbol: replace with "G_ADD_PRIVATE"
904 | self->priv = GTK_SPELL_CHECKER_GET_PRIVATE (self);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/mesonmain.py", line 138, in run
return options.run_func(options)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/msetup.py", line 294, in run
app.generate()
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/msetup.py", line 185, in generate
self._generate(env)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/msetup.py", line 229, in _generate
intr.run()
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreter/interpreter.py", line 2484, in run
super().run()
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 150, in run
self.evaluate_codeblock(self.ast, start=1)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 175, in evaluate_codeblock
raise e
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 168, in evaluate_codeblock
self.evaluate_statement(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 181, in evaluate_statement
return self.function_call(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 456, in function_call
res = func(node, func_args, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 713, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 115, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 276, in wrapper
return f(*nargs, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreter/interpreter.py", line 1969, in func_subdir
self.evaluate_codeblock(codeblock)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 175, in evaluate_codeblock
raise e
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 168, in evaluate_codeblock
self.evaluate_statement(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 191, in evaluate_statement
return self.evaluate_if(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 274, in evaluate_if
self.evaluate_codeblock(i.block)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 175, in evaluate_codeblock
raise e
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 168, in evaluate_codeblock
self.evaluate_statement(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_statement
self.assignment(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 568, in assignment
value = self.evaluate_statement(node.value)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 185, in evaluate_statement
return self.method_call(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 484, in method_call
res = obj.method_call(method_name, args, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreter/interpreterobjects.py", line 754, in method_call
ret = method(state, args, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 713, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 115, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/modules/gnome.py", line 292, in compile_resources
target_c = GResourceTarget(name, state.subdir, state.subproject, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/modules/__init__.py", line 202, in __init__
super().__init__(name, subdir, subproject, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/build.py", line 2330, in __init__
self.process_kwargs(kwargs, backend)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/build.py", line 2441, in process_kwargs
if isinstance(kwargs['install_dir'], list):
KeyError: 'install_dir'
The Meson build system
Version: 0.60.0
Source dir: /tmp/guix-build-gtkmm-3.24.4.drv-0/gtkmm-3.24.4
Build dir: /tmp/guix-build-gtkmm-3.24.4.drv-0/build
Build type: native build
Project name: gtkmm
Project version: 3.24.4
C compiler for the host machine: gcc (gcc 10.3.0 "gcc (GCC) 10.3.0")
C linker for the host machine: gcc ld.bfd 2.37
C++ compiler for the host machine: c++ (gcc 10.3.0 "c++ (GCC) 10.3.0")
C++ linker for the host machine: c++ ld.bfd 2.37
Host machine cpu family: x86_64
Host machine cpu: x86_64
Program python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Found pkg-config: /gnu/store/wc7lqi6kgki5qzr01g3gvriwdadjiwpc-pkg-config-0.29.2/bin/pkg-config (0.29.2)
Run-time dependency gtk+-3.0 found: YES 3.24.30
Run-time dependency cairomm-1.0 found: YES 1.14.2
Run-time dependency pangomm-1.4 found: YES 2.46.0
Run-time dependency gdk-pixbuf-2.0 found: YES 2.42.4
Run-time dependency atkmm-1.6 found: YES 2.28.1
Run-time dependency epoxy found: YES 1.5.5
Run-time dependency glibmm-2.4 found: YES 2.64.5
Run-time dependency giomm-2.4 found: YES 2.64.5
Run-time dependency gtk+-unix-print-3.0 found: YES 3.24.30
Program mm-common-get found: YES (/gnu/store/jgjk0xss7qw8fj0zz4hn8ai9dnq7vlbi-mm-common-1.0.3/bin/mm-common-get)
Program m4 found: YES (/gnu/store/4xvaqcjyyrw6y1fwm3drl87xrz1vsln9-m4-1.4.18/bin/m4)
Program perl found: YES (/gnu/store/k18as85v9s0z66w40wg9sym4c5qz03l2-perl-5.34.0/bin/perl)
Program doxygen found: YES (/gnu/store/79lhh65z70qzwb0x4flgy1n4z05ypr4f-doxygen-1.9.1/bin/doxygen)
Program dot found: YES (/gnu/store/a4b3bc2rm9lqyd00vwz16p2ycfva62fk-graphviz-2.49.0/bin/dot)
Program xsltproc found: YES (/gnu/store/546xz3wbs1zriz48vz2kxyfjyyqbmgjh-libxslt-1.1.34/bin/xsltproc)
Compiler for C++ supports arguments -Wall: YES
../gtkmm-3.24.4/meson.build:307: WARNING: Consider using the built-in warning_level option instead of using "-Wall".
Library glibmm_generate_extra_defs-2.4 found: YES
Configuring gdkmm-3.0.pc using configuration
Configuring gdkmm-3.0-uninstalled.pc using configuration
Configuring gdkmmconfig.h using configuration
Configuring gdkmm.rc using configuration
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Configuring gtkmm-3.0.pc using configuration
Configuring gtkmm-3.0-uninstalled.pc using configuration
Configuring gtkmmconfig.h using configuration
Configuring gtkmm.rc using configuration
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Found pkg-config: /gnu/store/wc7lqi6kgki5qzr01g3gvriwdadjiwpc-pkg-config-0.29.2/bin/pkg-config (0.29.2)
Program glib-compile-resources found: YES (/gnu/store/w7a78mbizznxgykmqiia3l0bdyjpn675-glib-2.70.0-bin/bin/glib-compile-resources)
error: in phase 'configure': uncaught exception:
%exception #<&invoke-error program: "meson" arguments: ("--prefix=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4" "--buildtype=debugoptimized" "-Dc_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dcpp_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dbuild-documentation=true" "/tmp/guix-build-gtkmm-3.24.4.drv-0/gtkmm-3.24.4") exit-status: 2 term-signal: #f stop-signal: #f>
phase `configure' failed after 1.2 seconds
command "meson" "--prefix=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4" "--buildtype=debugoptimized" "-Dc_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dcpp_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dbuild-documentation=true" "/tmp/guix-build-gtkmm-3.24.4.drv-0/gtkmm-3.24.4" failed with status 2
builder for `/gnu/store/rp4pmx13wi8ahw781myiryxqb5r7mv33-gtkmm-3.24.4.drv' failed with exit code 1
build of /gnu/store/rp4pmx13wi8ahw781myiryxqb5r7mv33-gtkmm-3.24.4.drv failed
View build log at '/var/log/guix/drvs/rp/4pmx13wi8ahw781myiryxqb5r7mv33-gtkmm-3.24.4.drv'.
cannot build derivation `/gnu/store/xcwz5cznv97p6030wksj0x8rqc50k7y9-inkscape-1.1.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/hqx4mbf6xd9giafjjvz57af41sj3v9kz-dblatex-0.3.12.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/ls9qyw3alxm7vvj0brscxcdhkjj0qrqx-dblatex-0.3.12.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/gg6x452zqilb4ckgvli5xdzl6naxqvv2-gtk-doc-1.33.2.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/q2gb7xi10s1px246s65wv04waphk9yyl-gtk-doc-1.33.2.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/xyl76zvjzyx2aas862nblvrr2za5hdxp-ibus-1.5.24.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/s7sws9a019jpa1gycsshdqkmx0f5iviv-libical-3.0.10.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/bplbpjlyhj3421wsaj510xn9n2l4yaz9-libnotify-0.7.9.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/631m221qjkqr5g449xa0vcg8v7j0a1iz-network-manager-1.24.0.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/y4ph9885j8ksyd7mg42l0lyi4q4zwlah-orc-0.4.32.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/zxkbxvmxsaf2yv0dd4x4hysy3kcj65jg-bluez-5.61.drv': 1 dependencies couldn't be built
cannot build derivation `/gnu/store/kh7bnm5yqa44n1zqri8r4xifvyvsqcxq-jami-qt-20210606.1.e2f9490.drv': 1 dependencies couldn't be built
guix build: error: build of `/gnu/store/kh7bnm5yqa44n1zqri8r4xifvyvsqcxq-jami-qt-20210606.1.e2f9490.drv' failed
mcournoyer@raisin ~/src/guix-core-updates-next [env]$ ./pre-inst-env guix build jami-qt ungoogled-chromium icecat cling^C
mcournoyer@raisin ~/src/guix-core-updates-next [env]$ less /var/log/guix/drvs/rp/4pmx13wi8ahw781myiryxqb5r7mv33-gtkmm-3.24.4.drv
WARNING: (guile-user): imported module (guix build utils) overrides core binding `delete'
starting phase `set-SOURCE-DATE-EPOCH'
phase `set-SOURCE-DATE-EPOCH' succeeded after 0.0 seconds
starting phase `set-paths'
[set-paths phase: environment variables `PATH', `XDG_DATA_DIRS', `PERL5LIB', `PKG_CONFIG_PATH', `GUIX_GTK3_PATH', `C_INCLUDE_PATH' and `CPLUS_INCLUDE_PATH' set to multi-kilobyte lists of /gnu/store paths, `GIO_EXTRA_MODULES' and `BASH_LOADABLES_PATH' left unset; full listings elided, and the pager output then skips ("...skipping...") into the tail of the `unpack' phase file listing]
gtkmm-3.24.4/untracked/gtk/gtkmm/treemodelsort.h
gtkmm-3.24.4/untracked/gtk/gtkmm/treepath.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/treepath.h
gtkmm-3.24.4/untracked/gtk/gtkmm/treerowreference.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/treerowreference.h
gtkmm-3.24.4/untracked/gtk/gtkmm/treeselection.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/treeselection.h
gtkmm-3.24.4/untracked/gtk/gtkmm/treesortable.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/treesortable.h
gtkmm-3.24.4/untracked/gtk/gtkmm/treestore.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/treestore.h
gtkmm-3.24.4/untracked/gtk/gtkmm/treeview.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/treeview.h
gtkmm-3.24.4/untracked/gtk/gtkmm/treeviewcolumn.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/treeviewcolumn.h
gtkmm-3.24.4/untracked/gtk/gtkmm/uimanager.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/uimanager.h
gtkmm-3.24.4/untracked/gtk/gtkmm/viewport.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/viewport.h
gtkmm-3.24.4/untracked/gtk/gtkmm/volumebutton.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/volumebutton.h
gtkmm-3.24.4/untracked/gtk/gtkmm/widget.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/widget.h
gtkmm-3.24.4/untracked/gtk/gtkmm/widgetpath.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/widgetpath.h
gtkmm-3.24.4/untracked/gtk/gtkmm/window.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/window.h
gtkmm-3.24.4/untracked/gtk/gtkmm/windowgroup.cc
gtkmm-3.24.4/untracked/gtk/gtkmm/windowgroup.h
gtkmm-3.24.4/untracked/gtk/gtkmm/wrap_init.cc
phase `unpack' succeeded after 0.9 seconds
starting phase `generate-gdk-pixbuf-loaders-cache-file'
GDK_PIXBUF_MODULE_FILE set to `/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache'
phase `generate-gdk-pixbuf-loaders-cache-file' succeeded after 0.2 seconds
starting phase `patch-usr-bin-file'
phase `patch-usr-bin-file' succeeded after 0.0 seconds
starting phase `patch-source-shebangs'
patch-shebang: ./autogen.sh: changing `/bin/sh' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/sh'
patch-shebang: ./tools/dummy-header.py: changing `/usr/bin/env python3' to `/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3'
patch-shebang: ./tools/gen_scripts/gdk_generate_docs.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/gdk_generate_enums.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/gdk_generate_extra_defs.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/gdk_generate_methods.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/generate_docs_and_defs.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/gtk_generate_docs.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/gtk_generate_enums.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/gtk_generate_extra_defs.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/gtk_generate_methods.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./tools/gen_scripts/init_generate.sh: changing `/bin/bash' to `/gnu/store/vx6vfbmmazvfi7vp8xyjn2mcyylvw9gn-bash-minimal-5.1.8/bin/bash'
patch-shebang: ./untracked/build_scripts/check-dllexport-usage.py: changing `/usr/bin/env python3' to `/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3'
patch-shebang: ./untracked/build_scripts/dist-build-scripts.py: changing `/usr/bin/env python3' to `/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3'
patch-shebang: ./untracked/build_scripts/dist-changelog.py: changing `/usr/bin/env python3' to `/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3'
patch-shebang: ./untracked/build_scripts/doc-reference.py: changing `/usr/bin/env python3' to `/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3'
patch-shebang: ./untracked/build_scripts/generate-binding.py: changing `/usr/bin/env python3' to `/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3'
phase `patch-source-shebangs' succeeded after 0.2 seconds
starting phase `configure'
Traceback (most recent call last):
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/mesonmain.py", line 138, in run
return options.run_func(options)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/msetup.py", line 294, in run
app.generate()
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/msetup.py", line 185, in generate
self._generate(env)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/msetup.py", line 229, in _generate
intr.run()
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreter/interpreter.py", line 2484, in run
super().run()
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 150, in run
self.evaluate_codeblock(self.ast, start=1)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 175, in evaluate_codeblock
raise e
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 168, in evaluate_codeblock
self.evaluate_statement(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 181, in evaluate_statement
return self.function_call(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 456, in function_call
res = func(node, func_args, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 713, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 115, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 276, in wrapper
return f(*nargs, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreter/interpreter.py", line 1969, in func_subdir
self.evaluate_codeblock(codeblock)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 175, in evaluate_codeblock
raise e
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 168, in evaluate_codeblock
self.evaluate_statement(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 191, in evaluate_statement
return self.evaluate_if(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 274, in evaluate_if
self.evaluate_codeblock(i.block)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 175, in evaluate_codeblock
raise e
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 168, in evaluate_codeblock
self.evaluate_statement(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_statement
self.assignment(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 568, in assignment
value = self.evaluate_statement(node.value)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 185, in evaluate_statement
return self.method_call(cur)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/interpreterbase.py", line 484, in method_call
res = obj.method_call(method_name, args, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreter/interpreterobjects.py", line 754, in method_call
ret = method(state, args, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 713, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/interpreterbase/decorators.py", line 115, in wrapped
return f(*wrapped_args, **wrapped_kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/modules/gnome.py", line 292, in compile_resources
target_c = GResourceTarget(name, state.subdir, state.subproject, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/modules/__init__.py", line 202, in __init__
super().__init__(name, subdir, subproject, kwargs)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/build.py", line 2330, in __init__
self.process_kwargs(kwargs, backend)
File "/gnu/store/iajwhaqi5ah00d80k1frh2ylfbpk92nc-meson-0.60.0/lib/python3.9/site-packages/mesonbuild/build.py", line 2441, in process_kwargs
if isinstance(kwargs['install_dir'], list):
KeyError: 'install_dir'
The Meson build system
Version: 0.60.0
Source dir: /tmp/guix-build-gtkmm-3.24.4.drv-0/gtkmm-3.24.4
Build dir: /tmp/guix-build-gtkmm-3.24.4.drv-0/build
Build type: native build
Project name: gtkmm
Project version: 3.24.4
C compiler for the host machine: gcc (gcc 10.3.0 "gcc (GCC) 10.3.0")
C linker for the host machine: gcc ld.bfd 2.37
C++ compiler for the host machine: c++ (gcc 10.3.0 "c++ (GCC) 10.3.0")
C++ linker for the host machine: c++ ld.bfd 2.37
Host machine cpu family: x86_64
Host machine cpu: x86_64
Program python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Found pkg-config: /gnu/store/wc7lqi6kgki5qzr01g3gvriwdadjiwpc-pkg-config-0.29.2/bin/pkg-config (0.29.2)
Run-time dependency gtk+-3.0 found: YES 3.24.30
Run-time dependency cairomm-1.0 found: YES 1.14.2
Run-time dependency pangomm-1.4 found: YES 2.46.0
Run-time dependency gdk-pixbuf-2.0 found: YES 2.42.4
Run-time dependency atkmm-1.6 found: YES 2.28.1
Run-time dependency epoxy found: YES 1.5.5
Run-time dependency glibmm-2.4 found: YES 2.64.5
Run-time dependency giomm-2.4 found: YES 2.64.5
Run-time dependency gtk+-unix-print-3.0 found: YES 3.24.30
Program mm-common-get found: YES (/gnu/store/jgjk0xss7qw8fj0zz4hn8ai9dnq7vlbi-mm-common-1.0.3/bin/mm-common-get)
Program m4 found: YES (/gnu/store/4xvaqcjyyrw6y1fwm3drl87xrz1vsln9-m4-1.4.18/bin/m4)
Program perl found: YES (/gnu/store/k18as85v9s0z66w40wg9sym4c5qz03l2-perl-5.34.0/bin/perl)
Program doxygen found: YES (/gnu/store/79lhh65z70qzwb0x4flgy1n4z05ypr4f-doxygen-1.9.1/bin/doxygen)
Program dot found: YES (/gnu/store/a4b3bc2rm9lqyd00vwz16p2ycfva62fk-graphviz-2.49.0/bin/dot)
Program xsltproc found: YES (/gnu/store/546xz3wbs1zriz48vz2kxyfjyyqbmgjh-libxslt-1.1.34/bin/xsltproc)
Compiler for C++ supports arguments -Wall: YES
../gtkmm-3.24.4/meson.build:307: WARNING: Consider using the built-in warning_level option instead of using "-Wall".
Library glibmm_generate_extra_defs-2.4 found: YES
Configuring gdkmm-3.0.pc using configuration
Configuring gdkmm-3.0-uninstalled.pc using configuration
Configuring gdkmmconfig.h using configuration
Configuring gdkmm.rc using configuration
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Configuring gtkmm-3.0.pc using configuration
Configuring gtkmm-3.0-uninstalled.pc using configuration
Configuring gtkmmconfig.h using configuration
Configuring gtkmm.rc using configuration
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Program /gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3 found: YES (/gnu/store/qygy1q98ikvapa3gnmhsq92dmnc60ffd-python-3.9.6/bin/python3)
Found pkg-config: /gnu/store/wc7lqi6kgki5qzr01g3gvriwdadjiwpc-pkg-config-0.29.2/bin/pkg-config (0.29.2)
Program glib-compile-resources found: YES (/gnu/store/w7a78mbizznxgykmqiia3l0bdyjpn675-glib-2.70.0-bin/bin/glib-compile-resources)
error: in phase 'configure': uncaught exception:
%exception #<&invoke-error program: "meson" arguments: ("--prefix=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4" "--buildtype=debugoptimized" "-Dc_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dcpp_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dbuild-documentation=true" "/tmp/guix-build-gtkmm-3.24.4.drv-0/gtkmm-3.24.4") exit-status: 2 term-signal: #f stop-signal: #f>
phase `configure' failed after 1.2 seconds
command "meson" "--prefix=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4" "--buildtype=debugoptimized" "-Dc_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dcpp_link_args=-Wl,-rpath=/gnu/store/s926zb4xrnnasc7iipnbyp9k0fnlsain-gtkmm-3.24.4/lib" "-Dbuild-documentation=true" "/tmp/guix-build-gtkmm-3.24.4.drv-0/gtkmm-3.24.4" failed with status 2
```
@Apteryks that seems to be another case that will need to be fixed. @dcbaker I grepped all places where we instantiate the CustomTarget() class, but we did not think about the subclasses we have for the gnome module...
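
For context, the sketch below is not Meson code; it is a hypothetical stand-in for the unconditional `kwargs['install_dir']` lookup visible in the traceback above (`process_kwargs` in `mesonbuild/build.py`). It shows why constructing a `GResourceTarget` without that key raises `KeyError`, and why the `setdefault('install_dir', [])` added in the patch below avoids it; the example kwargs contents are made up for illustration:

```python
def process_kwargs(kwargs):
    # Hypothetical stand-in for the lookup in the traceback above:
    # it indexes 'install_dir' without checking that the caller set it.
    if isinstance(kwargs['install_dir'], list):
        print('install_dir:', kwargs['install_dir'])

kwargs = {'input': ['resources.gresource.xml'], 'output': 'resources.c'}
try:
    process_kwargs(kwargs)            # raises KeyError: 'install_dir'
except KeyError as err:
    print('missing key:', err)

kwargs.setdefault('install_dir', [])  # the default the patch below introduces
process_kwargs(kwargs)                # prints: install_dir: []
```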
|
2021-10-28T17:55:23Z
|
<patch>
diff --git a/mesonbuild/modules/gnome.py b/mesonbuild/modules/gnome.py
--- a/mesonbuild/modules/gnome.py
+++ b/mesonbuild/modules/gnome.py
@@ -278,18 +278,20 @@ def compile_resources(self, state, args, kwargs):
if install_header and not export:
raise MesonException('GResource header is installed yet export is not enabled')
- kwargs['input'] = args[1]
- kwargs['output'] = output
- kwargs['depends'] = depends
+ c_kwargs = kwargs.copy()
+ c_kwargs['input'] = args[1]
+ c_kwargs['output'] = output
+ c_kwargs['depends'] = depends
+ c_kwargs.setdefault('install_dir', [])
if not mesonlib.version_compare(glib_version, gresource_dep_needed_version):
# This will eventually go out of sync if dependencies are added
- kwargs['depend_files'] = depend_files
- kwargs['command'] = cmd
+ c_kwargs['depend_files'] = depend_files
+ c_kwargs['command'] = cmd
else:
depfile = f'{output}.d'
- kwargs['depfile'] = depfile
- kwargs['command'] = copy.copy(cmd) + ['--dependency-file', '@DEPFILE@']
- target_c = GResourceTarget(name, state.subdir, state.subproject, kwargs)
+ c_kwargs['depfile'] = depfile
+ c_kwargs['command'] = copy.copy(cmd) + ['--dependency-file', '@DEPFILE@']
+ target_c = GResourceTarget(name, state.subdir, state.subproject, c_kwargs)
if gresource: # Only one target for .gresource files
return ModuleReturnValue(target_c, [target_c])
@@ -1456,7 +1458,7 @@ def mkenums(self, state, args, kwargs):
c_kwargs = custom_kwargs.copy()
# Never install the C file. Complain on bug tracker if you need it.
c_kwargs['install'] = False
- c_kwargs['install_dir'] = False
+ c_kwargs['install_dir'] = []
if h_template is not None:
if 'depends' in custom_kwargs:
c_kwargs['depends'] += [h_target]
</patch>
|
[]
|
[]
| |||
google__jax-958
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError when taking inverse
In this case, taking the inverse in jax.numpy throws the error 'No abstraction handler for type: <class 'jax.numpy.lax_numpy.ndarray'>', while doing the same thing in numpy does not.
```
import jax.numpy as np
import numpy.random as random
import matplotlib.pyplot as plt
class KalmanFilter():
    def __init__(self):
        self.initialized = False

    def to_ndarray(self, x):
        if(type(x) is not np.ndarray):
            x_2D = np.ndarray((1, 1))
            x_2D[0, 0] = x
        else:
            x_2D = x
        return x_2D

    def initialize(self, x, A, B, H, P, Q, R):
        self.initialized = True
        x, A, B, H, P, Q, R = self.to_ndarray(x), self.to_ndarray(A), self.to_ndarray(B), self.to_ndarray(H), self.to_ndarray(P), self.to_ndarray(Q), self.to_ndarray(R)
        self.x, self.A, self.B, self.H, self.P, self.Q, self.R = x, A, B, H, P, Q, R
        self.K = np.ndarray(A.shape)

    def step(self, u, z, n = 1):
        u, z = self.to_ndarray(u), self.to_ndarray(z)
        for i in range(n):
            self.x = self.A @ self.x + self.B @ u
            self.P = self.A @ self.P @ self.A.T + self.Q
            self.K = self.P @ self.H.T @ np.linalg.inv(self.H @ self.P @ self.H.T + self.R)
            self.x = self.x + self.K @ (z - self.H @ self.x)
            self.P = self.P - self.K @ self.H @ self.P
        if(type(z) is float):
            return float(self.x)
        else:
            return self.x

    def predict(self, u, z, n = 1):
        u, z = self.to_ndarray(u), self.to_ndarray(z)
        for i in range(n):
            x_temp = self.A @ self.x + self.B @ u
            P_temp = self.A @ self.P @ self.A.T + self.Q
            K_temp = P_temp @ self.H.T @ np.linalg.inv(self.H @ P_temp @ self.H.T + self.R)
            x_temp = x_temp + K_temp @ (z - self.H @ x_temp)
        if(type(z) is not np.ndarray):
            return float(x_temp)
        else:
            return x_temp


def test_kalman_filter(steps=100, show_plot=True):
    T = steps
    x_true = 0.5
    env_noise = 0.1
    x0 = 0

    model = KalmanFilter()
    model.initialize(x0, 1, 0, 1, 1, 0, env_noise)

    loss = lambda x_true, x_pred: (x_true - x_pred)**2

    results = []
    for i in range(T):
        z = x_true + float(random.normal(0, env_noise, 1))
        x_pred = model.step(0, z)
        cur_loss = float(loss(x_true, x_pred))
        results.append(cur_loss)

    if show_plot:
        plt.plot(results)
        plt.title("KalmanFilter model on constant signal")
        plt.show(block=False)
        plt.pause(1)
        plt.close()
    print("test_kalman_filter passed")
    return


if __name__=="__main__":
    test_kalman_filter()
```
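
The failure appears to come from `to_ndarray` instantiating `jax.numpy`'s `ndarray` class directly and assigning into it, which only surfaces once the object reaches `np.linalg.inv`. As a purely illustrative workaround sketch (not the fix for this issue, and assuming the inputs are scalars or existing arrays), the same 1x1 wrapping can be expressed with the functional API, avoiding both the class constructor and in-place assignment:

```python
import jax.numpy as np

def to_ndarray(x):
    # jax arrays are immutable: build the 1x1 array in one call
    # instead of allocating an ndarray and writing into a cell.
    if not isinstance(x, np.ndarray):
        return np.reshape(np.array(x, dtype=np.float32), (1, 1))
    return x
```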
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://raw.githubusercontent.com/google/jax/master/images/jax_logo_250px.png" alt="logo"></img>
3 </div>
4
5 # JAX: Autograd and XLA [](https://travis-ci.org/google/jax)
6
7 [**Reference docs**](https://jax.readthedocs.io/en/latest/)
8 | [**Install guide**](#installation)
9 | [**Quickstart**](#quickstart-colab-in-the-cloud)
10
11 JAX is [Autograd](https://github.com/hips/autograd) and
12 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md),
13 brought together for high-performance machine learning research.
14
15 With its updated version of [Autograd](https://github.com/hips/autograd),
16 JAX can automatically differentiate native
17 Python and NumPy functions. It can differentiate through loops, branches,
18 recursion, and closures, and it can take derivatives of derivatives of
19 derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)
20 via [`grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation,
21 and the two can be composed arbitrarily to any order.
22
23 What’s new is that JAX uses
24 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md)
25 to compile and run your NumPy programs on GPUs and TPUs. Compilation happens
26 under the hood by default, with library calls getting just-in-time compiled and
27 executed. But JAX also lets you just-in-time compile your own Python functions
28 into XLA-optimized kernels using a one-function API,
29 [`jit`](#compilation-with-jit). Compilation and automatic differentiation can be
30 composed arbitrarily, so you can express sophisticated algorithms and get
31 maximal performance without leaving Python.
32
33 Dig a little deeper, and you'll see that JAX is really an extensible system for
34 [composable function transformations](#transformations). Both
35 [`grad`](#automatic-differentiation-with-grad) and [`jit`](#compilation-with-jit)
36 are instances of such transformations. Another is [`vmap`](#auto-vectorization-with-vmap)
37 for automatic vectorization, with more to come.
38
39 This is a research project, not an official Google product. Expect bugs and
40 [sharp edges](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb).
41 Please help by trying it out, [reporting
42 bugs](https://github.com/google/jax/issues), and letting us know what you
43 think!
44
45 ```python
46 import jax.numpy as np
47 from jax import grad, jit, vmap
48
49 def predict(params, inputs):
50 for W, b in params:
51 outputs = np.dot(inputs, W) + b
52 inputs = np.tanh(outputs)
53 return outputs
54
55 def logprob_fun(params, inputs, targets):
56 preds = predict(params, inputs)
57 return np.sum((preds - targets)**2)
58
59 grad_fun = jit(grad(logprob_fun)) # compiled gradient evaluation function
60 perex_grads = jit(vmap(grad_fun, in_axes=(None, 0, 0))) # fast per-example grads
61 ```
62
63 JAX started as a research project by [Matt Johnson](https://github.com/mattjj),
64 [Roy Frostig](https://github.com/froystig), [Dougal
65 Maclaurin](https://github.com/dougalm), and [Chris
66 Leary](https://github.com/learyg), and is now developed [in the
67 open](https://github.com/google/jax) by a growing number of
68 [contributors](#contributors).
69
70 ### Contents
71 * [Quickstart: Colab in the Cloud](#quickstart-colab-in-the-cloud)
72 * [Installation](#installation)
73 * [Running the tests](#running-the-tests)
74 * [Reference documentation](#reference-documentation)
75 * [A brief tour](#a-brief-tour)
76 * [What's supported](#whats-supported)
77 * [Transformations](#transformations)
78 * [Random numbers are different](#random-numbers-are-different)
79 * [Mini-libraries](#mini-libraries)
80 * [How it works](#how-it-works)
81 * [What we're working on](#what-were-working-on)
82 * [Current gotchas](#current-gotchas)
83
84 ## Quickstart: Colab in the Cloud
85 Jump right in using a notebook in your browser, connected to a Google Cloud GPU. Here are some starter notebooks:
86 - [The basics: NumPy on accelerators, `grad` for differentiation, `jit` for compilation, and `vmap` for vectorization](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb)
87 - [Training a Simple Neural Network, with PyTorch Data Loading](https://colab.research.google.com/github/google/jax/blob/master/notebooks/neural_network_and_data_loading.ipynb)
88 - [Training a Simple Neural Network, with TensorFlow Dataset Data Loading](https://colab.research.google.com/github/google/jax/blob/master/notebooks/neural_network_with_tfds_data.ipynb)
89
90 And for a deeper dive into JAX:
91 - [Common gotchas and sharp edges](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb)
92 - [The Autodiff Cookbook, Part 1: easy and powerful automatic differentiation in JAX](https://colab.research.google.com/github/google/jax/blob/master/notebooks/autodiff_cookbook.ipynb)
93 - [Directly using XLA in Python](https://colab.research.google.com/github/google/jax/blob/master/notebooks/XLA_in_Python.ipynb)
94 - [MAML Tutorial with JAX](https://colab.research.google.com/github/google/jax/blob/master/notebooks/maml.ipynb).
95
96 ## Installation
97 JAX is written in pure Python, but it depends on XLA, which needs to be compiled
98 and installed as the `jaxlib` package. Use the following instructions to build
99 JAX from source or install a binary package with pip.
100
101 We support installing or building `jaxlib` on Linux and macOS platforms, but not
102 Windows. We're not currently working on Windows support, but contributions are
103 welcome (see [#438](https://github.com/google/jax/issues/438)).
104
105 ### Building JAX from source
106 First, obtain the JAX source code, and make sure `scipy` is installed.
107
108 ```bash
109 git clone https://github.com/google/jax
110 cd jax
111 pip install scipy
112 ```
113
114 If you are building on a Mac, make sure XCode and the XCode command line tools
115 are installed.
116
117 To build XLA with CUDA support, you can run
118
119 ```bash
120 python build/build.py --enable_cuda
121 pip install -e build # install jaxlib (includes XLA)
122 pip install -e . # install jax (pure Python)
123 ```
124
125 See `python build/build.py --help` for configuration options, including ways to
126 specify the paths to CUDA and CUDNN, which you must have installed. The build
127 also depends on NumPy, and a compiler toolchain corresponding to that of
128 Ubuntu 16.04 or newer.
129
130 To build XLA without CUDA GPU support (CPU only), drop the `--enable_cuda`:
131
132 ```bash
133 python build/build.py
134 pip install -e build # install jaxlib (includes XLA)
135 pip install -e . # install jax
136 ```
137
138 To upgrade to the latest version from GitHub, just run `git pull` from the JAX
139 repository root, and rebuild by running `build.py` if necessary. You shouldn't have
140 to reinstall because `pip install -e` sets up symbolic links from site-packages
141 into the repository.
142
143 ### pip installation
144
145 Installing XLA with prebuilt binaries via `pip` is still experimental,
146 especially with GPU support. Let us know on [the issue
147 tracker](https://github.com/google/jax/issues) if you run into any errors.
148
149 To install a CPU-only version, which might be useful for doing local
150 development on a laptop, you can run
151
152 ```bash
153 pip install --upgrade jax jaxlib # CPU-only version
154 ```
155
156 If you want to install JAX with both CPU and GPU support, using existing CUDA
157 and CUDNN7 installations on your machine (for example, preinstalled on your
158 cloud VM), you can run
159
160 ```bash
161 # install jaxlib
162 PYTHON_VERSION=cp27 # alternatives: cp27, cp35, cp36, cp37
163 CUDA_VERSION=cuda92 # alternatives: cuda90, cuda92, cuda100
164 PLATFORM=linux_x86_64 # alternatives: linux_x86_64
165 BASE_URL='https://storage.googleapis.com/jax-wheels'
166 pip install --upgrade $BASE_URL/$CUDA_VERSION/jaxlib-0.1.21-$PYTHON_VERSION-none-$PLATFORM.whl
167
168 pip install --upgrade jax # install jax
169 ```
170
171 The library package name must correspond to the version of the existing CUDA
172 installation you want to use, with `cuda100` for CUDA 10.0, `cuda92` for CUDA
173 9.2, and `cuda90` for CUDA 9.0. To find your CUDA and CUDNN versions, you can
174 run commands like these, depending on your CUDNN install path:
175
176 ```bash
177 nvcc --version
178 grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h # might need different path
179 ```
180
181 The Python version must match your Python interpreter. There are prebuilt wheels
182 for Python 2.7, 3.6, and 3.7; for anything else, you must build from source.
183
184
185 ## Running the tests
186
187 To run all the JAX tests, we recommend using `pytest-xdist`, which can run tests in
188 parallel. First, install `pytest-xdist` by running `pip install pytest-xdist`.
189 Then, from the repository root directory run
190
191 ```bash
192 pytest -n auto tests
193 ```
194
195 JAX generates test cases combinatorially, and you can control the number of
196 cases that are generated and checked for each test (default 10):
197
198 ```bash
199 JAX_NUM_GENERATED_CASES=100 pytest -n auto tests
200 ```
201
202 You can run a more specific set of tests using
203 [`pytest`](https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests)'s
204 built-in selection mechanisms, or alternatively you can run a specific test
205 file directly to see more detailed information about the cases being run:
206
207 ```bash
208 python tests/lax_numpy_test.py --num_generated_cases=5
209 ```
210
211 ## Reference documentation
212
213 For details about the JAX API, see the
214 [reference documentation](https://jax.readthedocs.io/).
215
216 ## A brief tour
217
218 ```python
219 In [1]: import jax.numpy as np
220
221 In [2]: from jax import random
222
223 In [3]: key = random.PRNGKey(0)
224
225 In [4]: x = random.normal(key, (5000, 5000))
226
227 In [5]: print(np.dot(x, x.T) / 2) # fast!
228 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
229
230 In [6]: print(np.dot(x, x.T) / 2) # even faster!
231 # JIT-compiled code is cached and reused in the 2nd call
232 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
233 ```
234
235 What’s happening behind-the-scenes is that JAX is using XLA to just-in-time
236 (JIT) compile and execute these individual operations on the GPU. First the
237 `random.normal` call is compiled and the array referred to by `x` is generated
238 on the GPU. Next, each function called on `x` (namely `transpose`, `dot`, and
239 `divide`) is individually JIT-compiled and executed, each keeping its results on
240 the device.
241 It’s only when a value needs to be printed, plotted, saved, or passed into a raw
242 NumPy function that a read-only copy of the value is brought back to the host as
243 an ndarray and cached. The second call to `dot` is faster because the
244 JIT-compiled code is cached and reused, saving the compilation time.
245
246 The fun really starts when you use `grad` for automatic differentiation and
247 `jit` to compile your own functions end-to-end. Here’s a more complete toy
248 example:
249
250 ```python
251 from jax import grad, jit
252 import jax.numpy as np
253
254 def sigmoid(x):
255 return 0.5 * (np.tanh(x / 2.) + 1)
256
257 # Outputs probability of a label being true according to logistic model.
258 def logistic_predictions(weights, inputs):
259 return sigmoid(np.dot(inputs, weights))
260
261 # Training loss is the negative log-likelihood of the training labels.
262 def loss(weights, inputs, targets):
263 preds = logistic_predictions(weights, inputs)
264 label_logprobs = np.log(preds) * targets + np.log(1 - preds) * (1 - targets)
265 return -np.sum(label_logprobs)
266
267 # Build a toy dataset.
268 inputs = np.array([[0.52, 1.12, 0.77],
269 [0.88, -1.08, 0.15],
270 [0.52, 0.06, -1.30],
271 [0.74, -2.49, 1.39]])
272 targets = np.array([True, True, False, True])
273
274 # Define a compiled function that returns gradients of the training loss
275 training_gradient_fun = jit(grad(loss))
276
277 # Optimize weights using gradient descent.
278 weights = np.array([0.0, 0.0, 0.0])
279 print("Initial loss: {:0.2f}".format(loss(weights, inputs, targets)))
280 for i in range(100):
281 weights -= 0.1 * training_gradient_fun(weights, inputs, targets)
282
283 print("Trained loss: {:0.2f}".format(loss(weights, inputs, targets)))
284 ```
285
286 To see more, check out the [quickstart
287 notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb),
288 a [simple MNIST classifier
289 example](https://github.com/google/jax/blob/master/examples/mnist_classifier.py)
290 and the rest of the [JAX
291 examples](https://github.com/google/jax/blob/master/examples/).
292
293 ## What's supported
294
295 If you’re using JAX just as an accelerator-backed NumPy, without using `grad` or
296 `jit` in your code, then in principle there are no constraints, though some
297 NumPy functions haven’t been implemented yet. A list of supported functions can
298 be found in the [reference documentation](https://jax.readthedocs.io/).
299
300 Generally using `np.dot(A, B)` is
301 better than `A.dot(B)` because the former gives us more opportunities to run the
302 computation on the device. NumPy also does a lot of work to cast any array-like
303 function arguments to arrays, as in `np.sum([x, y])`, while `jax.numpy`
304 typically requires explicit casting of array arguments, like
305 `np.sum(np.array([x, y]))`.
306
307 For automatic differentiation with `grad`, JAX has the same restrictions
308 as [Autograd](https://github.com/hips/autograd). Specifically, differentiation
309 works with indexing (`x = A[i, j, :]`) but not indexed assignment (`A[i, j] =
310 x`) or indexed in-place updating (`A[i] += b`) (use
311 [`jax.ops.index_update`](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index_update.html#jax.ops.index_update)
312 or
313 [`jax.ops.index_add`](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index_add.html#jax.ops.index_add)
314 instead). You can use lists, tuples, and
315 dicts freely: JAX doesn't even see them. Using `np.dot(A, B)` rather than
316 `A.dot(B)` is required for automatic differentiation when `A` is a raw ndarray.
317
318 For compiling your own functions with `jit` there are a few more requirements.
319 Because `jit` aims to specialize Python functions only on shapes and dtypes
320 during tracing, rather than on concrete values, Python control flow that depends
321 on concrete values won’t be able to execute and will instead raise an error. If
322 you want compiled control flow, use structured control flow primitives like
323 lax.cond and lax.while_loop. Some indexing features, like slice-based indexing
324 `A[i:i+5]` for argument-dependent `i`, or boolean-based indexing `A[bool_ind]`
325 for argument-dependent `bool_ind`, produce abstract values of unknown shape and
326 are thus unsupported in `jit` functions.
327
328 In general, JAX is intended to be used with a functional style of Python
329 programming. Functions passed to transformations like `grad` and `jit` are
330 expected to be free of side-effects. You can write print statements for
331 debugging but they may only be executed once if they're under a `jit` decorator.
332
333 > TLDR **Do use**
334 >
335 > * Functional programming
336 > * [Many](https://jax.readthedocs.io/en/latest/jax.numpy.html) of NumPy’s
337 > functions (help us add more!)
338 > * [Some](https://jax.readthedocs.io/en/latest/jax.scipy.html) SciPy functions
339 > * Indexing and slicing of arrays like `x = A[[5, 1, 7], :, 2:4]`
340 > * Explicit array creation from lists like `A = np.array([x, y])`
341 >
342 > **Don’t use**
343 >
344 > * Assignment into arrays like `A[0, 0] = x` (use
345 > [`jax.ops.index_update`](https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.index_add.html#jax.ops.index_update)
346 > instead)
347 > * Implicit casting to arrays like `np.sum([x, y])` (use `np.sum(np.array([x,
348 >   y]))` instead)
349 > * `A.dot(B)` method syntax for functions of more than one argument (use
350 > `np.dot(A, B)` instead)
351 > * Side-effects like mutation of arguments or mutation of global variables
352 > * The `out` argument of NumPy functions
353 > * Dtype casting like `np.float64(x)` (use `x.astype('float64')` or
354 > `x.astype(np.float64)` instead).
355 >
356 > **For jit functions, also don’t use**
357 >
358 > * Control flow based on dynamic values `if x > 0: ...`. Control flow based
359 > on shapes is fine: `if x.shape[0] > 2: ...` and `for subarr in array`.
360 > * Slicing `A[i:i+5]` for dynamic index `i` (use `lax.dynamic_slice` instead)
361 > or boolean indexing `A[bool_ind]` for traced values `bool_ind`.
362
363 You should get loud errors if your code violates any of these.
364
365 ## Transformations
366
367 At its core, JAX is an extensible system for transforming numerical functions.
368 We currently expose three important transformations: `grad`, `jit`, and `vmap`.
369
370 ### Automatic differentiation with grad
371
372 JAX has roughly the same API as [Autograd](https://github.com/hips/autograd).
373 The most popular function is `grad` for reverse-mode gradients:
374
375 ```python
376 from jax import grad
377 import jax.numpy as np
378
379 def tanh(x): # Define a function
380 y = np.exp(-2.0 * x)
381 return (1.0 - y) / (1.0 + y)
382
383 grad_tanh = grad(tanh) # Obtain its gradient function
384 print(grad_tanh(1.0)) # Evaluate it at x = 1.0
385 # prints 0.41997434161402603
386 ```
387
388 You can differentiate to any order with `grad`.
389
390 For more advanced autodiff, you can use `jax.vjp` for reverse-mode
391 vector-Jacobian products and `jax.jvp` for forward-mode Jacobian-vector
392 products. The two can be composed arbitrarily with one another, and with other
393 JAX transformations. Here's one way to compose
394 those to make a function that efficiently computes full Hessian matrices:
395
396 ```python
397 from jax import jit, jacfwd, jacrev
398 def hessian(fun):
399 return jit(jacfwd(jacrev(fun)))
400 ```
401
402 As with Autograd, you're free to use differentiation with Python control
403 structures:
404
405 ```python
406 def abs_val(x):
407 if x > 0:
408 return x
409 else:
410 return -x
411
412 abs_val_grad = grad(abs_val)
413 print(abs_val_grad(1.0)) # prints 1.0
414 print(abs_val_grad(-1.0)) # prints -1.0 (abs_val is re-evaluated)
415 ```
416
417 ### Compilation with jit
418
419 You can use XLA to compile your functions end-to-end with `jit`, used either as
420 an `@jit` decorator or as a higher-order function.
421
422 ```python
423 import jax.numpy as np
424 from jax import jit
425
426 def slow_f(x):
427 # Element-wise ops see a large benefit from fusion
428 return x * x + x * 2.0
429
430 x = np.ones((5000, 5000))
431 fast_f = jit(slow_f)
432 %timeit -n10 -r3 fast_f(x) # ~ 4.5 ms / loop on Titan X
433 %timeit -n10 -r3 slow_f(x) # ~ 14.5 ms / loop (also on GPU via JAX)
434 ```
435
436 You can mix `jit` and `grad` and any other JAX transformation however you like.
437
438 ### Auto-vectorization with vmap
439
440 `vmap` is the vectorizing map.
441 It has the familiar semantics of mapping a function along array axes, but
442 instead of keeping the loop on the outside, it pushes the loop down into a
443 function’s primitive operations for better performance.
444
445 Using `vmap` can save you from having to carry around batch dimensions in your
446 code. For example, consider this simple *unbatched* neural network prediction
447 function:
448
449 ```python
450 def predict(params, input_vec):
451 assert input_vec.ndim == 1
452 for W, b in params:
453 output_vec = np.dot(W, input_vec) + b # `input_vec` on the right-hand side!
454 input_vec = np.tanh(output_vec)
455 return output_vec
456 ```
457
458 We often instead write `np.dot(inputs, W)` to allow for a batch dimension on the
459 left side of `inputs`, but we’ve written this particular prediction function to
460 apply only to single input vectors. If we wanted to apply this function to a
461 batch of inputs at once, semantically we could just write
462
463 ```python
464 from functools import partial
465 predictions = np.stack(list(map(partial(predict, params), input_batch)))
466 ```
467
468 But pushing one example through the network at a time would be slow! It’s better
469 to vectorize the computation, so that at every layer we’re doing matrix-matrix
470 multiplies rather than matrix-vector multiplies.
471
472 The `vmap` function does that transformation for us. That is, if we write
473
474 ```python
475 from jax import vmap
476 predictions = vmap(partial(predict, params))(input_batch)
477 # or, alternatively
478 predictions = vmap(predict, in_axes=(None, 0))(params, input_batch)
479 ```
480
481 then the `vmap` function will push the outer loop inside the function, and our
482 machine will end up executing matrix-matrix multiplications exactly as if we’d
483 done the batching by hand.
484
485 It’s easy enough to manually batch a simple neural network without `vmap`, but
486 in other cases manual vectorization can be impractical or impossible. Take the
487 problem of efficiently computing per-example gradients: that is, for a fixed set
488 of parameters, we want to compute the gradient of our loss function evaluated
489 separately at each example in a batch. With `vmap`, it’s easy:
490
491 ```python
492 per_example_gradients = vmap(partial(grad(loss), params))(inputs, targets)
493 ```
494
495 Of course, `vmap` can be arbitrarily composed with `jit`, `grad`, and any other
496 JAX transformation! We use `vmap` with both forward- and reverse-mode automatic
497 differentiation for fast Jacobian and Hessian matrix calculations in
498 `jax.jacfwd`, `jax.jacrev`, and `jax.hessian`.
499
500
501 ## Random numbers are different
502
503 JAX needs a [functional pseudo-random number generator (PRNG) system](design_notes/prng.md) to provide
504 reproducible results invariant to compilation boundaries and backends, while
505 also maximizing performance by enabling vectorized generation and
506 parallelization across random calls. The `numpy.random` library doesn’t have
507 those properties. The `jax.random` library meets those needs: it’s functionally
508 pure, but it doesn’t require you to pass stateful random objects back out of
509 every function.
510
511 The `jax.random` library uses
512 [count-based PRNGs](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf)
513 and a functional array-oriented
514 [splitting model](http://publications.lib.chalmers.se/records/fulltext/183348/local_183348.pdf).
515 To generate random values, you call a function like `jax.random.normal` and give
516 it a PRNG key:
517
518 ```python
519 import jax.random as random
520
521 key = random.PRNGKey(0)
522 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
523 ```
524
525 If we make the same call again with the same key, we get the same values:
526
527 ```python
528 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
529 ```
530
531 The key never gets updated. So how do we get fresh random values? We use
532 `jax.random.split` to create new keys from existing ones. A common pattern is to
533 split off a new key for every function call that needs random values:
534
535 ```python
536 key = random.PRNGKey(0)
537
538 key, subkey = random.split(key)
539 print(random.normal(subkey, shape=(3,))) # [ 1.1378783 -1.22095478 -0.59153646]
540
541 key, subkey = random.split(key)
542 print(random.normal(subkey, shape=(3,))) # [-0.06607265 0.16676566 1.17800343]
543 ```
544
545 By splitting the PRNG key, not only do we avoid having to thread random states
546 back out of every function call, but also we can generate multiple random arrays
547 in parallel because we can avoid unnecessary sequential dependencies.
548
549 There's a gotcha here, which is that it's easy to unintentionally reuse a key
550 without splitting. We intend to add a check for this (a sort of dynamic linear
551 typing) but for now it's something to be careful about.
552
553 For more detailed information on the design and the reasoning behind it, see the
554 [PRNG design doc](design_notes/prng.md).
555
556
557 ## Mini-libraries
558
559 JAX provides some small, experimental libraries for machine learning. These
560 libraries are in part about providing tools and in part about serving as
561 examples for how to build such libraries using JAX. Each one is only a few
562 hundred lines of code, so take a look inside and adapt them as you need!
563
564 ### Neural-net building with Stax
565
566 **Stax** is a functional neural network building library. The basic idea is that
567 a single layer or an entire network can be modeled as an `(init_fun, apply_fun)`
568 pair. The `init_fun` is used to initialize network parameters and the
569 `apply_fun` takes parameters and inputs to produce outputs. There are
570 constructor functions for common basic pairs, like `Conv` and `Relu`, and these
571 pairs can be composed in series using `stax.serial` or in parallel using
572 `stax.parallel`.
573
574 Here’s an example:
575
576 ```python
577 import jax.numpy as np
578 from jax import random
579 from jax.experimental import stax
580 from jax.experimental.stax import Conv, Dense, MaxPool, Relu, Flatten, LogSoftmax
581
582 # Use stax to set up network initialization and evaluation functions
583 net_init, net_apply = stax.serial(
584 Conv(32, (3, 3), padding='SAME'), Relu,
585 Conv(64, (3, 3), padding='SAME'), Relu,
586 MaxPool((2, 2)), Flatten,
587 Dense(128), Relu,
588 Dense(10), LogSoftmax,
589 )
590
591 # Initialize parameters, not committing to a batch shape
592 rng = random.PRNGKey(0)
593 in_shape = (-1, 28, 28, 1)
594 out_shape, net_params = net_init(rng, in_shape)
595
596 # Apply network to dummy inputs
597 inputs = np.zeros((128, 28, 28, 1))
598 predictions = net_apply(net_params, inputs)
599 ```
600
601 ### First-order optimization
602
603 JAX has a minimal optimization library focused on stochastic first-order
604 optimizers. Every optimizer is modeled as an `(init_fun, update_fun,
605 get_params)` triple of functions. The `init_fun` is used to initialize the
606 optimizer state, which could include things like momentum variables, and the
607 `update_fun` accepts a gradient and an optimizer state to produce a new
608 optimizer state. The `get_params` function extracts the current iterate (i.e.
609 the current parameters) from the optimizer state. The parameters being optimized
610 can be ndarrays or arbitrarily-nested list/tuple/dict structures, so you can
611 store your parameters however you’d like.
612
613 Here’s an example, using `jit` to compile the whole update end-to-end:
614
615 ```python
616 from jax.experimental import optimizers
617 from jax import jit, grad
618
619 # Define a simple squared-error loss
620 def loss(params, batch):
621 inputs, targets = batch
622 predictions = net_apply(params, inputs)
623 return np.sum((predictions - targets)**2)
624
625 # Use optimizers to set optimizer initialization and update functions
626 opt_init, opt_update, get_params = optimizers.momentum(step_size=1e-3, mass=0.9)
627
628 # Define a compiled update step
629 @jit
630 def step(i, opt_state, batch):
631 params = get_params(opt_state)
632 g = grad(loss)(params, batch)
633 return opt_update(i, g, opt_state)
634
635 # Dummy input data stream
636 data_generator = ((np.zeros((128, 28, 28, 1)), np.zeros((128, 10)))
637 for _ in range(10))
638
639 # Optimize parameters in a loop
640 opt_state = opt_init(net_params)
641 for i in range(10):
642 opt_state = step(i, opt_state, next(data_generator))
643 net_params = get_params(opt_state)
644 ```
645
646 ## How it works
647
648 Programming in machine learning is about expressing and transforming functions.
649 Transformations include automatic differentiation, compilation for accelerators,
650 and automatic batching. High-level languages like Python are great for
651 expressing functions, but usually all we can do with them is apply them. We lose
652 access to their internal structure which would let us perform transformations.
653
654 JAX is a tool for specializing and translating high-level Python+NumPy functions
655 into a representation that can be transformed and then lifted back into a Python
656 function.
657
658 
659
660 JAX specializes Python functions by tracing. Tracing a function means monitoring
661 all the basic operations that are applied to its input to produce its output,
662 and recording these operations and the data-flow between them in a directed
663 acyclic graph (DAG). To perform tracing, JAX wraps primitive operations, like
664 basic numerical kernels, so that when they’re called they add themselves to a
665 list of operations performed along with their inputs and outputs. To keep track
666 of how data flows between these primitives, values being tracked are wrapped in
667 instances of the `Tracer` class.
668
669 When a Python function is provided to `grad` or `jit`, it’s wrapped for tracing
670 and returned. When the wrapped function is called, we abstract the concrete
671 arguments provided into instances of the `AbstractValue` class, box them for
672 tracing in instances of the `Tracer` class, and call the function on them.
673 Abstract arguments represent sets of possible values rather than specific
674 values: for example, `jit` abstracts ndarray arguments to abstract values that
675 represent all ndarrays with the same shape and dtype. In contrast, `grad`
676 abstracts ndarray arguments to represent an infinitesimal neighborhood of the
677 underlying
678 value. By tracing the Python function on these abstract values, we ensure that
679 it’s specialized enough so that it’s tractable to transform, and that it’s still
680 general enough so that the transformed result is useful, and possibly reusable.
681 These transformed functions are then lifted back into Python callables in a way
682 that allows them to be traced and transformed again as needed.
683
684 The primitive functions that JAX traces are mostly in 1:1 correspondence with
685 [XLA HLO](https://www.tensorflow.org/xla/operation_semantics) and are defined
686 in [lax.py](https://github.com/google/jax/blob/master/jax/lax.py). This 1:1
687 correspondence makes most of the translations to XLA essentially trivial, and
688 ensures we only have a small set of primitives to cover for other
689 transformations like automatic differentiation. The [`jax.numpy`
690 layer](https://github.com/google/jax/blob/master/jax/numpy/) is written in pure
691 Python simply by expressing NumPy functions in terms of the LAX functions (and
692 other NumPy functions we’ve already written). That makes `jax.numpy` easy to
693 extend.
694
695 When you use `jax.numpy`, the underlying LAX primitives are `jit`-compiled
696 behind the scenes, allowing you to write unrestricted Python+Numpy code while
697 still executing each primitive operation on an accelerator.
698
699 But JAX can do more: instead of just compiling and dispatching to a fixed set of
700 individual primitives, you can use `jit` on larger and larger functions to be
701 end-to-end compiled and optimized. For example, instead of just compiling and
702 dispatching a convolution op, you can compile a whole network, or a whole
703 gradient evaluation and optimizer update step.
704
705 The tradeoff is that `jit` functions have to satisfy some additional
706 specialization requirements: since we want to compile traces that are
707 specialized on shapes and dtypes, but not specialized all the way to concrete
708 values, the Python code under a `jit` decorator must be applicable to abstract
709 values. If we try to evaluate `x > 0` on an abstract `x`, the result is an
710 abstract value representing the set `{True, False}`, and so a Python branch like
711 `if x > 0` will raise an error: it doesn’t know which way to go!
712 See [What’s supported](#whats-supported) for more
713 information about `jit` requirements.
714
715 The good news about this tradeoff is that `jit` is opt-in: JAX libraries use
716 `jit` on individual operations and functions behind the scenes, allowing you to
717 write unrestricted Python+Numpy and still make use of a hardware accelerator.
718 But when you want to maximize performance, you can often use `jit` in your own
719 code to compile and end-to-end optimize much bigger functions.
720
721 ## What we're working on
722 1. Documentation!
723 2. Cloud TPU support
724 3. Multi-GPU and multi-TPU support
725 4. Full NumPy coverage and some SciPy coverage
726 5. Full coverage for vmap
727 6. Make everything faster
728 * Lowering the XLA function dispatch overhead
729 * Linear algebra routines (MKL on CPU, MAGMA on GPU)
730 7. `cond` and `while` primitives with efficient automatic differentiation
731
732 ## Current gotchas
733
734 For a survey of current gotchas, with examples and explanations, we highly
735 recommend reading the [Gotchas Notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb).
736
737 Some stand-out gotchas that might surprise NumPy users:
738 1. JAX enforces single-precision (32-bit, e.g. `float32`) values by default, and
739 to enable double-precision (64-bit, e.g. `float64`) one needs to set the
740 `jax_enable_x64` variable **at startup** (or set the environment variable
741 `JAX_ENABLE_x64=True`, see [the Gotchas Notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb#scrollTo=YTktlwTTMgFl))
742 2. Some of NumPy's dtype promotion semantics involving a mix of Python scalars
743 and NumPy types aren't preserved, namely `np.add(1, np.array([2],
744 np.float32)).dtype` is `float64` rather than `float32`.
745 3. In-place mutation of arrays isn't supported, though [there is an
746 alternative](https://jax.readthedocs.io/en/latest/jax.ops.html). Generally
747 JAX requires functional code.
748 4. PRNGs are different and can be awkward, though for [good
749 reasons](https://github.com/google/jax/blob/master/design_notes/prng.md), and
750 non-reuse (linearity) is not yet checked.
751
752 See [the notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb) for much more information.
753
754 ## Contributors
755
756 So far, JAX includes lots of help and [contributions](https://github.com/google/jax/graphs/contributors). In addition to the code contributions reflected on GitHub, JAX has benefitted substantially from the advice of
757 [Jamie Townsend](https://github.com/j-towns),
758 [Peter Hawkins](https://github.com/hawkinsp),
759 [Jonathan Ragan-Kelley](https://people.eecs.berkeley.edu/~jrk/),
760 [Alex Wiltschko](http://github.com/alexbw),
761 George Dahl,
762 [Stephan Hoyer](http://stephanhoyer.com/),
763 Sam Schoenholz,
764 [Eli Bendersky](https://github.com/eliben),
765 Zak Stone,
766 [Alexey Radul](https://github.com/axch),
767 Michael Isard,
768 Skye Wanderman-Milne,
769 and many others.
770
[end of README.md]
[start of jax/scipy/linalg.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 import warnings
20
21 import scipy.linalg
22
23 from .. import lax
24 from .. import lax_linalg
25 from ..numpy.lax_numpy import _wraps
26 from ..numpy import lax_numpy as np
27 from ..numpy import linalg as np_linalg
28
29
30 _EXPERIMENTAL_WARNING = "scipy.linalg support is experimental and may cause silent failures or wrong outputs"
31
32 _T = lambda x: np.swapaxes(x, -1, -2)
33
34 @_wraps(scipy.linalg.cholesky)
35 def cholesky(a, lower=False, overwrite_a=False, check_finite=True):
36 warnings.warn(_EXPERIMENTAL_WARNING)
37 del overwrite_a, check_finite
38 a = np_linalg._promote_arg_dtypes(np.asarray(a))
39 l = lax_linalg.cholesky(a if lower else np.conj(_T(a)), symmetrize_input=False)
40 return l if lower else np.conj(_T(l))
41
42
43 @_wraps(scipy.linalg.cho_factor)
44 def cho_factor(a, lower=False, overwrite_a=False, check_finite=True):
45 return (cholesky(a, lower=lower), lower)
46
47
48 @_wraps(scipy.linalg.cho_solve)
49 def cho_solve(c_and_lower, b, overwrite_b=False, check_finite=True):
50 del overwrite_b, check_finite
51 c, lower = c_and_lower
52
53 c, b = np_linalg._promote_arg_dtypes(np.asarray(c), np.asarray(b))
54 c_shape = np.shape(c)
55 b_shape = np.shape(b)
56 c_ndims = len(c_shape)
57 b_ndims = len(b_shape)
58 if not (c_ndims >= 2 and c_shape[-1] == c_shape[-2] and
59 (c_ndims == b_ndims or c_ndims == b_ndims + 1)):
60 msg = ("The arguments to solve must have shapes a=[..., m, m] and "
61 "b=[..., m, k] or b=[..., m]; got a={} and b={}")
62 raise ValueError(msg.format(c_shape, b_shape))
63
64 # TODO(phawkins): triangular_solve only supports matrices on the RHS, so we
65 # add a dummy dimension. Extend it to support vectors and simplify this.
66 b = b if c_ndims == b_ndims else b[..., None]
67 b = lax_linalg.triangular_solve(c, b, left_side=True, lower=lower,
68 transpose_a=not lower, conjugate_a=not lower)
69 b = lax_linalg.triangular_solve(c, b, left_side=True, lower=lower,
70 transpose_a=lower, conjugate_a=lower)
71 return b[..., 0] if c_ndims != b_ndims else b
72
73
74 @_wraps(scipy.linalg.svd)
75 def svd(a, full_matrices=True, compute_uv=True, overwrite_a=False,
76 check_finite=True, lapack_driver='gesdd'):
77 warnings.warn(_EXPERIMENTAL_WARNING)
78 del overwrite_a, check_finite, lapack_driver
79 a = np_linalg._promote_arg_dtypes(np.asarray(a))
80 return lax_linalg.svd(a, full_matrices, compute_uv)
81
82
83 @_wraps(scipy.linalg.det)
84 def det(a, overwrite_a=False, check_finite=True):
85 warnings.warn(_EXPERIMENTAL_WARNING)
86 del overwrite_a, check_finite
87 return np_linalg.det(a)
88
89
90 @_wraps(scipy.linalg.eigh)
91 def eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False,
92 overwrite_b=False, turbo=True, eigvals=None, type=1,
93 check_finite=True):
94 del overwrite_a, overwrite_b, turbo, check_finite
95 if b is not None:
96 raise NotImplementedError("Only the b=None case of eigh is implemented")
97 if type != 1:
98 raise NotImplementedError("Only the type=1 case of eigh is implemented.")
99 if eigvals is not None:
100 raise NotImplementedError(
101 "Only the eigvals=None case of eigh is implemented.")
102
103 a = np_linalg._promote_arg_dtypes(np.asarray(a))
104 v, w = lax_linalg.eigh(a, lower=lower)
105
106 if eigvals_only:
107 return w
108 else:
109 return w, v
110
111
112
113 @_wraps(scipy.linalg.inv)
114 def inv(a, overwrite_a=False, check_finite=True):
115 warnings.warn(_EXPERIMENTAL_WARNING)
116 del overwrite_a, check_finite
117 return np_linalg.inv(a)
118
119
120 @_wraps(scipy.linalg.lu_factor)
121 def lu_factor(a, overwrite_a=False, check_finite=True):
122 del overwrite_a, check_finite
123 a = np_linalg._promote_arg_dtypes(np.asarray(a))
124 return lax_linalg.lu(a)
125
126
127 @_wraps(scipy.linalg.lu)
128 def lu(a, permute_l=False, overwrite_a=False, check_finite=True):
129 del overwrite_a, check_finite
130 a = np_linalg._promote_arg_dtypes(np.asarray(a))
131 lu, pivots = lax_linalg.lu(a)
132 dtype = lax.dtype(a)
133 m, n = np.shape(a)
134 permutation = lax_linalg.lu_pivots_to_permutation(pivots, m)
135 p = np.real(np.array(permutation == np.arange(m)[:, None], dtype=dtype))
136 k = min(m, n)
137 l = np.tril(lu, -1)[:, :k] + np.eye(m, k, dtype=dtype)
138 u = np.triu(lu)[:k, :]
139 if permute_l:
140 return np.matmul(p, l), u
141 else:
142 return p, l, u
143
144
145 @_wraps(scipy.linalg.qr)
146 def qr(a, overwrite_a=False, lwork=None, mode="full", pivoting=False,
147 check_finite=True):
148 warnings.warn(_EXPERIMENTAL_WARNING)
149 del overwrite_a, lwork, check_finite
150 if pivoting:
151 raise NotImplementedError(
152 "The pivoting=True case of qr is not implemented.")
153 if mode in ("full", "r"):
154 full_matrices = True
155 elif mode == "economic":
156 full_matrices = False
157 else:
158 raise ValueError("Unsupported QR decomposition mode '{}'".format(mode))
159 a = np_linalg._promote_arg_dtypes(np.asarray(a))
160 q, r = lax_linalg.qr(a, full_matrices)
161 if mode == "r":
162 return r
163 return q, r
164
165 @_wraps(scipy.linalg.solve)
166 def solve(a, b, sym_pos=False, lower=False, overwrite_a=False, overwrite_b=False,
167 debug=False, check_finite=True):
168 del overwrite_a, overwrite_b, debug, check_finite
169 if not sym_pos:
170 return np_linalg.solve(a, b)
171
172 a, b = np_linalg._promote_arg_dtypes(np.asarray(a), np.asarray(b))
173 return cho_solve(cho_factor(a, lower=lower), b)
174
175
176 @_wraps(scipy.linalg.solve_triangular)
177 def solve_triangular(a, b, trans=0, lower=False, unit_diagonal=False,
178 overwrite_b=False, debug=None, check_finite=True):
179 warnings.warn(_EXPERIMENTAL_WARNING)
180 del overwrite_b, debug, check_finite
181
182 if trans == 0 or trans == "N":
183 transpose_a, conjugate_a = False, False
184 elif trans == 1 or trans == "T":
185 transpose_a, conjugate_a = True, False
186 elif trans == 2 or trans == "C":
187 transpose_a, conjugate_a = True, True
188 else:
189 raise ValueError("Invalid 'trans' value {}".format(trans))
190
191 a, b = np_linalg._promote_arg_dtypes(np.asarray(a), np.asarray(b))
192
193 # lax_linalg.triangular_solve only supports matrix 'b's at the moment.
194 b_is_vector = np.ndim(a) == np.ndim(b) + 1
195 if b_is_vector:
196 b = b[..., None]
197 out = lax_linalg.triangular_solve(a, b, left_side=True, lower=lower,
198 transpose_a=transpose_a,
199 conjugate_a=conjugate_a,
200 unit_diagonal=unit_diagonal)
201 if b_is_vector:
202 return out[..., 0]
203 else:
204 return out
205
206
207 @_wraps(scipy.linalg.tril)
208 def tril(m, k=0):
209 return np.tril(m, k)
210
211
212 @_wraps(scipy.linalg.triu)
213 def triu(m, k=0):
214 return np.triu(m, k)
215
[end of jax/scipy/linalg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
google/jax
|
acda3f398bc2930e6f4b9da68a973cced9478c4c
|
TypeError when taking inverse
In this case, taking the inverse in jax.numpy throws the error 'No abstraction handler for type: <class 'jax.numpy.lax_numpy.ndarray'>', while doing the same thing in numpy does not.
```
import jax.numpy as np
import numpy.random as random
import matplotlib.pyplot as plt

class KalmanFilter():
    def __init__(self):
        self.initialized = False

    def to_ndarray(self, x):
        if(type(x) is not np.ndarray):
            x_2D = np.ndarray((1, 1))
            x_2D[0, 0] = x
        else:
            x_2D = x
        return x_2D

    def initialize(self, x, A, B, H, P, Q, R):
        self.initialized = True
        x, A, B, H, P, Q, R = self.to_ndarray(x), self.to_ndarray(A), self.to_ndarray(B), self.to_ndarray(H), self.to_ndarray(P), self.to_ndarray(Q), self.to_ndarray(R)
        self.x, self.A, self.B, self.H, self.P, self.Q, self.R = x, A, B, H, P, Q, R
        self.K = np.ndarray(A.shape)

    def step(self, u, z, n = 1):
        u, z = self.to_ndarray(u), self.to_ndarray(z)
        for i in range(n):
            self.x = self.A @ self.x + self.B @ u
            self.P = self.A @ self.P @ self.A.T + self.Q
            self.K = self.P @ self.H.T @ np.linalg.inv(self.H @ self.P @ self.H.T + self.R)
            self.x = self.x + self.K @ (z - self.H @ self.x)
            self.P = self.P - self.K @ self.H @ self.P
        if(type(z) is float):
            return float(self.x)
        else:
            return self.x

    def predict(self, u, z, n = 1):
        u, z = self.to_ndarray(u), self.to_ndarray(z)
        for i in range(n):
            x_temp = self.A @ self.x + self.B @ u
            P_temp = self.A @ self.P @ self.A.T + self.Q
            K_temp = P_temp @ self.H.T @ np.linalg.inv(self.H @ P_temp @ self.H.T + self.R)
            x_temp = x_temp + K_temp @ (z - self.H @ x_temp)
        if(type(z) is not np.ndarray):
            return float(x_temp)
        else:
            return x_temp

def test_kalman_filter(steps=100, show_plot=True):
    T = steps
    x_true = 0.5
    env_noise = 0.1
    x0 = 0
    model = KalmanFilter()
    model.initialize(x0, 1, 0, 1, 1, 0, env_noise)
    loss = lambda x_true, x_pred: (x_true - x_pred)**2
    results = []
    for i in range(T):
        z = x_true + float(random.normal(0, env_noise, 1))
        x_pred = model.step(0, z)
        cur_loss = float(loss(x_true, x_pred))
        results.append(cur_loss)
    if show_plot:
        plt.plot(results)
        plt.title("KalmanFilter model on constant signal")
        plt.show(block=False)
        plt.pause(1)
        plt.close()
    print("test_kalman_filter passed")
    return

if __name__=="__main__":
    test_kalman_filter()
```
|
2019-07-01T18:57:15Z
|
<patch>
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -82,7 +82,10 @@ def __instancecheck__(self, instance):
# pylint: disable=invalid-name
class ndarray(six.with_metaclass(_ArrayMeta, onp.ndarray)):
- pass
+ def __init__(shape, dtype=None, buffer=None, offset=0, strides=None,
+ order=None):
+ raise TypeError("jax.numpy.ndarray() should not be instantiated explicitly."
+ " Use jax.numpy.array, or jax.numpy.zeros instead.")
# pylint: enable=invalid-name
</patch>
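With the patch above, constructing `jax.numpy.ndarray` directly fails at construction time with a `TypeError` that points users at the supported constructors. A rough usage sketch of the intended behaviour (illustrative only; the exact error wording is the one in the diff):

```python
import jax.numpy as np

try:
    buf = np.ndarray((1, 1))   # after the patch this raises TypeError
except TypeError:
    buf = np.zeros((1, 1))     # one of the constructors the message suggests
print(buf)
```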
|
[]
|
[]
| ||||
docker__compose-1356
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Specify project and service with container labels
Labels are likely going to land in Docker 1.6: https://github.com/docker/docker/pull/9882
If we used these to specify what project and service a container is part of, a whole load of stuff becomes a lot simpler. We don't have to do gross things like `project, service, n = container_name.split("_", 3)`.
Questions:
- Do we still want to give containers names? It's useful to be able to reference containers like that (`docker exec project_web_1`) and we want something useful to show up in `docker ps`
- Do we still want to give containers integer references of some kind? It's pretty handy to be able to have a human readable reference to a container
</issue>
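For a sense of what label-based lookup could replace the name parsing with, here is a rough sketch against docker-py; the label keys and the filter usage are assumptions for illustration, not a settled design:

```python
from docker import Client  # docker-py

client = Client()

def containers_for(project, service=None, stopped=False):
    # Hypothetical label keys; the real naming scheme would be chosen by Compose.
    labels = ['com.docker.compose.project={0}'.format(project)]
    if service is not None:
        labels.append('com.docker.compose.service={0}'.format(service))
    return client.containers(all=stopped, filters={'label': labels})
```

Whether containers should also keep human-readable names and integer suffixes, as the two questions above ask, is orthogonal to the lookup itself.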
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 *(Previously known as Fig)*
4
5 Compose is a tool for defining and running complex applications with Docker.
6 With Compose, you define a multi-container application in a single file, then
7 spin your application up in a single command which does everything that needs to
8 be done to get it running.
9
10 Compose is great for development environments, staging servers, and CI. We don't
11 recommend that you use it in production yet.
12
13 Using Compose is basically a three-step process.
14
15 1. Define your app's environment with a `Dockerfile` so it can be
16 reproduced anywhere.
17 2. Define the services that make up your app in `docker-compose.yml` so
18 they can be run together in an isolated environment:
19 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
20
21 A `docker-compose.yml` looks like this:
22
23 web:
24 build: .
25 ports:
26 - "5000:5000"
27 volumes:
28 - .:/code
29 links:
30 - redis
31 redis:
32 image: redis
33
34 Compose has commands for managing the whole lifecycle of your application:
35
36 * Start, stop and rebuild services
37 * View the status of running services
38 * Stream the log output of running services
39 * Run a one-off command on a service
40
41 Installation and documentation
42 ------------------------------
43
44 - Full documentation is available on [Docker's website](http://docs.docker.com/compose/).
45 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
46
47 Contributing
48 ------------
49
50 [](http://jenkins.dockerproject.com/job/Compose%20Master/)
51
52 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
53
54
[end of README.md]
[start of compose/__init__.py]
1 from __future__ import unicode_literals
2 from .service import Service # noqa:flake8
3
4 __version__ = '1.3.0dev'
5
[end of compose/__init__.py]
[start of compose/cli/command.py]
1 from __future__ import unicode_literals
2 from __future__ import absolute_import
3 from requests.exceptions import ConnectionError, SSLError
4 import logging
5 import os
6 import re
7 import six
8
9 from .. import config
10 from ..project import Project
11 from ..service import ConfigError
12 from .docopt_command import DocoptCommand
13 from .utils import call_silently, is_mac, is_ubuntu, find_candidates_in_parent_dirs
14 from .docker_client import docker_client
15 from . import verbose_proxy
16 from . import errors
17 from .. import __version__
18
19 log = logging.getLogger(__name__)
20
21 SUPPORTED_FILENAMES = [
22 'docker-compose.yml',
23 'docker-compose.yaml',
24 'fig.yml',
25 'fig.yaml',
26 ]
27
28
29 class Command(DocoptCommand):
30 base_dir = '.'
31
32 def dispatch(self, *args, **kwargs):
33 try:
34 super(Command, self).dispatch(*args, **kwargs)
35 except SSLError as e:
36 raise errors.UserError('SSL error: %s' % e)
37 except ConnectionError:
38 if call_silently(['which', 'docker']) != 0:
39 if is_mac():
40 raise errors.DockerNotFoundMac()
41 elif is_ubuntu():
42 raise errors.DockerNotFoundUbuntu()
43 else:
44 raise errors.DockerNotFoundGeneric()
45 elif call_silently(['which', 'boot2docker']) == 0:
46 raise errors.ConnectionErrorBoot2Docker()
47 else:
48 raise errors.ConnectionErrorGeneric(self.get_client().base_url)
49
50 def perform_command(self, options, handler, command_options):
51 if options['COMMAND'] == 'help':
52 # Skip looking up the compose file.
53 handler(None, command_options)
54 return
55
56 if 'FIG_FILE' in os.environ:
57 log.warn('The FIG_FILE environment variable is deprecated.')
58 log.warn('Please use COMPOSE_FILE instead.')
59
60 explicit_config_path = options.get('--file') or os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE')
61 project = self.get_project(
62 self.get_config_path(explicit_config_path),
63 project_name=options.get('--project-name'),
64 verbose=options.get('--verbose'))
65
66 handler(project, command_options)
67
68 def get_client(self, verbose=False):
69 client = docker_client()
70 if verbose:
71 version_info = six.iteritems(client.version())
72 log.info("Compose version %s", __version__)
73 log.info("Docker base_url: %s", client.base_url)
74 log.info("Docker version: %s",
75 ", ".join("%s=%s" % item for item in version_info))
76 return verbose_proxy.VerboseProxy('docker', client)
77 return client
78
79 def get_project(self, config_path, project_name=None, verbose=False):
80 try:
81 return Project.from_dicts(
82 self.get_project_name(config_path, project_name),
83 config.load(config_path),
84 self.get_client(verbose=verbose))
85 except ConfigError as e:
86 raise errors.UserError(six.text_type(e))
87
88 def get_project_name(self, config_path, project_name=None):
89 def normalize_name(name):
90 return re.sub(r'[^a-z0-9]', '', name.lower())
91
92 if 'FIG_PROJECT_NAME' in os.environ:
93 log.warn('The FIG_PROJECT_NAME environment variable is deprecated.')
94 log.warn('Please use COMPOSE_PROJECT_NAME instead.')
95
96 project_name = project_name or os.environ.get('COMPOSE_PROJECT_NAME') or os.environ.get('FIG_PROJECT_NAME')
97 if project_name is not None:
98 return normalize_name(project_name)
99
100 project = os.path.basename(os.path.dirname(os.path.abspath(config_path)))
101 if project:
102 return normalize_name(project)
103
104 return 'default'
105
106 def get_config_path(self, file_path=None):
107 if file_path:
108 return os.path.join(self.base_dir, file_path)
109
110 (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, self.base_dir)
111
112 if len(candidates) == 0:
113 raise errors.ComposeFileNotFound(SUPPORTED_FILENAMES)
114
115 winner = candidates[0]
116
117 if len(candidates) > 1:
118 log.warning("Found multiple config files with supported names: %s", ", ".join(candidates))
119 log.warning("Using %s\n", winner)
120
121 if winner == 'docker-compose.yaml':
122 log.warning("Please be aware that .yml is the expected extension "
123 "in most cases, and using .yaml can cause compatibility "
124 "issues in future.\n")
125
126 if winner.startswith("fig."):
127 log.warning("%s is deprecated and will not be supported in future. "
128 "Please rename your config file to docker-compose.yml\n" % winner)
129
130 return os.path.join(path, winner)
131
[end of compose/cli/command.py]
[start of compose/cli/log_printer.py]
1 from __future__ import unicode_literals
2 from __future__ import absolute_import
3 import sys
4
5 from itertools import cycle
6
7 from .multiplexer import Multiplexer, STOP
8 from . import colors
9 from .utils import split_buffer
10
11
12 class LogPrinter(object):
13 def __init__(self, containers, attach_params=None, output=sys.stdout, monochrome=False):
14 self.containers = containers
15 self.attach_params = attach_params or {}
16 self.prefix_width = self._calculate_prefix_width(containers)
17 self.generators = self._make_log_generators(monochrome)
18 self.output = output
19
20 def run(self):
21 mux = Multiplexer(self.generators)
22 for line in mux.loop():
23 self.output.write(line)
24
25 def _calculate_prefix_width(self, containers):
26 """
27 Calculate the maximum width of container names so we can make the log
28 prefixes line up like so:
29
30 db_1 | Listening
31 web_1 | Listening
32 """
33 prefix_width = 0
34 for container in containers:
35 prefix_width = max(prefix_width, len(container.name_without_project))
36 return prefix_width
37
38 def _make_log_generators(self, monochrome):
39 color_fns = cycle(colors.rainbow())
40 generators = []
41
42 def no_color(text):
43 return text
44
45 for container in self.containers:
46 if monochrome:
47 color_fn = no_color
48 else:
49 color_fn = next(color_fns)
50 generators.append(self._make_log_generator(container, color_fn))
51
52 return generators
53
54 def _make_log_generator(self, container, color_fn):
55 prefix = color_fn(self._generate_prefix(container)).encode('utf-8')
56 # Attach to container before log printer starts running
57 line_generator = split_buffer(self._attach(container), '\n')
58
59 for line in line_generator:
60 yield prefix + line
61
62 exit_code = container.wait()
63 yield color_fn("%s exited with code %s\n" % (container.name, exit_code))
64 yield STOP
65
66 def _generate_prefix(self, container):
67 """
68 Generate the prefix for a log line without colour
69 """
70 name = container.name_without_project
71 padding = ' ' * (self.prefix_width - len(name))
72 return ''.join([name, padding, ' | '])
73
74 def _attach(self, container):
75 params = {
76 'stdout': True,
77 'stderr': True,
78 'stream': True,
79 }
80 params.update(self.attach_params)
81 params = dict((name, 1 if value else 0) for (name, value) in list(params.items()))
82 return container.attach(**params)
83
[end of compose/cli/log_printer.py]
[start of compose/cli/main.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3 from inspect import getdoc
4 from operator import attrgetter
5 import logging
6 import re
7 import signal
8 import sys
9
10 from docker.errors import APIError
11 import dockerpty
12
13 from .. import __version__
14 from ..project import NoSuchService, ConfigurationError
15 from ..service import BuildError, CannotBeScaledError
16 from ..config import parse_environment
17 from .command import Command
18 from .docopt_command import NoSuchCommand
19 from .errors import UserError
20 from .formatter import Formatter
21 from .log_printer import LogPrinter
22 from .utils import yesno
23
24 log = logging.getLogger(__name__)
25
26
27 def main():
28 setup_logging()
29 try:
30 command = TopLevelCommand()
31 command.sys_dispatch()
32 except KeyboardInterrupt:
33 log.error("\nAborting.")
34 sys.exit(1)
35 except (UserError, NoSuchService, ConfigurationError) as e:
36 log.error(e.msg)
37 sys.exit(1)
38 except NoSuchCommand as e:
39 log.error("No such command: %s", e.command)
40 log.error("")
41 log.error("\n".join(parse_doc_section("commands:", getdoc(e.supercommand))))
42 sys.exit(1)
43 except APIError as e:
44 log.error(e.explanation)
45 sys.exit(1)
46 except BuildError as e:
47 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
48 sys.exit(1)
49
50
51 def setup_logging():
52 console_handler = logging.StreamHandler(sys.stderr)
53 console_handler.setFormatter(logging.Formatter())
54 console_handler.setLevel(logging.INFO)
55 root_logger = logging.getLogger()
56 root_logger.addHandler(console_handler)
57 root_logger.setLevel(logging.DEBUG)
58
59 # Disable requests logging
60 logging.getLogger("requests").propagate = False
61
62
63 # stolen from docopt master
64 def parse_doc_section(name, source):
65 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
66 re.IGNORECASE | re.MULTILINE)
67 return [s.strip() for s in pattern.findall(source)]
68
69
70 class TopLevelCommand(Command):
71 """Fast, isolated development environments using Docker.
72
73 Usage:
74 docker-compose [options] [COMMAND] [ARGS...]
75 docker-compose -h|--help
76
77 Options:
78 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
79 -p, --project-name NAME Specify an alternate project name (default: directory name)
80 --verbose Show more output
81 -v, --version Print version and exit
82
83 Commands:
84 build Build or rebuild services
85 help Get help on a command
86 kill Kill containers
87 logs View output from containers
88 port Print the public port for a port binding
89 ps List containers
90 pull Pulls service images
91 restart Restart services
92 rm Remove stopped containers
93 run Run a one-off command
94 scale Set number of containers for a service
95 start Start services
96 stop Stop services
97 up Create and start containers
98
99 """
100 def docopt_options(self):
101 options = super(TopLevelCommand, self).docopt_options()
102 options['version'] = "docker-compose %s" % __version__
103 return options
104
105 def build(self, project, options):
106 """
107 Build or rebuild services.
108
109 Services are built once and then tagged as `project_service`,
110 e.g. `composetest_db`. If you change a service's `Dockerfile` or the
111 contents of its build directory, you can run `docker-compose build` to rebuild it.
112
113 Usage: build [options] [SERVICE...]
114
115 Options:
116 --no-cache Do not use cache when building the image.
117 """
118 no_cache = bool(options.get('--no-cache', False))
119 project.build(service_names=options['SERVICE'], no_cache=no_cache)
120
121 def help(self, project, options):
122 """
123 Get help on a command.
124
125 Usage: help COMMAND
126 """
127 command = options['COMMAND']
128 if not hasattr(self, command):
129 raise NoSuchCommand(command, self)
130 raise SystemExit(getdoc(getattr(self, command)))
131
132 def kill(self, project, options):
133 """
134 Force stop service containers.
135
136 Usage: kill [options] [SERVICE...]
137
138 Options:
139 -s SIGNAL SIGNAL to send to the container.
140 Default signal is SIGKILL.
141 """
142 signal = options.get('-s', 'SIGKILL')
143
144 project.kill(service_names=options['SERVICE'], signal=signal)
145
146 def logs(self, project, options):
147 """
148 View output from containers.
149
150 Usage: logs [options] [SERVICE...]
151
152 Options:
153 --no-color Produce monochrome output.
154 """
155 containers = project.containers(service_names=options['SERVICE'], stopped=True)
156
157 monochrome = options['--no-color']
158 print("Attaching to", list_containers(containers))
159 LogPrinter(containers, attach_params={'logs': True}, monochrome=monochrome).run()
160
161 def port(self, project, options):
162 """
163 Print the public port for a port binding.
164
165 Usage: port [options] SERVICE PRIVATE_PORT
166
167 Options:
168 --protocol=proto tcp or udp (defaults to tcp)
169 --index=index index of the container if there are multiple
170 instances of a service (defaults to 1)
171 """
172 service = project.get_service(options['SERVICE'])
173 try:
174 container = service.get_container(number=options.get('--index') or 1)
175 except ValueError as e:
176 raise UserError(str(e))
177 print(container.get_local_port(
178 options['PRIVATE_PORT'],
179 protocol=options.get('--protocol') or 'tcp') or '')
180
181 def ps(self, project, options):
182 """
183 List containers.
184
185 Usage: ps [options] [SERVICE...]
186
187 Options:
188 -q Only display IDs
189 """
190 containers = sorted(
191 project.containers(service_names=options['SERVICE'], stopped=True) +
192 project.containers(service_names=options['SERVICE'], one_off=True),
193 key=attrgetter('name'))
194
195 if options['-q']:
196 for container in containers:
197 print(container.id)
198 else:
199 headers = [
200 'Name',
201 'Command',
202 'State',
203 'Ports',
204 ]
205 rows = []
206 for container in containers:
207 command = container.human_readable_command
208 if len(command) > 30:
209 command = '%s ...' % command[:26]
210 rows.append([
211 container.name,
212 command,
213 container.human_readable_state,
214 container.human_readable_ports,
215 ])
216 print(Formatter().table(headers, rows))
217
218 def pull(self, project, options):
219 """
220 Pulls images for services.
221
222 Usage: pull [options] [SERVICE...]
223
224 Options:
225 --allow-insecure-ssl Allow insecure connections to the docker
226 registry
227 """
228 insecure_registry = options['--allow-insecure-ssl']
229 project.pull(
230 service_names=options['SERVICE'],
231 insecure_registry=insecure_registry
232 )
233
234 def rm(self, project, options):
235 """
236 Remove stopped service containers.
237
238 Usage: rm [options] [SERVICE...]
239
240 Options:
241 -f, --force Don't ask to confirm removal
242 -v Remove volumes associated with containers
243 """
244 all_containers = project.containers(service_names=options['SERVICE'], stopped=True)
245 stopped_containers = [c for c in all_containers if not c.is_running]
246
247 if len(stopped_containers) > 0:
248 print("Going to remove", list_containers(stopped_containers))
249 if options.get('--force') \
250 or yesno("Are you sure? [yN] ", default=False):
251 project.remove_stopped(
252 service_names=options['SERVICE'],
253 v=options.get('-v', False)
254 )
255 else:
256 print("No stopped containers")
257
258 def run(self, project, options):
259 """
260 Run a one-off command on a service.
261
262 For example:
263
264 $ docker-compose run web python manage.py shell
265
266 By default, linked services will be started, unless they are already
267 running. If you do not want to start linked services, use
268 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
269
270 Usage: run [options] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
271
272 Options:
273 --allow-insecure-ssl Allow insecure connections to the docker
274 registry
275 -d Detached mode: Run container in the background, print
276 new container name.
277 --entrypoint CMD Override the entrypoint of the image.
278 -e KEY=VAL Set an environment variable (can be used multiple times)
279 -u, --user="" Run as specified username or uid
280 --no-deps Don't start linked services.
281 --rm Remove container after run. Ignored in detached mode.
282 --service-ports Run command with the service's ports enabled and mapped
283 to the host.
284 -T Disable pseudo-tty allocation. By default `docker-compose run`
285 allocates a TTY.
286 """
287 service = project.get_service(options['SERVICE'])
288
289 insecure_registry = options['--allow-insecure-ssl']
290
291 if not options['--no-deps']:
292 deps = service.get_linked_names()
293
294 if len(deps) > 0:
295 project.up(
296 service_names=deps,
297 start_deps=True,
298 recreate=False,
299 insecure_registry=insecure_registry,
300 detach=options['-d']
301 )
302
303 tty = True
304 if options['-d'] or options['-T'] or not sys.stdin.isatty():
305 tty = False
306
307 if options['COMMAND']:
308 command = [options['COMMAND']] + options['ARGS']
309 else:
310 command = service.options.get('command')
311
312 container_options = {
313 'command': command,
314 'tty': tty,
315 'stdin_open': not options['-d'],
316 'detach': options['-d'],
317 }
318
319 if options['-e']:
320 container_options['environment'] = parse_environment(options['-e'])
321
322 if options['--entrypoint']:
323 container_options['entrypoint'] = options.get('--entrypoint')
324
325 if options['--rm']:
326 container_options['restart'] = None
327
328 if options['--user']:
329 container_options['user'] = options.get('--user')
330
331 if not options['--service-ports']:
332 container_options['ports'] = []
333
334 container = service.create_container(
335 one_off=True,
336 insecure_registry=insecure_registry,
337 **container_options
338 )
339
340 if options['-d']:
341 service.start_container(container)
342 print(container.name)
343 else:
344 service.start_container(container)
345 dockerpty.start(project.client, container.id, interactive=not options['-T'])
346 exit_code = container.wait()
347 if options['--rm']:
348 log.info("Removing %s..." % container.name)
349 project.client.remove_container(container.id)
350 sys.exit(exit_code)
351
352 def scale(self, project, options):
353 """
354 Set number of containers to run for a service.
355
356 Numbers are specified in the form `service=num` as arguments.
357 For example:
358
359 $ docker-compose scale web=2 worker=3
360
361 Usage: scale [SERVICE=NUM...]
362 """
363 for s in options['SERVICE=NUM']:
364 if '=' not in s:
365 raise UserError('Arguments to scale should be in the form service=num')
366 service_name, num = s.split('=', 1)
367 try:
368 num = int(num)
369 except ValueError:
370 raise UserError('Number of containers for service "%s" is not a '
371 'number' % service_name)
372 try:
373 project.get_service(service_name).scale(num)
374 except CannotBeScaledError:
375 raise UserError(
376 'Service "%s" cannot be scaled because it specifies a port '
377 'on the host. If multiple containers for this service were '
378 'created, the port would clash.\n\nRemove the ":" from the '
379 'port definition in docker-compose.yml so Docker can choose a random '
380 'port for each container.' % service_name)
381
382 def start(self, project, options):
383 """
384 Start existing containers.
385
386 Usage: start [SERVICE...]
387 """
388 project.start(service_names=options['SERVICE'])
389
390 def stop(self, project, options):
391 """
392 Stop running containers without removing them.
393
394 They can be started again with `docker-compose start`.
395
396 Usage: stop [options] [SERVICE...]
397
398 Options:
399 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
400 (default: 10)
401 """
402 timeout = options.get('--timeout')
403 params = {} if timeout is None else {'timeout': int(timeout)}
404 project.stop(service_names=options['SERVICE'], **params)
405
406 def restart(self, project, options):
407 """
408 Restart running containers.
409
410 Usage: restart [options] [SERVICE...]
411
412 Options:
413 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
414 (default: 10)
415 """
416 timeout = options.get('--timeout')
417 params = {} if timeout is None else {'timeout': int(timeout)}
418 project.restart(service_names=options['SERVICE'], **params)
419
420 def up(self, project, options):
421 """
422 Build, (re)create, start and attach to containers for a service.
423
424 By default, `docker-compose up` will aggregate the output of each container, and
425 when it exits, all containers will be stopped. If you run `docker-compose up -d`,
426 it'll start the containers in the background and leave them running.
427
428 If there are existing containers for a service, `docker-compose up` will stop
429 and recreate them (preserving mounted volumes with volumes-from),
430 so that changes in `docker-compose.yml` are picked up. If you do not want existing
431 containers to be recreated, `docker-compose up --no-recreate` will re-use existing
432 containers.
433
434 Usage: up [options] [SERVICE...]
435
436 Options:
437 --allow-insecure-ssl Allow insecure connections to the docker
438 registry
439 -d Detached mode: Run containers in the background,
440 print new container names.
441 --no-color Produce monochrome output.
442 --no-deps Don't start linked services.
443 --no-recreate If containers already exist, don't recreate them.
444 --no-build Don't build an image, even if it's missing
445 -t, --timeout TIMEOUT When attached, use this timeout in seconds
446 for the shutdown. (default: 10)
447
448 """
449 insecure_registry = options['--allow-insecure-ssl']
450 detached = options['-d']
451
452 monochrome = options['--no-color']
453
454 start_deps = not options['--no-deps']
455 recreate = not options['--no-recreate']
456 service_names = options['SERVICE']
457
458 project.up(
459 service_names=service_names,
460 start_deps=start_deps,
461 recreate=recreate,
462 insecure_registry=insecure_registry,
463 detach=detached,
464 do_build=not options['--no-build'],
465 )
466
467 to_attach = [c for s in project.get_services(service_names) for c in s.containers()]
468
469 if not detached:
470 print("Attaching to", list_containers(to_attach))
471 log_printer = LogPrinter(to_attach, attach_params={"logs": True}, monochrome=monochrome)
472
473 try:
474 log_printer.run()
475 finally:
476 def handler(signal, frame):
477 project.kill(service_names=service_names)
478 sys.exit(0)
479 signal.signal(signal.SIGINT, handler)
480
481 print("Gracefully stopping... (press Ctrl+C again to force)")
482 timeout = options.get('--timeout')
483 params = {} if timeout is None else {'timeout': int(timeout)}
484 project.stop(service_names=service_names, **params)
485
486
487 def list_containers(containers):
488 return ", ".join(c.name for c in containers)
489
[end of compose/cli/main.py]
[start of compose/service.py]
1 from __future__ import unicode_literals
2 from __future__ import absolute_import
3 from collections import namedtuple
4 import logging
5 import re
6 import sys
7 from operator import attrgetter
8
9 import six
10 from docker.errors import APIError
11 from docker.utils import create_host_config, LogConfig
12
13 from .config import DOCKER_CONFIG_KEYS, merge_environment
14 from .container import Container, get_container_name
15 from .progress_stream import stream_output, StreamOutputError
16
17 log = logging.getLogger(__name__)
18
19
20 DOCKER_START_KEYS = [
21 'cap_add',
22 'cap_drop',
23 'devices',
24 'dns',
25 'dns_search',
26 'env_file',
27 'extra_hosts',
28 'read_only',
29 'net',
30 'log_driver',
31 'pid',
32 'privileged',
33 'restart',
34 'volumes_from',
35 ]
36
37 VALID_NAME_CHARS = '[a-zA-Z0-9]'
38
39
40 class BuildError(Exception):
41 def __init__(self, service, reason):
42 self.service = service
43 self.reason = reason
44
45
46 class CannotBeScaledError(Exception):
47 pass
48
49
50 class ConfigError(ValueError):
51 pass
52
53
54 VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')
55
56
57 ServiceName = namedtuple('ServiceName', 'project service number')
58
59
60 class Service(object):
61 def __init__(self, name, client=None, project='default', links=None, external_links=None, volumes_from=None, net=None, **options):
62 if not re.match('^%s+$' % VALID_NAME_CHARS, name):
63 raise ConfigError('Invalid service name "%s" - only %s are allowed' % (name, VALID_NAME_CHARS))
64 if not re.match('^%s+$' % VALID_NAME_CHARS, project):
65 raise ConfigError('Invalid project name "%s" - only %s are allowed' % (project, VALID_NAME_CHARS))
66 if 'image' in options and 'build' in options:
67 raise ConfigError('Service %s has both an image and build path specified. A service can either be built to image or use an existing image, not both.' % name)
68 if 'image' not in options and 'build' not in options:
69 raise ConfigError('Service %s has neither an image nor a build path specified. Exactly one must be provided.' % name)
70
71 self.name = name
72 self.client = client
73 self.project = project
74 self.links = links or []
75 self.external_links = external_links or []
76 self.volumes_from = volumes_from or []
77 self.net = net or None
78 self.options = options
79
80 def containers(self, stopped=False, one_off=False):
81 return [Container.from_ps(self.client, container)
82 for container in self.client.containers(all=stopped)
83 if self.has_container(container, one_off=one_off)]
84
85 def has_container(self, container, one_off=False):
86 """Return True if `container` was created to fulfill this service."""
87 name = get_container_name(container)
88 if not name or not is_valid_name(name, one_off):
89 return False
90 project, name, _number = parse_name(name)
91 return project == self.project and name == self.name
92
93 def get_container(self, number=1):
94 """Return a :class:`compose.container.Container` for this service. The
95 container must be active, and match `number`.
96 """
97 for container in self.client.containers():
98 if not self.has_container(container):
99 continue
100 _, _, container_number = parse_name(get_container_name(container))
101 if container_number == number:
102 return Container.from_ps(self.client, container)
103
104 raise ValueError("No container found for %s_%s" % (self.name, number))
105
106 def start(self, **options):
107 for c in self.containers(stopped=True):
108 self.start_container_if_stopped(c, **options)
109
110 def stop(self, **options):
111 for c in self.containers():
112 log.info("Stopping %s..." % c.name)
113 c.stop(**options)
114
115 def kill(self, **options):
116 for c in self.containers():
117 log.info("Killing %s..." % c.name)
118 c.kill(**options)
119
120 def restart(self, **options):
121 for c in self.containers():
122 log.info("Restarting %s..." % c.name)
123 c.restart(**options)
124
125 def scale(self, desired_num):
126 """
127 Adjusts the number of containers to the specified number and ensures
128 they are running.
129
130 - creates containers until there are at least `desired_num`
131 - stops containers until there are at most `desired_num` running
132 - starts containers until there are at least `desired_num` running
133 - removes all stopped containers
134 """
135 if not self.can_be_scaled():
136 raise CannotBeScaledError()
137
138 # Create enough containers
139 containers = self.containers(stopped=True)
140 while len(containers) < desired_num:
141 log.info("Creating %s..." % self._next_container_name(containers))
142 containers.append(self.create_container(detach=True))
143
144 running_containers = []
145 stopped_containers = []
146 for c in containers:
147 if c.is_running:
148 running_containers.append(c)
149 else:
150 stopped_containers.append(c)
151 running_containers.sort(key=lambda c: c.number)
152 stopped_containers.sort(key=lambda c: c.number)
153
154 # Stop containers
155 while len(running_containers) > desired_num:
156 c = running_containers.pop()
157 log.info("Stopping %s..." % c.name)
158 c.stop(timeout=1)
159 stopped_containers.append(c)
160
161 # Start containers
162 while len(running_containers) < desired_num:
163 c = stopped_containers.pop(0)
164 log.info("Starting %s..." % c.name)
165 self.start_container(c)
166 running_containers.append(c)
167
168 self.remove_stopped()
169
170 def remove_stopped(self, **options):
171 for c in self.containers(stopped=True):
172 if not c.is_running:
173 log.info("Removing %s..." % c.name)
174 c.remove(**options)
175
176 def create_container(self,
177 one_off=False,
178 insecure_registry=False,
179 do_build=True,
180 previous_container=None,
181 **override_options):
182 """
183 Create a container for this service. If the image doesn't exist, attempt to pull
184 it.
185 """
186 container_options = self._get_container_create_options(
187 override_options,
188 one_off=one_off,
189 previous_container=previous_container,
190 )
191
192 if (do_build and
193 self.can_be_built() and
194 not self.client.images(name=self.full_name)):
195 self.build()
196
197 try:
198 return Container.create(self.client, **container_options)
199 except APIError as e:
200 if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation):
201 self.pull(insecure_registry=insecure_registry)
202 return Container.create(self.client, **container_options)
203 raise
204
205 def recreate_containers(self, insecure_registry=False, do_build=True, **override_options):
206 """
207 If a container for this service doesn't exist, create and start one. If there are
208 any, stop them, create+start new ones, and remove the old containers.
209 """
210 containers = self.containers(stopped=True)
211 if not containers:
212 log.info("Creating %s..." % self._next_container_name(containers))
213 container = self.create_container(
214 insecure_registry=insecure_registry,
215 do_build=do_build,
216 **override_options)
217 self.start_container(container)
218 return [container]
219
220 return [
221 self.recreate_container(
222 c,
223 insecure_registry=insecure_registry,
224 **override_options)
225 for c in containers
226 ]
227
228 def recreate_container(self, container, **override_options):
229 """Recreate a container.
230
231 The original container is renamed to a temporary name so that data
232 volumes can be copied to the new container, before the original
233 container is removed.
234 """
235 log.info("Recreating %s..." % container.name)
236 try:
237 container.stop()
238 except APIError as e:
239 if (e.response.status_code == 500
240 and e.explanation
241 and 'no such process' in str(e.explanation)):
242 pass
243 else:
244 raise
245
246 # Use a hopefully unique container name by prepending the short id
247 self.client.rename(
248 container.id,
249 '%s_%s' % (container.short_id, container.name))
250
251 override_options = dict(
252 override_options,
253 environment=merge_environment(
254 override_options.get('environment'),
255 {'affinity:container': '=' + container.id}))
256 new_container = self.create_container(
257 do_build=False,
258 previous_container=container,
259 **override_options)
260 self.start_container(new_container)
261 container.remove()
262 return new_container
263
264 def start_container_if_stopped(self, container):
265 if container.is_running:
266 return container
267 else:
268 log.info("Starting %s..." % container.name)
269 return self.start_container(container)
270
271 def start_container(self, container):
272 container.start()
273 return container
274
275 def start_or_create_containers(
276 self,
277 insecure_registry=False,
278 detach=False,
279 do_build=True):
280 containers = self.containers(stopped=True)
281
282 if not containers:
283 log.info("Creating %s..." % self._next_container_name(containers))
284 new_container = self.create_container(
285 insecure_registry=insecure_registry,
286 detach=detach,
287 do_build=do_build,
288 )
289 return [self.start_container(new_container)]
290 else:
291 return [self.start_container_if_stopped(c) for c in containers]
292
293 def get_linked_names(self):
294 return [s.name for (s, _) in self.links]
295
296 def get_volumes_from_names(self):
297 return [s.name for s in self.volumes_from if isinstance(s, Service)]
298
299 def get_net_name(self):
300 if isinstance(self.net, Service):
301 return self.net.name
302 else:
303 return
304
305 def _next_container_name(self, all_containers, one_off=False):
306 bits = [self.project, self.name]
307 if one_off:
308 bits.append('run')
309 return '_'.join(bits + [str(self._next_container_number(all_containers))])
310
311 def _next_container_number(self, all_containers):
312 numbers = [parse_name(c.name).number for c in all_containers]
313 return 1 if not numbers else max(numbers) + 1
314
315 def _get_links(self, link_to_self):
316 links = []
317 for service, link_name in self.links:
318 for container in service.containers():
319 links.append((container.name, link_name or service.name))
320 links.append((container.name, container.name))
321 links.append((container.name, container.name_without_project))
322 if link_to_self:
323 for container in self.containers():
324 links.append((container.name, self.name))
325 links.append((container.name, container.name))
326 links.append((container.name, container.name_without_project))
327 for external_link in self.external_links:
328 if ':' not in external_link:
329 link_name = external_link
330 else:
331 external_link, link_name = external_link.split(':')
332 links.append((external_link, link_name))
333 return links
334
335 def _get_volumes_from(self):
336 volumes_from = []
337 for volume_source in self.volumes_from:
338 if isinstance(volume_source, Service):
339 containers = volume_source.containers(stopped=True)
340 if not containers:
341 volumes_from.append(volume_source.create_container().id)
342 else:
343 volumes_from.extend(map(attrgetter('id'), containers))
344
345 elif isinstance(volume_source, Container):
346 volumes_from.append(volume_source.id)
347
348 return volumes_from
349
350 def _get_net(self):
351 if not self.net:
352 return "bridge"
353
354 if isinstance(self.net, Service):
355 containers = self.net.containers()
356 if len(containers) > 0:
357 net = 'container:' + containers[0].id
358 else:
359                 log.warning("Warning: Service %s is trying to reuse the network stack "
360                             "of another service that is not running." % (self.net.name))
361 net = None
362 elif isinstance(self.net, Container):
363 net = 'container:' + self.net.id
364 else:
365 net = self.net
366
367 return net
368
369 def _get_container_create_options(
370 self,
371 override_options,
372 one_off=False,
373 previous_container=None):
374 container_options = dict(
375 (k, self.options[k])
376 for k in DOCKER_CONFIG_KEYS if k in self.options)
377 container_options.update(override_options)
378
379 container_options['name'] = self._next_container_name(
380 self.containers(stopped=True, one_off=one_off),
381 one_off)
382
383 # If a qualified hostname was given, split it into an
384 # unqualified hostname and a domainname unless domainname
385 # was also given explicitly. This matches the behavior of
386 # the official Docker CLI in that scenario.
387 if ('hostname' in container_options
388 and 'domainname' not in container_options
389 and '.' in container_options['hostname']):
390 parts = container_options['hostname'].partition('.')
391 container_options['hostname'] = parts[0]
392 container_options['domainname'] = parts[2]
393
394 if 'ports' in container_options or 'expose' in self.options:
395 ports = []
396 all_ports = container_options.get('ports', []) + self.options.get('expose', [])
397 for port in all_ports:
398 port = str(port)
399 if ':' in port:
400 port = port.split(':')[-1]
401 if '/' in port:
402 port = tuple(port.split('/'))
403 ports.append(port)
404 container_options['ports'] = ports
405
406 override_options['binds'] = merge_volume_bindings(
407 container_options.get('volumes') or [],
408 previous_container)
409
410 if 'volumes' in container_options:
411 container_options['volumes'] = dict(
412 (parse_volume_spec(v).internal, {})
413 for v in container_options['volumes'])
414
415 container_options['environment'] = merge_environment(
416 self.options.get('environment'),
417 override_options.get('environment'))
418
419 if self.can_be_built():
420 container_options['image'] = self.full_name
421
422 # Delete options which are only used when starting
423 for key in DOCKER_START_KEYS:
424 container_options.pop(key, None)
425
426 container_options['host_config'] = self._get_container_host_config(
427 override_options,
428 one_off=one_off)
429
430 return container_options
431
432 def _get_container_host_config(self, override_options, one_off=False):
433 options = dict(self.options, **override_options)
434 port_bindings = build_port_bindings(options.get('ports') or [])
435
436 privileged = options.get('privileged', False)
437 cap_add = options.get('cap_add', None)
438 cap_drop = options.get('cap_drop', None)
439 log_config = LogConfig(type=options.get('log_driver', 'json-file'))
440 pid = options.get('pid', None)
441
442 dns = options.get('dns', None)
443 if isinstance(dns, six.string_types):
444 dns = [dns]
445
446 dns_search = options.get('dns_search', None)
447 if isinstance(dns_search, six.string_types):
448 dns_search = [dns_search]
449
450 restart = parse_restart_spec(options.get('restart', None))
451
452 extra_hosts = build_extra_hosts(options.get('extra_hosts', None))
453 read_only = options.get('read_only', None)
454
455 devices = options.get('devices', None)
456
457 return create_host_config(
458 links=self._get_links(link_to_self=one_off),
459 port_bindings=port_bindings,
460 binds=options.get('binds'),
461 volumes_from=self._get_volumes_from(),
462 privileged=privileged,
463 network_mode=self._get_net(),
464 devices=devices,
465 dns=dns,
466 dns_search=dns_search,
467 restart_policy=restart,
468 cap_add=cap_add,
469 cap_drop=cap_drop,
470 log_config=log_config,
471 extra_hosts=extra_hosts,
472 read_only=read_only,
473 pid_mode=pid
474 )
475
476 def build(self, no_cache=False):
477 log.info('Building %s...' % self.name)
478
479 path = six.binary_type(self.options['build'])
480
481 build_output = self.client.build(
482 path=path,
483 tag=self.full_name,
484 stream=True,
485 rm=True,
486 nocache=no_cache,
487 dockerfile=self.options.get('dockerfile', None),
488 )
489
490 try:
491 all_events = stream_output(build_output, sys.stdout)
492 except StreamOutputError as e:
493 raise BuildError(self, unicode(e))
494
495 # Ensure the HTTP connection is not reused for another
496 # streaming command, as the Docker daemon can sometimes
497 # complain about it
498 self.client.close()
499
500 image_id = None
501
502 for event in all_events:
503 if 'stream' in event:
504 match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', ''))
505 if match:
506 image_id = match.group(1)
507
508 if image_id is None:
509 raise BuildError(self, event if all_events else 'Unknown')
510
511 return image_id
512
513 def can_be_built(self):
514 return 'build' in self.options
515
516 @property
517 def full_name(self):
518 """
519 The tag to give to images built for this service.
520 """
521 return '%s_%s' % (self.project, self.name)
522
523 def can_be_scaled(self):
524 for port in self.options.get('ports', []):
525 if ':' in str(port):
526 return False
527 return True
528
529 def pull(self, insecure_registry=False):
530 if 'image' not in self.options:
531 return
532
533 repo, tag = parse_repository_tag(self.options['image'])
534 tag = tag or 'latest'
535 log.info('Pulling %s (%s:%s)...' % (self.name, repo, tag))
536 output = self.client.pull(
537 repo,
538 tag=tag,
539 stream=True,
540 insecure_registry=insecure_registry)
541 stream_output(output, sys.stdout)
542
543
544 def get_container_data_volumes(container, volumes_option):
545 """Find the container data volumes that are in `volumes_option`, and return
546 a mapping of volume bindings for those volumes.
547 """
548 volumes = []
549
550 volumes_option = volumes_option or []
551 container_volumes = container.get('Volumes') or {}
552 image_volumes = container.image_config['ContainerConfig'].get('Volumes') or {}
553
554 for volume in set(volumes_option + image_volumes.keys()):
555 volume = parse_volume_spec(volume)
556 # No need to preserve host volumes
557 if volume.external:
558 continue
559
560 volume_path = container_volumes.get(volume.internal)
561 # New volume, doesn't exist in the old container
562 if not volume_path:
563 continue
564
565 # Copy existing volume from old container
566 volume = volume._replace(external=volume_path)
567 volumes.append(build_volume_binding(volume))
568
569 return dict(volumes)
570
571
572 def merge_volume_bindings(volumes_option, previous_container):
573 """Return a list of volume bindings for a container. Container data volumes
574 are replaced by those from the previous container.
575 """
576 volume_bindings = dict(
577 build_volume_binding(parse_volume_spec(volume))
578 for volume in volumes_option or []
579 if ':' in volume)
580
581 if previous_container:
582 volume_bindings.update(
583 get_container_data_volumes(previous_container, volumes_option))
584
585 return volume_bindings
586
587
588 NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
589
590
591 def is_valid_name(name, one_off=False):
592 match = NAME_RE.match(name)
593 if match is None:
594 return False
595 if one_off:
596 return match.group(3) == 'run_'
597 else:
598 return match.group(3) is None
599
600
601 def parse_name(name):
602 match = NAME_RE.match(name)
603 (project, service_name, _, suffix) = match.groups()
604 return ServiceName(project, service_name, int(suffix))
605
606
607 def parse_restart_spec(restart_config):
608 if not restart_config:
609 return None
610 parts = restart_config.split(':')
611 if len(parts) > 2:
612 raise ConfigError("Restart %s has incorrect format, should be "
613 "mode[:max_retry]" % restart_config)
614 if len(parts) == 2:
615 name, max_retry_count = parts
616 else:
617 name, = parts
618 max_retry_count = 0
619
620 return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
621
622
623 def parse_volume_spec(volume_config):
624 parts = volume_config.split(':')
625 if len(parts) > 3:
626 raise ConfigError("Volume %s has incorrect format, should be "
627 "external:internal[:mode]" % volume_config)
628
629 if len(parts) == 1:
630 return VolumeSpec(None, parts[0], 'rw')
631
632 if len(parts) == 2:
633 parts.append('rw')
634
635 external, internal, mode = parts
636 if mode not in ('rw', 'ro'):
637 raise ConfigError("Volume %s has invalid mode (%s), should be "
638 "one of: rw, ro." % (volume_config, mode))
639
640 return VolumeSpec(external, internal, mode)
641
642
643 def parse_repository_tag(s):
644 if ":" not in s:
645 return s, ""
646 repo, tag = s.rsplit(":", 1)
647 if "/" in tag:
648 return s, ""
649 return repo, tag
650
651
652 def build_volume_binding(volume_spec):
653 internal = {'bind': volume_spec.internal, 'ro': volume_spec.mode == 'ro'}
654 return volume_spec.external, internal
655
656
657 def build_port_bindings(ports):
658 port_bindings = {}
659 for port in ports:
660 internal_port, external = split_port(port)
661 if internal_port in port_bindings:
662 port_bindings[internal_port].append(external)
663 else:
664 port_bindings[internal_port] = [external]
665 return port_bindings
666
667
668 def split_port(port):
669 parts = str(port).split(':')
670 if not 1 <= len(parts) <= 3:
671 raise ConfigError('Invalid port "%s", should be '
672 '[[remote_ip:]remote_port:]port[/protocol]' % port)
673
674 if len(parts) == 1:
675 internal_port, = parts
676 return internal_port, None
677 if len(parts) == 2:
678 external_port, internal_port = parts
679 return internal_port, external_port
680
681 external_ip, external_port, internal_port = parts
682 return internal_port, (external_ip, external_port or None)
683
684
685 def build_extra_hosts(extra_hosts_config):
686 if not extra_hosts_config:
687 return {}
688
689 if isinstance(extra_hosts_config, list):
690 extra_hosts_dict = {}
691 for extra_hosts_line in extra_hosts_config:
692 if not isinstance(extra_hosts_line, six.string_types):
693 raise ConfigError(
694 "extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
695 extra_hosts_config
696 )
697 host, ip = extra_hosts_line.split(':')
698 extra_hosts_dict.update({host.strip(): ip.strip()})
699 extra_hosts_config = extra_hosts_dict
700
701 if isinstance(extra_hosts_config, dict):
702 return extra_hosts_config
703
704 raise ConfigError(
705 "extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
706 extra_hosts_config
707 )
708
[end of compose/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
Repository: docker/compose
Base commit: c8022457ebc94a50ee3acc41e8efa31f5181b839
Problem statement:
Specify project and service with container labels
Labels are likely going to land in Docker 1.6: https://github.com/docker/docker/pull/9882
If we used these to specify what project and service a container is part of, a whole load of stuff becomes a lot simpler. We don't have to do gross things like `project, service, n = container_name.split("_", 3)`.
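
For illustration, a rough sketch of the difference; the label keys and the docker-py calls below are only an example of what this could look like, not a settled design:

```python
from docker import Client  # docker-py

client = Client()  # assumes a local Docker daemon

# Today: recover project/service by parsing the container name (fragile).
project, service, number = "composetest_db_1".split("_", 3)

# With labels: ask Docker directly, no name parsing required.
containers = client.containers(
    all=True,
    filters={"label": [
        "com.docker.compose.project=composetest",
        "com.docker.compose.service=db",
    ]},
)
```
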
Questions:
- Do we still want to give containers names? It's useful to be able to reference containers like that (`docker exec project_web_1`) and we want something useful to show up in `docker ps`
- Do we still want to give containers integer references of some kind? It's pretty handy to be able to have a human readable reference to a container
Issue discussion:
:+1: :)
> Do we still want to give containers names?
Why not? I'd say, keep the convention to name containers that are managed by `compose`. However, there are various feature-requests to be able to override the name, e.g. by allowing a `name: com.my-container.app` in `docker-compose.yml`
> Do we still want to give containers integer references of some kind
I think that's still needed if scaling is possible, i.e. you can use a (custom) name for a container, but a second instance should still have a unique name (suffix). This should also be stored in a label, though, to allow filtering.
> I'd say, keep the convention to name containers that are managed by compose. However, there are various feature-requests to be able to override the name,
+1. The names are still nice to have, but being able to customize them is good too
+1 Names are useful to me.
Please keep the names... :+1:
Happy to inform that support for labels (https://github.com/docker/docker/pull/9882) was merged and will be included in the upcoming 1.6 release of Docker.
Also, I think there are currently a number of related issues in this area (https://github.com/docker/compose/issues/652, https://github.com/docker/compose/issues/869, https://github.com/docker/compose/issues/941 among others); perhaps the discussion should be focussed in _this_ issue (but that's just a suggestion) :)
Ahhwwww yeah.
@bfirsh I want to start working on this; can I start PR stuff?
I'm going to take a first pass at this to use labels instead of the naming convention to identify containers.
@dnephin
Take a look at https://github.com/docker/compose/pull/1269
@aanm right, #1269 is not quite the same thing. This ticket is about using labels internally in compose to specify the project/service/etc that a container belongs to. #1269 is about supporting user-specified labels through the compose config.
I've made sure that my changes are compatible with that branch, I don't think there is any overlap.
@dnephin when implementing, I think it would be good to namespace the labels used by compose internally, to prevent them from conflicting with user-defined labels, e.g. `com.docker.compose.project=myproject`, `com.docker.compose.service=web`, `com.docker.compose.instance=1`. Might have to decide on the namespace used, to be consistent with other projects, like swarm and/or machine if they decide to use namespaces as well.
@thaJeztah https://github.com/docker/compose/compare/master...dnephin:use_labels_instead_of_names
I agree :)
@dnephin :+1: great!
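
To make the namespacing suggestion above concrete, here is a minimal sketch of creating and then re-discovering a labelled container; the exact key names and docker-py calls are assumptions based on this discussion, not the final implementation:

```python
from docker import Client  # docker-py

client = Client()

# Compose-owned metadata lives under a namespaced prefix so it cannot
# clash with user-defined labels.
labels = {
    "com.docker.compose.project": "composetest",
    "com.docker.compose.service": "web",
    "com.docker.compose.container-number": "1",
}

# Attach the labels when the container is created...
container = client.create_container(image="composetest_web", labels=labels)

# ...and later find it again without relying on the naming convention.
matches = client.containers(
    all=True,
    filters={"label": ["com.docker.compose.project=composetest",
                       "com.docker.compose.service=web"]},
)
```
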
Created at: 2015-04-27T01:29:36Z
<patch>
diff --git a/compose/__init__.py b/compose/__init__.py
--- a/compose/__init__.py
+++ b/compose/__init__.py
@@ -1,4 +1,3 @@
from __future__ import unicode_literals
-from .service import Service # noqa:flake8
__version__ = '1.3.0dev'
diff --git a/compose/cli/main.py b/compose/cli/main.py
--- a/compose/cli/main.py
+++ b/compose/cli/main.py
@@ -11,6 +11,7 @@
import dockerpty
from .. import __version__
+from .. import migration
from ..project import NoSuchService, ConfigurationError
from ..service import BuildError, CannotBeScaledError
from ..config import parse_environment
@@ -81,20 +82,21 @@ class TopLevelCommand(Command):
-v, --version Print version and exit
Commands:
- build Build or rebuild services
- help Get help on a command
- kill Kill containers
- logs View output from containers
- port Print the public port for a port binding
- ps List containers
- pull Pulls service images
- restart Restart services
- rm Remove stopped containers
- run Run a one-off command
- scale Set number of containers for a service
- start Start services
- stop Stop services
- up Create and start containers
+ build Build or rebuild services
+ help Get help on a command
+ kill Kill containers
+ logs View output from containers
+ port Print the public port for a port binding
+ ps List containers
+ pull Pulls service images
+ restart Restart services
+ rm Remove stopped containers
+ run Run a one-off command
+ scale Set number of containers for a service
+ start Start services
+ stop Stop services
+ up Create and start containers
+ migrate_to_labels Recreate containers to add labels
"""
def docopt_options(self):
@@ -483,6 +485,9 @@ def handler(signal, frame):
params = {} if timeout is None else {'timeout': int(timeout)}
project.stop(service_names=service_names, **params)
+ def migrate_to_labels(self, project, _options):
+ migration.migrate_project_to_labels(project)
+
def list_containers(containers):
return ", ".join(c.name for c in containers)
diff --git a/compose/const.py b/compose/const.py
new file mode 100644
--- /dev/null
+++ b/compose/const.py
@@ -0,0 +1,6 @@
+
+LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
+LABEL_ONE_OFF = 'com.docker.compose.oneoff'
+LABEL_PROJECT = 'com.docker.compose.project'
+LABEL_SERVICE = 'com.docker.compose.service'
+LABEL_VERSION = 'com.docker.compose.version'
diff --git a/compose/container.py b/compose/container.py
--- a/compose/container.py
+++ b/compose/container.py
@@ -4,6 +4,8 @@
import six
from functools import reduce
+from .const import LABEL_CONTAINER_NUMBER, LABEL_SERVICE
+
class Container(object):
"""
@@ -58,14 +60,15 @@ def name(self):
@property
def name_without_project(self):
- return '_'.join(self.dictionary['Name'].split('_')[1:])
+ return '{0}_{1}'.format(self.labels.get(LABEL_SERVICE), self.number)
@property
def number(self):
- try:
- return int(self.name.split('_')[-1])
- except ValueError:
- return None
+ number = self.labels.get(LABEL_CONTAINER_NUMBER)
+ if not number:
+ raise ValueError("Container {0} does not have a {1} label".format(
+ self.short_id, LABEL_CONTAINER_NUMBER))
+ return int(number)
@property
def ports(self):
@@ -159,6 +162,7 @@ def inspect(self):
self.has_been_inspected = True
return self.dictionary
+ # TODO: only used by tests, move to test module
def links(self):
links = []
for container in self.client.containers():
diff --git a/compose/migration.py b/compose/migration.py
new file mode 100644
--- /dev/null
+++ b/compose/migration.py
@@ -0,0 +1,35 @@
+import logging
+import re
+
+from .container import get_container_name, Container
+
+
+log = logging.getLogger(__name__)
+
+
+# TODO: remove this section when migrate_project_to_labels is removed
+NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
+
+
+def is_valid_name(name):
+ match = NAME_RE.match(name)
+ return match is not None
+
+
+def add_labels(project, container, name):
+ project_name, service_name, one_off, number = NAME_RE.match(name).groups()
+ if project_name != project.name or service_name not in project.service_names:
+ return
+ service = project.get_service(service_name)
+ service.recreate_container(container)
+
+
+def migrate_project_to_labels(project):
+ log.info("Running migration to labels for project %s", project.name)
+
+ client = project.client
+ for container in client.containers(all=True):
+ name = get_container_name(container)
+ if not is_valid_name(name):
+ continue
+ add_labels(project, Container.from_ps(client, container), name)
diff --git a/compose/project.py b/compose/project.py
--- a/compose/project.py
+++ b/compose/project.py
@@ -1,12 +1,14 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import logging
-
from functools import reduce
+
+from docker.errors import APIError
+
from .config import get_service_name_from_net, ConfigurationError
-from .service import Service
+from .const import LABEL_PROJECT, LABEL_ONE_OFF
+from .service import Service, check_for_legacy_containers
from .container import Container
-from docker.errors import APIError
log = logging.getLogger(__name__)
@@ -60,6 +62,12 @@ def __init__(self, name, services, client):
self.services = services
self.client = client
+ def labels(self, one_off=False):
+ return [
+ '{0}={1}'.format(LABEL_PROJECT, self.name),
+ '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"),
+ ]
+
@classmethod
def from_dicts(cls, name, service_dicts, client):
"""
@@ -75,6 +83,10 @@ def from_dicts(cls, name, service_dicts, client):
volumes_from=volumes_from, **service_dict))
return project
+ @property
+ def service_names(self):
+ return [service.name for service in self.services]
+
def get_service(self, name):
"""
Retrieve a service by name. Raises NoSuchService
@@ -102,7 +114,7 @@ def get_services(self, service_names=None, include_deps=False):
"""
if service_names is None or len(service_names) == 0:
return self.get_services(
- service_names=[s.name for s in self.services],
+ service_names=self.service_names,
include_deps=include_deps
)
else:
@@ -223,10 +235,21 @@ def remove_stopped(self, service_names=None, **options):
service.remove_stopped(**options)
def containers(self, service_names=None, stopped=False, one_off=False):
- return [Container.from_ps(self.client, container)
- for container in self.client.containers(all=stopped)
- for service in self.get_services(service_names)
- if service.has_container(container, one_off=one_off)]
+ containers = [
+ Container.from_ps(self.client, container)
+ for container in self.client.containers(
+ all=stopped,
+ filters={'label': self.labels(one_off=one_off)})]
+
+ if not containers:
+ check_for_legacy_containers(
+ self.client,
+ self.name,
+ self.service_names,
+ stopped=stopped,
+ one_off=one_off)
+
+ return containers
def _inject_deps(self, acc, service):
net_name = service.get_net_name()
diff --git a/compose/service.py b/compose/service.py
--- a/compose/service.py
+++ b/compose/service.py
@@ -10,7 +10,15 @@
from docker.errors import APIError
from docker.utils import create_host_config, LogConfig
+from . import __version__
from .config import DOCKER_CONFIG_KEYS, merge_environment
+from .const import (
+ LABEL_CONTAINER_NUMBER,
+ LABEL_ONE_OFF,
+ LABEL_PROJECT,
+ LABEL_SERVICE,
+ LABEL_VERSION,
+)
from .container import Container, get_container_name
from .progress_stream import stream_output, StreamOutputError
@@ -78,28 +86,29 @@ def __init__(self, name, client=None, project='default', links=None, external_li
self.options = options
def containers(self, stopped=False, one_off=False):
- return [Container.from_ps(self.client, container)
- for container in self.client.containers(all=stopped)
- if self.has_container(container, one_off=one_off)]
+ containers = [
+ Container.from_ps(self.client, container)
+ for container in self.client.containers(
+ all=stopped,
+ filters={'label': self.labels(one_off=one_off)})]
- def has_container(self, container, one_off=False):
- """Return True if `container` was created to fulfill this service."""
- name = get_container_name(container)
- if not name or not is_valid_name(name, one_off):
- return False
- project, name, _number = parse_name(name)
- return project == self.project and name == self.name
+ if not containers:
+ check_for_legacy_containers(
+ self.client,
+ self.project,
+ [self.name],
+ stopped=stopped,
+ one_off=one_off)
+
+ return containers
def get_container(self, number=1):
"""Return a :class:`compose.container.Container` for this service. The
container must be active, and match `number`.
"""
- for container in self.client.containers():
- if not self.has_container(container):
- continue
- _, _, container_number = parse_name(get_container_name(container))
- if container_number == number:
- return Container.from_ps(self.client, container)
+ labels = self.labels() + ['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]
+ for container in self.client.containers(filters={'label': labels}):
+ return Container.from_ps(self.client, container)
raise ValueError("No container found for %s_%s" % (self.name, number))
@@ -138,7 +147,6 @@ def scale(self, desired_num):
# Create enough containers
containers = self.containers(stopped=True)
while len(containers) < desired_num:
- log.info("Creating %s..." % self._next_container_name(containers))
containers.append(self.create_container(detach=True))
running_containers = []
@@ -178,6 +186,7 @@ def create_container(self,
insecure_registry=False,
do_build=True,
previous_container=None,
+ number=None,
**override_options):
"""
Create a container for this service. If the image doesn't exist, attempt to pull
@@ -185,6 +194,7 @@ def create_container(self,
"""
container_options = self._get_container_create_options(
override_options,
+ number or self._next_container_number(one_off=one_off),
one_off=one_off,
previous_container=previous_container,
)
@@ -209,7 +219,6 @@ def recreate_containers(self, insecure_registry=False, do_build=True, **override
"""
containers = self.containers(stopped=True)
if not containers:
- log.info("Creating %s..." % self._next_container_name(containers))
container = self.create_container(
insecure_registry=insecure_registry,
do_build=do_build,
@@ -256,6 +265,7 @@ def recreate_container(self, container, **override_options):
new_container = self.create_container(
do_build=False,
previous_container=container,
+ number=container.labels.get(LABEL_CONTAINER_NUMBER),
**override_options)
self.start_container(new_container)
container.remove()
@@ -280,7 +290,6 @@ def start_or_create_containers(
containers = self.containers(stopped=True)
if not containers:
- log.info("Creating %s..." % self._next_container_name(containers))
new_container = self.create_container(
insecure_registry=insecure_registry,
detach=detach,
@@ -302,14 +311,19 @@ def get_net_name(self):
else:
return
- def _next_container_name(self, all_containers, one_off=False):
- bits = [self.project, self.name]
- if one_off:
- bits.append('run')
- return '_'.join(bits + [str(self._next_container_number(all_containers))])
-
- def _next_container_number(self, all_containers):
- numbers = [parse_name(c.name).number for c in all_containers]
+ def get_container_name(self, number, one_off=False):
+ # TODO: Implement issue #652 here
+ return build_container_name(self.project, self.name, number, one_off)
+
+ # TODO: this would benefit from github.com/docker/docker/pull/11943
+ # to remove the need to inspect every container
+ def _next_container_number(self, one_off=False):
+ numbers = [
+ Container.from_ps(self.client, container).number
+ for container in self.client.containers(
+ all=True,
+ filters={'label': self.labels(one_off=one_off)})
+ ]
return 1 if not numbers else max(numbers) + 1
def _get_links(self, link_to_self):
@@ -369,6 +383,7 @@ def _get_net(self):
def _get_container_create_options(
self,
override_options,
+ number,
one_off=False,
previous_container=None):
container_options = dict(
@@ -376,9 +391,7 @@ def _get_container_create_options(
for k in DOCKER_CONFIG_KEYS if k in self.options)
container_options.update(override_options)
- container_options['name'] = self._next_container_name(
- self.containers(stopped=True, one_off=one_off),
- one_off)
+ container_options['name'] = self.get_container_name(number, one_off)
# If a qualified hostname was given, split it into an
# unqualified hostname and a domainname unless domainname
@@ -419,6 +432,11 @@ def _get_container_create_options(
if self.can_be_built():
container_options['image'] = self.full_name
+ container_options['labels'] = build_container_labels(
+ container_options.get('labels', {}),
+ self.labels(one_off=one_off),
+ number)
+
# Delete options which are only used when starting
for key in DOCKER_START_KEYS:
container_options.pop(key, None)
@@ -520,6 +538,13 @@ def full_name(self):
"""
return '%s_%s' % (self.project, self.name)
+ def labels(self, one_off=False):
+ return [
+ '{0}={1}'.format(LABEL_PROJECT, self.project),
+ '{0}={1}'.format(LABEL_SERVICE, self.name),
+ '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False")
+ ]
+
def can_be_scaled(self):
for port in self.options.get('ports', []):
if ':' in str(port):
@@ -585,23 +610,44 @@ def merge_volume_bindings(volumes_option, previous_container):
return volume_bindings
-NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
-
-
-def is_valid_name(name, one_off=False):
- match = NAME_RE.match(name)
- if match is None:
- return False
+def build_container_name(project, service, number, one_off=False):
+ bits = [project, service]
if one_off:
- return match.group(3) == 'run_'
- else:
- return match.group(3) is None
-
+ bits.append('run')
+ return '_'.join(bits + [str(number)])
+
+
+def build_container_labels(label_options, service_labels, number, one_off=False):
+ labels = label_options or {}
+ labels.update(label.split('=', 1) for label in service_labels)
+ labels[LABEL_CONTAINER_NUMBER] = str(number)
+ labels[LABEL_VERSION] = __version__
+ return labels
+
+
+def check_for_legacy_containers(
+ client,
+ project,
+ services,
+ stopped=False,
+ one_off=False):
+ """Check if there are containers named using the old naming convention
+ and warn the user that those containers may need to be migrated to
+ using labels, so that compose can find them.
+ """
+ for container in client.containers(all=stopped):
+ name = get_container_name(container)
+ for service in services:
+ prefix = '%s_%s_%s' % (project, service, 'run_' if one_off else '')
+ if not name.startswith(prefix):
+ continue
-def parse_name(name):
- match = NAME_RE.match(name)
- (project, service_name, _, suffix) = match.groups()
- return ServiceName(project, service_name, int(suffix))
+ log.warn(
+                "Compose found a container named %s without any "
+ "labels. As of compose 1.3.0 containers are identified with "
+ "labels instead of naming convention. If you'd like compose "
+ "to use this container, please run "
+ "`docker-compose --migrate-to-labels`" % (name,))
def parse_restart_spec(restart_config):
</patch>
Instance: celery__celery-4240
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Joining two chains loses the link_error of the last chain
https://github.com/celery/celery/blob/cbbf481801079f0e2cfbfe464c9ecfe3ccc7a067/celery/canvas.py#L408-L414
Should be something like
```python
link_error_sigs = other._with_list_option('link_error')
sig.tasks.extend(
reduce(
lambda t, s: t.on_error(s), link_error_sigs, t.clone())
for t in other.tasks)
```
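
For context, a minimal sketch of the scenario being described; the broker URL and task bodies here are placeholders, assuming a normally configured app:

```python
from celery import Celery, chain

app = Celery('example', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y

@app.task
def on_chain_error(request, exc, traceback):
    print('chain failed: %r' % (exc,))

first = chain(add.s(1, 2), add.s(4))
second = chain(add.s(8), add.s(16))
second.link_error(on_chain_error.s())

# Joining the two chains copies `second.tasks` into the result, but the
# link_error attached to `second` above is not carried over.
combined = first | second
```
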
</issue>
<code>
[start of README.rst]
1 .. image:: http://docs.celeryproject.org/en/latest/_images/celery-banner-small.png
2
3 |build-status| |license| |wheel| |pyversion| |pyimp|
4
5 :Version: 4.1.0 (latentcall)
6 :Web: http://celeryproject.org/
7 :Download: https://pypi.python.org/pypi/celery/
8 :Source: https://github.com/celery/celery/
9 :Keywords: task, queue, job, async, rabbitmq, amqp, redis,
10 python, distributed, actors
11
12 --
13
14 What's a Task Queue?
15 ====================
16
17 Task queues are used as a mechanism to distribute work across threads or
18 machines.
19
20 A task queue's input is a unit of work, called a task, dedicated worker
21 processes then constantly monitor the queue for new work to perform.
22
23 Celery communicates via messages, usually using a broker
24 to mediate between clients and workers. To initiate a task a client puts a
25 message on the queue, the broker then delivers the message to a worker.
26
27 A Celery system can consist of multiple workers and brokers, giving way
28 to high availability and horizontal scaling.
29
30 Celery is written in Python, but the protocol can be implemented in any
31 language. In addition to Python there's node-celery_ for Node.js,
32 and a `PHP client`_.
33
34 Language interoperability can also be achieved by using webhooks
35 in such a way that the client enqueues an URL to be requested by a worker.
36
37 .. _node-celery: https://github.com/mher/node-celery
38 .. _`PHP client`: https://github.com/gjedeer/celery-php
39
40 What do I need?
41 ===============
42
43 Celery version 4.1 runs on,
44
45 - Python (2.7, 3.4, 3.5, 3.6)
46 - PyPy (5.8)
47
48
49 This is the last version to support Python 2.7,
50 and from the next version (Celery 5.x) Python 3.5 or newer is required.
51
52 If you're running an older version of Python, you need to be running
53 an older version of Celery:
54
55 - Python 2.6: Celery series 3.1 or earlier.
56 - Python 2.5: Celery series 3.0 or earlier.
57 - Python 2.4 was Celery series 2.2 or earlier.
58
59 Celery is a project with minimal funding,
60 so we don't support Microsoft Windows.
61 Please don't open any issues related to that platform.
62
63 *Celery* is usually used with a message broker to send and receive messages.
64 The RabbitMQ, Redis transports are feature complete,
65 but there's also experimental support for a myriad of other solutions, including
66 using SQLite for local development.
67
68 *Celery* can run on a single machine, on multiple machines, or even
69 across datacenters.
70
71 Get Started
72 ===========
73
74 If this is the first time you're trying to use Celery, or you're
75 new to Celery 4.1 coming from previous versions then you should read our
76 getting started tutorials:
77
78 - `First steps with Celery`_
79
80 Tutorial teaching you the bare minimum needed to get started with Celery.
81
82 - `Next steps`_
83
84 A more complete overview, showing more features.
85
86 .. _`First steps with Celery`:
87 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
88
89 .. _`Next steps`:
90 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
91
92 Celery is...
93 =============
94
95 - **Simple**
96
97 Celery is easy to use and maintain, and does *not need configuration files*.
98
99 It has an active, friendly community you can talk to for support,
100 like at our `mailing-list`_, or the IRC channel.
101
102 Here's one of the simplest applications you can make::
103
104 from celery import Celery
105
106 app = Celery('hello', broker='amqp://guest@localhost//')
107
108 @app.task
109 def hello():
110 return 'hello world'
111
112 - **Highly Available**
113
114 Workers and clients will automatically retry in the event
115 of connection loss or failure, and some brokers support
116 HA in way of *Primary/Primary* or *Primary/Replica* replication.
117
118 - **Fast**
119
120 A single Celery process can process millions of tasks a minute,
121 with sub-millisecond round-trip latency (using RabbitMQ,
122 py-librabbitmq, and optimized settings).
123
124 - **Flexible**
125
126 Almost every part of *Celery* can be extended or used on its own,
127 Custom pool implementations, serializers, compression schemes, logging,
128 schedulers, consumers, producers, broker transports, and much more.
129
130 It supports...
131 ================
132
133 - **Message Transports**
134
135 - RabbitMQ_, Redis_, Amazon SQS
136
137 - **Concurrency**
138
139 - Prefork, Eventlet_, gevent_, single threaded (``solo``)
140
141 - **Result Stores**
142
143 - AMQP, Redis
144 - memcached
145 - SQLAlchemy, Django ORM
146 - Apache Cassandra, IronCache, Elasticsearch
147
148 - **Serialization**
149
150 - *pickle*, *json*, *yaml*, *msgpack*.
151 - *zlib*, *bzip2* compression.
152 - Cryptographic message signing.
153
154 .. _`Eventlet`: http://eventlet.net/
155 .. _`gevent`: http://gevent.org/
156
157 .. _RabbitMQ: https://rabbitmq.com
158 .. _Redis: https://redis.io
159 .. _SQLAlchemy: http://sqlalchemy.org
160
161 Framework Integration
162 =====================
163
164 Celery is easy to integrate with web frameworks, some of which even have
165 integration packages:
166
167 +--------------------+------------------------+
168 | `Django`_ | not needed |
169 +--------------------+------------------------+
170 | `Pyramid`_ | `pyramid_celery`_ |
171 +--------------------+------------------------+
172 | `Pylons`_ | `celery-pylons`_ |
173 +--------------------+------------------------+
174 | `Flask`_ | not needed |
175 +--------------------+------------------------+
176 | `web2py`_ | `web2py-celery`_ |
177 +--------------------+------------------------+
178 | `Tornado`_ | `tornado-celery`_ |
179 +--------------------+------------------------+
180
181 The integration packages aren't strictly necessary, but they can make
182 development easier, and sometimes they add important hooks like closing
183 database connections at ``fork``.
184
185 .. _`Django`: https://djangoproject.com/
186 .. _`Pylons`: http://pylonsproject.org/
187 .. _`Flask`: http://flask.pocoo.org/
188 .. _`web2py`: http://web2py.com/
189 .. _`Bottle`: https://bottlepy.org/
190 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
191 .. _`pyramid_celery`: https://pypi.python.org/pypi/pyramid_celery/
192 .. _`celery-pylons`: https://pypi.python.org/pypi/celery-pylons
193 .. _`web2py-celery`: https://code.google.com/p/web2py-celery/
194 .. _`Tornado`: http://www.tornadoweb.org/
195 .. _`tornado-celery`: https://github.com/mher/tornado-celery/
196
197 .. _celery-documentation:
198
199 Documentation
200 =============
201
202 The `latest documentation`_ is hosted at Read The Docs, containing user guides,
203 tutorials, and an API reference.
204
205 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
206
207 .. _celery-installation:
208
209 Installation
210 ============
211
212 You can install Celery either via the Python Package Index (PyPI)
213 or from source.
214
215 To install using ``pip``:
216
217 ::
218
219
220 $ pip install -U Celery
221
222 .. _bundles:
223
224 Bundles
225 -------
226
227 Celery also defines a group of bundles that can be used
228 to install Celery and the dependencies for a given feature.
229
230 You can specify these in your requirements or on the ``pip``
231 command-line by using brackets. Multiple bundles can be specified by
232 separating them by commas.
233
234 ::
235
236
237 $ pip install "celery[librabbitmq]"
238
239 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
240
241 The following bundles are available:
242
243 Serializers
244 ~~~~~~~~~~~
245
246 :``celery[auth]``:
247 for using the ``auth`` security serializer.
248
249 :``celery[msgpack]``:
250 for using the msgpack serializer.
251
252 :``celery[yaml]``:
253 for using the yaml serializer.
254
255 Concurrency
256 ~~~~~~~~~~~
257
258 :``celery[eventlet]``:
259 for using the ``eventlet`` pool.
260
261 :``celery[gevent]``:
262 for using the ``gevent`` pool.
263
264 Transports and Backends
265 ~~~~~~~~~~~~~~~~~~~~~~~
266
267 :``celery[librabbitmq]``:
268 for using the librabbitmq C library.
269
270 :``celery[redis]``:
271 for using Redis as a message transport or as a result backend.
272
273 :``celery[sqs]``:
274 for using Amazon SQS as a message transport (*experimental*).
275
276 :``celery[tblib]``:
277 for using the ``task_remote_tracebacks`` feature.
278
279 :``celery[memcache]``:
280 for using Memcached as a result backend (using ``pylibmc``)
281
282 :``celery[pymemcache]``:
283 for using Memcached as a result backend (pure-Python implementation).
284
285 :``celery[cassandra]``:
286 for using Apache Cassandra as a result backend with DataStax driver.
287
288 :``celery[couchbase]``:
289 for using Couchbase as a result backend.
290
291 :``celery[elasticsearch]``:
292 for using Elasticsearch as a result backend.
293
294 :``celery[riak]``:
295 for using Riak as a result backend.
296
297 :``celery[zookeeper]``:
298 for using Zookeeper as a message transport.
299
300 :``celery[sqlalchemy]``:
301 for using SQLAlchemy as a result backend (*supported*).
302
303 :``celery[pyro]``:
304 for using the Pyro4 message transport (*experimental*).
305
306 :``celery[slmq]``:
307 for using the SoftLayer Message Queue transport (*experimental*).
308
309 :``celery[consul]``:
310 for using the Consul.io Key/Value store as a message transport or result backend (*experimental*).
311
312 :``celery[django]``:
313 specifies the lowest version possible for Django support.
314
315 You should probably not use this in your requirements, it's here
316 for informational purposes only.
317
318
319 .. _celery-installing-from-source:
320
321 Downloading and installing from source
322 --------------------------------------
323
324 Download the latest version of Celery from PyPI:
325
326 https://pypi.python.org/pypi/celery/
327
328 You can install it by doing the following:
329
330 ::
331
332
333 $ tar xvfz celery-0.0.0.tar.gz
334 $ cd celery-0.0.0
335 $ python setup.py build
336 # python setup.py install
337
338 The last command must be executed as a privileged user if
339 you aren't currently using a virtualenv.
340
341 .. _celery-installing-from-git:
342
343 Using the development version
344 -----------------------------
345
346 With pip
347 ~~~~~~~~
348
349 The Celery development version also requires the development
350 versions of ``kombu``, ``amqp``, ``billiard``, and ``vine``.
351
352 You can install the latest snapshot of these using the following
353 pip commands:
354
355 ::
356
357
358 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
359 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
360 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
361 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
362 $ pip install https://github.com/celery/vine/zipball/master#egg=vine
363
364 With git
365 ~~~~~~~~
366
367 Please see the Contributing section.
368
369 .. _getting-help:
370
371 Getting Help
372 ============
373
374 .. _mailing-list:
375
376 Mailing list
377 ------------
378
379 For discussions about the usage, development, and future of Celery,
380 please join the `celery-users`_ mailing list.
381
382 .. _`celery-users`: https://groups.google.com/group/celery-users/
383
384 .. _irc-channel:
385
386 IRC
387 ---
388
389 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
390 network.
391
392 .. _`Freenode`: https://freenode.net
393
394 .. _bug-tracker:
395
396 Bug tracker
397 ===========
398
399 If you have any suggestions, bug reports, or annoyances please report them
400 to our issue tracker at https://github.com/celery/celery/issues/
401
402 .. _wiki:
403
404 Wiki
405 ====
406
407 https://wiki.github.com/celery/celery/
408
409 .. _contributing-short:
410
411 Contributing
412 ============
413
414 Development of `celery` happens at GitHub: https://github.com/celery/celery
415
416 You're highly encouraged to participate in the development
417 of `celery`. If you don't like GitHub (for some reason) you're welcome
418 to send regular patches.
419
420 Be sure to also read the `Contributing to Celery`_ section in the
421 documentation.
422
423 .. _`Contributing to Celery`:
424 http://docs.celeryproject.org/en/master/contributing.html
425
426 .. _license:
427
428 License
429 =======
430
431 This software is licensed under the `New BSD License`. See the ``LICENSE``
432 file in the top distribution directory for the full license text.
433
434 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
435
436 .. |build-status| image:: https://secure.travis-ci.org/celery/celery.png?branch=master
437 :alt: Build status
438 :target: https://travis-ci.org/celery/celery
439
440 .. |coverage| image:: https://codecov.io/github/celery/celery/coverage.svg?branch=master
441 :target: https://codecov.io/github/celery/celery?branch=master
442
443 .. |license| image:: https://img.shields.io/pypi/l/celery.svg
444 :alt: BSD License
445 :target: https://opensource.org/licenses/BSD-3-Clause
446
447 .. |wheel| image:: https://img.shields.io/pypi/wheel/celery.svg
448 :alt: Celery can be installed via wheel
449 :target: https://pypi.python.org/pypi/celery/
450
451 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/celery.svg
452 :alt: Supported Python versions.
453 :target: https://pypi.python.org/pypi/celery/
454
455 .. |pyimp| image:: https://img.shields.io/pypi/implementation/celery.svg
456 :alt: Support Python implementations.
457 :target: https://pypi.python.org/pypi/celery/
458
459
[end of README.rst]
[start of celery/apps/worker.py]
1 # -*- coding: utf-8 -*-
2 """Worker command-line program.
3
4 This module is the 'program-version' of :mod:`celery.worker`.
5
6 It does everything necessary to run that module
7 as an actual application, like installing signal handlers,
8 platform tweaks, and so on.
9 """
10 from __future__ import absolute_import, print_function, unicode_literals
11
12 import logging
13 import os
14 import platform as _platform
15 import sys
16
17 from datetime import datetime
18 from functools import partial
19
20 from billiard.process import current_process
21 from kombu.utils.encoding import safe_str
22
23 from celery import VERSION_BANNER
24 from celery import platforms
25 from celery import signals
26 from celery.app import trace
27 from celery.exceptions import WorkerShutdown, WorkerTerminate
28 from celery.five import string, string_t
29 from celery.loaders.app import AppLoader
30 from celery.platforms import EX_FAILURE, EX_OK, check_privileges, isatty
31 from celery.utils import static
32 from celery.utils import term
33 from celery.utils.debug import cry
34 from celery.utils.imports import qualname
35 from celery.utils.log import get_logger, in_sighandler, set_in_sighandler
36 from celery.utils.text import pluralize
37 from celery.worker import WorkController
38
39 __all__ = ['Worker']
40
41 logger = get_logger(__name__)
42 is_jython = sys.platform.startswith('java')
43 is_pypy = hasattr(sys, 'pypy_version_info')
44
45 ARTLINES = [
46 ' --------------',
47 '---- **** -----',
48 '--- * *** * --',
49 '-- * - **** ---',
50 '- ** ----------',
51 '- ** ----------',
52 '- ** ----------',
53 '- ** ----------',
54 '- *** --- * ---',
55 '-- ******* ----',
56 '--- ***** -----',
57 ' --------------',
58 ]
59
60 BANNER = """\
61 {hostname} v{version}
62
63 {platform} {timestamp}
64
65 [config]
66 .> app: {app}
67 .> transport: {conninfo}
68 .> results: {results}
69 .> concurrency: {concurrency}
70 .> task events: {events}
71
72 [queues]
73 {queues}
74 """
75
76 EXTRA_INFO_FMT = """
77 [tasks]
78 {tasks}
79 """
80
81
82 def active_thread_count():
83 from threading import enumerate
84 return sum(1 for t in enumerate()
85 if not t.name.startswith('Dummy-'))
86
87
88 def safe_say(msg):
89 print('\n{0}'.format(msg), file=sys.__stderr__)
90
91
92 class Worker(WorkController):
93 """Worker as a program."""
94
95 def on_before_init(self, quiet=False, **kwargs):
96 self.quiet = quiet
97 trace.setup_worker_optimizations(self.app, self.hostname)
98
99 # this signal can be used to set up configuration for
100 # workers by name.
101 signals.celeryd_init.send(
102 sender=self.hostname, instance=self,
103 conf=self.app.conf, options=kwargs,
104 )
105 check_privileges(self.app.conf.accept_content)
106
107 def on_after_init(self, purge=False, no_color=None,
108 redirect_stdouts=None, redirect_stdouts_level=None,
109 **kwargs):
110 self.redirect_stdouts = self.app.either(
111 'worker_redirect_stdouts', redirect_stdouts)
112 self.redirect_stdouts_level = self.app.either(
113 'worker_redirect_stdouts_level', redirect_stdouts_level)
114 super(Worker, self).setup_defaults(**kwargs)
115 self.purge = purge
116 self.no_color = no_color
117 self._isatty = isatty(sys.stdout)
118 self.colored = self.app.log.colored(
119 self.logfile,
120 enabled=not no_color if no_color is not None else no_color
121 )
122
123 def on_init_blueprint(self):
124 self._custom_logging = self.setup_logging()
125 # apply task execution optimizations
126 # -- This will finalize the app!
127 trace.setup_worker_optimizations(self.app, self.hostname)
128
129 def on_start(self):
130 app = self.app
131 WorkController.on_start(self)
132
133 # this signal can be used to, for example, change queues after
134 # the -Q option has been applied.
135 signals.celeryd_after_setup.send(
136 sender=self.hostname, instance=self, conf=app.conf,
137 )
138
139 if self.purge:
140 self.purge_messages()
141
142 if not self.quiet:
143 self.emit_banner()
144
145 self.set_process_status('-active-')
146 self.install_platform_tweaks(self)
147 if not self._custom_logging and self.redirect_stdouts:
148 app.log.redirect_stdouts(self.redirect_stdouts_level)
149
150 def emit_banner(self):
151 # Dump configuration to screen so we have some basic information
152 # for when users sends bug reports.
153 use_image = term.supports_images()
154 if use_image:
155 print(term.imgcat(static.logo()))
156 print(safe_str(''.join([
157 string(self.colored.cyan(
158 ' \n', self.startup_info(artlines=not use_image))),
159 string(self.colored.reset(self.extra_info() or '')),
160 ])), file=sys.__stdout__)
161
162 def on_consumer_ready(self, consumer):
163 signals.worker_ready.send(sender=consumer)
164 logger.info('%s ready.', safe_str(self.hostname))
165
166 def setup_logging(self, colorize=None):
167 if colorize is None and self.no_color is not None:
168 colorize = not self.no_color
169 return self.app.log.setup(
170 self.loglevel, self.logfile,
171 redirect_stdouts=False, colorize=colorize, hostname=self.hostname,
172 )
173
174 def purge_messages(self):
175 with self.app.connection_for_write() as connection:
176 count = self.app.control.purge(connection=connection)
177 if count: # pragma: no cover
178 print('purge: Erased {0} {1} from the queue.\n'.format(
179 count, pluralize(count, 'message')))
180
181 def tasklist(self, include_builtins=True, sep='\n', int_='celery.'):
182 return sep.join(
183 ' . {0}'.format(task) for task in sorted(self.app.tasks)
184 if (not task.startswith(int_) if not include_builtins else task)
185 )
186
187 def extra_info(self):
188 if self.loglevel is None:
189 return
190 if self.loglevel <= logging.INFO:
191 include_builtins = self.loglevel <= logging.DEBUG
192 tasklist = self.tasklist(include_builtins=include_builtins)
193 return EXTRA_INFO_FMT.format(tasks=tasklist)
194
195 def startup_info(self, artlines=True):
196 app = self.app
197 concurrency = string(self.concurrency)
198 appr = '{0}:{1:#x}'.format(app.main or '__main__', id(app))
199 if not isinstance(app.loader, AppLoader):
200 loader = qualname(app.loader)
201 if loader.startswith('celery.loaders'): # pragma: no cover
202 loader = loader[14:]
203 appr += ' ({0})'.format(loader)
204 if self.autoscale:
205 max, min = self.autoscale
206 concurrency = '{{min={0}, max={1}}}'.format(min, max)
207 pool = self.pool_cls
208 if not isinstance(pool, string_t):
209 pool = pool.__module__
210 concurrency += ' ({0})'.format(pool.split('.')[-1])
211 events = 'ON'
212 if not self.task_events:
213 events = 'OFF (enable -E to monitor tasks in this worker)'
214
215 banner = BANNER.format(
216 app=appr,
217 hostname=safe_str(self.hostname),
218 timestamp=datetime.now().replace(microsecond=0),
219 version=VERSION_BANNER,
220 conninfo=self.app.connection().as_uri(),
221 results=self.app.backend.as_uri(),
222 concurrency=concurrency,
223 platform=safe_str(_platform.platform()),
224 events=events,
225 queues=app.amqp.queues.format(indent=0, indent_first=False),
226 ).splitlines()
227
228 # integrate the ASCII art.
229 if artlines:
230 for i, _ in enumerate(banner):
231 try:
232 banner[i] = ' '.join([ARTLINES[i], banner[i]])
233 except IndexError:
234 banner[i] = ' ' * 16 + banner[i]
235 return '\n'.join(banner) + '\n'
236
237 def install_platform_tweaks(self, worker):
238 """Install platform specific tweaks and workarounds."""
239 if self.app.IS_macOS:
240 self.macOS_proxy_detection_workaround()
241
242 # Install signal handler so SIGHUP restarts the worker.
243 if not self._isatty:
244 # only install HUP handler if detached from terminal,
245 # so closing the terminal window doesn't restart the worker
246 # into the background.
247 if self.app.IS_macOS:
248 # macOS can't exec from a process using threads.
249 # See https://github.com/celery/celery/issues#issue/152
250 install_HUP_not_supported_handler(worker)
251 else:
252 install_worker_restart_handler(worker)
253 install_worker_term_handler(worker)
254 install_worker_term_hard_handler(worker)
255 install_worker_int_handler(worker)
256 install_cry_handler()
257 install_rdb_handler()
258
259 def macOS_proxy_detection_workaround(self):
260 """See https://github.com/celery/celery/issues#issue/161."""
261 os.environ.setdefault('celery_dummy_proxy', 'set_by_celeryd')
262
263 def set_process_status(self, info):
264 return platforms.set_mp_process_title(
265 'celeryd',
266 info='{0} ({1})'.format(info, platforms.strargv(sys.argv)),
267 hostname=self.hostname,
268 )
269
270
271 def _shutdown_handler(worker, sig='TERM', how='Warm',
272 exc=WorkerShutdown, callback=None, exitcode=EX_OK):
273 def _handle_request(*args):
274 with in_sighandler():
275 from celery.worker import state
276 if current_process()._name == 'MainProcess':
277 if callback:
278 callback(worker)
279 safe_say('worker: {0} shutdown (MainProcess)'.format(how))
280 signals.worker_shutting_down.send(
281 sender=worker.hostname, sig=sig, how=how,
282 exitcode=exitcode,
283 )
284 if active_thread_count() > 1:
285 setattr(state, {'Warm': 'should_stop',
286 'Cold': 'should_terminate'}[how], exitcode)
287 else:
288 raise exc(exitcode)
289 _handle_request.__name__ = str('worker_{0}'.format(how))
290 platforms.signals[sig] = _handle_request
291
292
293 install_worker_term_handler = partial(
294 _shutdown_handler, sig='SIGTERM', how='Warm', exc=WorkerShutdown,
295 )
296 if not is_jython: # pragma: no cover
297 install_worker_term_hard_handler = partial(
298 _shutdown_handler, sig='SIGQUIT', how='Cold', exc=WorkerTerminate,
299 exitcode=EX_FAILURE,
300 )
301 else: # pragma: no cover
302 install_worker_term_handler = \
303 install_worker_term_hard_handler = lambda *a, **kw: None
304
305
306 def on_SIGINT(worker):
307 safe_say('worker: Hitting Ctrl+C again will terminate all running tasks!')
308 install_worker_term_hard_handler(worker, sig='SIGINT')
309
310
311 if not is_jython: # pragma: no cover
312 install_worker_int_handler = partial(
313 _shutdown_handler, sig='SIGINT', callback=on_SIGINT,
314 exitcode=EX_FAILURE,
315 )
316 else: # pragma: no cover
317 def install_worker_int_handler(*args, **kwargs):
318 pass
319
320
321 def _reload_current_worker():
322 platforms.close_open_fds([
323 sys.__stdin__, sys.__stdout__, sys.__stderr__,
324 ])
325 os.execv(sys.executable, [sys.executable] + sys.argv)
326
327
328 def install_worker_restart_handler(worker, sig='SIGHUP'):
329
330 def restart_worker_sig_handler(*args):
331 """Signal handler restarting the current python program."""
332 set_in_sighandler(True)
333 safe_say('Restarting celery worker ({0})'.format(' '.join(sys.argv)))
334 import atexit
335 atexit.register(_reload_current_worker)
336 from celery.worker import state
337 state.should_stop = EX_OK
338 platforms.signals[sig] = restart_worker_sig_handler
339
340
341 def install_cry_handler(sig='SIGUSR1'):
342 # Jython/PyPy does not have sys._current_frames
343 if is_jython or is_pypy: # pragma: no cover
344 return
345
346 def cry_handler(*args):
347 """Signal handler logging the stack-trace of all active threads."""
348 with in_sighandler():
349 safe_say(cry())
350 platforms.signals[sig] = cry_handler
351
352
353 def install_rdb_handler(envvar='CELERY_RDBSIG',
354 sig='SIGUSR2'): # pragma: no cover
355
356 def rdb_handler(*args):
357 """Signal handler setting a rdb breakpoint at the current frame."""
358 with in_sighandler():
359 from celery.contrib.rdb import set_trace, _frame
360 # gevent does not pass standard signal handler args
361 frame = args[1] if args else _frame().f_back
362 set_trace(frame)
363 if os.environ.get(envvar):
364 platforms.signals[sig] = rdb_handler
365
366
367 def install_HUP_not_supported_handler(worker, sig='SIGHUP'):
368
369 def warn_on_HUP_handler(signum, frame):
370 with in_sighandler():
371 safe_say('{sig} not supported: Restarting with {sig} is '
372 'unstable on this platform!'.format(sig=sig))
373 platforms.signals[sig] = warn_on_HUP_handler
374
[end of celery/apps/worker.py]
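
The handlers above are all installed through the same ``celery.platforms.signals`` mapping. As a rough usage sketch (the ``dump_state`` handler and the reuse of ``SIGUSR2`` are illustrative assumptions; ``install_rdb_handler`` already claims that signal when ``CELERY_RDBSIG`` is set):

```python
from celery import platforms

def dump_state(signum, frame):
    # Placeholder action; the built-in handlers above use safe_say()/cry()
    # and in_sighandler() instead of a bare print.
    print('received signal %r' % (signum,))

# Same assignment style as install_cry_handler()/install_rdb_handler():
platforms.signals['SIGUSR2'] = dump_state
```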
[start of celery/backends/cassandra.py]
1 # -*- coding: utf-8 -*-
2 """Apache Cassandra result store backend using the DataStax driver."""
3 from __future__ import absolute_import, unicode_literals
4 import sys
5 from celery import states
6 from celery.exceptions import ImproperlyConfigured
7 from celery.utils.log import get_logger
8 from .base import BaseBackend
9 try: # pragma: no cover
10 import cassandra
11 import cassandra.auth
12 import cassandra.cluster
13 except ImportError: # pragma: no cover
14 cassandra = None # noqa
15
16
17 __all__ = ['CassandraBackend']
18
19 logger = get_logger(__name__)
20
21 E_NO_CASSANDRA = """
22 You need to install the cassandra-driver library to
23 use the Cassandra backend. See https://github.com/datastax/python-driver
24 """
25
26 E_NO_SUCH_CASSANDRA_AUTH_PROVIDER = """
27 CASSANDRA_AUTH_PROVIDER you provided is not a valid auth_provider class.
28 See https://datastax.github.io/python-driver/api/cassandra/auth.html.
29 """
30
31 Q_INSERT_RESULT = """
32 INSERT INTO {table} (
33 task_id, status, result, date_done, traceback, children) VALUES (
34 %s, %s, %s, %s, %s, %s) {expires};
35 """
36
37 Q_SELECT_RESULT = """
38 SELECT status, result, date_done, traceback, children
39 FROM {table}
40 WHERE task_id=%s
41 LIMIT 1
42 """
43
44 Q_CREATE_RESULT_TABLE = """
45 CREATE TABLE {table} (
46 task_id text,
47 status text,
48 result blob,
49 date_done timestamp,
50 traceback blob,
51 children blob,
52 PRIMARY KEY ((task_id), date_done)
53 ) WITH CLUSTERING ORDER BY (date_done DESC);
54 """
55
56 Q_EXPIRES = """
57 USING TTL {0}
58 """
59
60 if sys.version_info[0] == 3:
61 def buf_t(x):
62 return bytes(x, 'utf8')
63 else:
64 buf_t = buffer # noqa
65
66
67 class CassandraBackend(BaseBackend):
68 """Cassandra backend utilizing DataStax driver.
69
70 Raises:
71 celery.exceptions.ImproperlyConfigured:
72 if module :pypi:`cassandra-driver` is not available,
73 or if the :setting:`cassandra_servers` setting is not set.
74 """
75
76 #: List of Cassandra servers with format: ``hostname``.
77 servers = None
78
79 supports_autoexpire = True # autoexpire supported via entry_ttl
80
81 def __init__(self, servers=None, keyspace=None, table=None, entry_ttl=None,
82 port=9042, **kwargs):
83 super(CassandraBackend, self).__init__(**kwargs)
84
85 if not cassandra:
86 raise ImproperlyConfigured(E_NO_CASSANDRA)
87
88 conf = self.app.conf
89 self.servers = servers or conf.get('cassandra_servers', None)
90 self.port = port or conf.get('cassandra_port', None)
91 self.keyspace = keyspace or conf.get('cassandra_keyspace', None)
92 self.table = table or conf.get('cassandra_table', None)
93
94 if not self.servers or not self.keyspace or not self.table:
95 raise ImproperlyConfigured('Cassandra backend not configured.')
96
97 expires = entry_ttl or conf.get('cassandra_entry_ttl', None)
98
99 self.cqlexpires = (
100 Q_EXPIRES.format(expires) if expires is not None else '')
101
102 read_cons = conf.get('cassandra_read_consistency') or 'LOCAL_QUORUM'
103 write_cons = conf.get('cassandra_write_consistency') or 'LOCAL_QUORUM'
104
105 self.read_consistency = getattr(
106 cassandra.ConsistencyLevel, read_cons,
107 cassandra.ConsistencyLevel.LOCAL_QUORUM)
108 self.write_consistency = getattr(
109 cassandra.ConsistencyLevel, write_cons,
110 cassandra.ConsistencyLevel.LOCAL_QUORUM)
111
112 self.auth_provider = None
113 auth_provider = conf.get('cassandra_auth_provider', None)
114 auth_kwargs = conf.get('cassandra_auth_kwargs', None)
115 if auth_provider and auth_kwargs:
116 auth_provider_class = getattr(cassandra.auth, auth_provider, None)
117 if not auth_provider_class:
118 raise ImproperlyConfigured(E_NO_SUCH_CASSANDRA_AUTH_PROVIDER)
119 self.auth_provider = auth_provider_class(**auth_kwargs)
120
121 self._connection = None
122 self._session = None
123 self._write_stmt = None
124 self._read_stmt = None
125 self._make_stmt = None
126
127 def process_cleanup(self):
128 if self._connection is not None:
129 self._connection.shutdown() # also shuts down _session
130 self._connection = None
131 self._session = None
132
133 def _get_connection(self, write=False):
134 """Prepare the connection for action.
135
136 Arguments:
137 write (bool): are we a writer?
138 """
139 if self._connection is not None:
140 return
141 try:
142 self._connection = cassandra.cluster.Cluster(
143 self.servers, port=self.port,
144 auth_provider=self.auth_provider)
145 self._session = self._connection.connect(self.keyspace)
146
147 # We're forced to do concatenation below, as formatting would
148 # blow up on superficial %s that'll be processed by Cassandra
149 self._write_stmt = cassandra.query.SimpleStatement(
150 Q_INSERT_RESULT.format(
151 table=self.table, expires=self.cqlexpires),
152 )
153 self._write_stmt.consistency_level = self.write_consistency
154
155 self._read_stmt = cassandra.query.SimpleStatement(
156 Q_SELECT_RESULT.format(table=self.table),
157 )
158 self._read_stmt.consistency_level = self.read_consistency
159
160 if write:
161 # Only possible writers "workers" are allowed to issue
162 # CREATE TABLE. This is to prevent conflicting situations
163 # where both task-creator and task-executor would issue it
164 # at the same time.
165
166 # Anyway; if you're doing anything critical, you should
167 # have created this table in advance, in which case
168 # this query will be a no-op (AlreadyExists)
169 self._make_stmt = cassandra.query.SimpleStatement(
170 Q_CREATE_RESULT_TABLE.format(table=self.table),
171 )
172 self._make_stmt.consistency_level = self.write_consistency
173
174 try:
175 self._session.execute(self._make_stmt)
176 except cassandra.AlreadyExists:
177 pass
178
179 except cassandra.OperationTimedOut:
180 # a heavily loaded or gone Cassandra cluster failed to respond.
181 # leave this class in a consistent state
182 if self._connection is not None:
183 self._connection.shutdown() # also shuts down _session
184
185 self._connection = None
186 self._session = None
187 raise # we did fail after all - reraise
188
189 def _store_result(self, task_id, result, state,
190 traceback=None, request=None, **kwargs):
191 """Store return value and state of an executed task."""
192 self._get_connection(write=True)
193
194 self._session.execute(self._write_stmt, (
195 task_id,
196 state,
197 buf_t(self.encode(result)),
198 self.app.now(),
199 buf_t(self.encode(traceback)),
200 buf_t(self.encode(self.current_task_children(request)))
201 ))
202
203 def as_uri(self, include_password=True):
204 return 'cassandra://'
205
206 def _get_task_meta_for(self, task_id):
207 """Get task meta-data for a task by id."""
208 self._get_connection()
209
210 res = self._session.execute(self._read_stmt, (task_id, ))
211 if not res:
212 return {'status': states.PENDING, 'result': None}
213
214 status, result, date_done, traceback, children = res[0]
215
216 return self.meta_from_decoded({
217 'task_id': task_id,
218 'status': status,
219 'result': self.decode(result),
220 'date_done': date_done.strftime('%Y-%m-%dT%H:%M:%SZ'),
221 'traceback': self.decode(traceback),
222 'children': self.decode(children),
223 })
224
225 def __reduce__(self, args=(), kwargs={}):
226 kwargs.update(
227 dict(servers=self.servers,
228 keyspace=self.keyspace,
229 table=self.table))
230 return super(CassandraBackend, self).__reduce__(args, kwargs)
231
[end of celery/backends/cassandra.py]
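
For reference, a minimal configuration sketch for the backend above. The keys mirror the ``conf.get(...)`` lookups in ``CassandraBackend.__init__``; the ``result_backend = 'cassandra://'`` URL (matching ``as_uri()`` above) is assumed to be how the backend is selected, and the host, keyspace and table names are placeholders.

```python
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')
app.conf.update(
    result_backend='cassandra://',             # route results through CassandraBackend
    cassandra_servers=['127.0.0.1'],           # contact points (see `servers` above)
    cassandra_port=9042,
    cassandra_keyspace='celery',
    cassandra_table='tasks',
    cassandra_entry_ttl=86400,                 # seconds; rendered into Q_EXPIRES as "USING TTL"
    cassandra_read_consistency='LOCAL_QUORUM',
    cassandra_write_consistency='LOCAL_QUORUM',
)
```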
[start of celery/utils/saferepr.py]
1 # -*- coding: utf-8 -*-
2 """Streaming, truncating, non-recursive version of :func:`repr`.
3
4 Differences from regular :func:`repr`:
5
6 - Sets are represented the Python 3 way: ``{1, 2}`` vs ``set([1, 2])``.
7 - Unicode strings do not have the ``u'`` prefix, even on Python 2.
8 - Empty set formatted as ``set()`` (Python 3), not ``set([])`` (Python 2).
9 - Longs don't have the ``L`` suffix.
10
11 Very slow with no limits, super quick with limits.
12 """
13 from __future__ import absolute_import, unicode_literals
14
15 import sys
16 import traceback
17
18 from collections import deque, namedtuple
19
20 from decimal import Decimal
21 from itertools import chain
22 from numbers import Number
23 from pprint import _recursion
24
25 from celery.five import items, text_t
26
27 from .text import truncate
28
29 __all__ = ['saferepr', 'reprstream']
30
31 # pylint: disable=redefined-outer-name
32 # We cache globals and attribute lookups, so disable this warning.
33
34 IS_PY3 = sys.version_info[0] == 3
35
36 if IS_PY3: # pragma: no cover
37 range_t = (range, )
38 else:
39 class range_t(object): # noqa
40 pass
41
42 #: Node representing literal text.
43 #: - .value: is the literal text value
44 #: - .truncate: specifies if this text can be truncated, for things like
45 #: LIT_DICT_END this will be False, as we always display
46 #: the ending brackets, e.g: [[[1, 2, 3, ...,], ..., ]]
47 #: - .direction: If +1 the current level is increment by one,
48 #: if -1 the current level is decremented by one, and
49 #: if 0 the current level is unchanged.
50 _literal = namedtuple('_literal', ('value', 'truncate', 'direction'))
51
52 #: Node representing a dictionary key.
53 _key = namedtuple('_key', ('value',))
54
55 #: Node representing quoted text, e.g. a string value.
56 _quoted = namedtuple('_quoted', ('value',))
57
58
59 #: Recursion protection.
60 _dirty = namedtuple('_dirty', ('objid',))
61
62 #: Types that are represented as chars.
63 chars_t = (bytes, text_t)
64
65 #: Types that are regarded as safe to call repr on.
66 safe_t = (Number,)
67
68 #: Set types.
69 set_t = (frozenset, set)
70
71 LIT_DICT_START = _literal('{', False, +1)
72 LIT_DICT_KVSEP = _literal(': ', True, 0)
73 LIT_DICT_END = _literal('}', False, -1)
74 LIT_LIST_START = _literal('[', False, +1)
75 LIT_LIST_END = _literal(']', False, -1)
76 LIT_LIST_SEP = _literal(', ', True, 0)
77 LIT_SET_START = _literal('{', False, +1)
78 LIT_SET_END = _literal('}', False, -1)
79 LIT_TUPLE_START = _literal('(', False, +1)
80 LIT_TUPLE_END = _literal(')', False, -1)
81 LIT_TUPLE_END_SV = _literal(',)', False, -1)
82
83
84 def saferepr(o, maxlen=None, maxlevels=3, seen=None):
85 # type: (Any, int, int, Set) -> str
86 """Safe version of :func:`repr`.
87
88 Warning:
89 Make sure you set the maxlen argument, or it will be very slow
90 for recursive objects. With the maxlen set, it's often faster
91 than built-in repr.
92 """
93 return ''.join(_saferepr(
94 o, maxlen=maxlen, maxlevels=maxlevels, seen=seen
95 ))
96
97
98 def _chaindict(mapping,
99 LIT_DICT_KVSEP=LIT_DICT_KVSEP,
100 LIT_LIST_SEP=LIT_LIST_SEP):
101 # type: (Dict, _literal, _literal) -> Iterator[Any]
102 size = len(mapping)
103 for i, (k, v) in enumerate(items(mapping)):
104 yield _key(k)
105 yield LIT_DICT_KVSEP
106 yield v
107 if i < (size - 1):
108 yield LIT_LIST_SEP
109
110
111 def _chainlist(it, LIT_LIST_SEP=LIT_LIST_SEP):
112 # type: (List) -> Iterator[Any]
113 size = len(it)
114 for i, v in enumerate(it):
115 yield v
116 if i < (size - 1):
117 yield LIT_LIST_SEP
118
119
120 def _repr_empty_set(s):
121 # type: (Set) -> str
122 return '%s()' % (type(s).__name__,)
123
124
125 def _safetext(val):
126 # type: (AnyStr) -> str
127 if isinstance(val, bytes):
128 try:
129 val.encode('utf-8')
130 except UnicodeDecodeError:
131 # is bytes with unrepresentable characters, attempt
132 # to convert back to unicode
133 return val.decode('utf-8', errors='backslashreplace')
134 return val
135
136
137 def _format_binary_bytes(val, maxlen, ellipsis='...'):
138 # type: (bytes, int, str) -> str
139 if maxlen and len(val) > maxlen:
140 # we don't want to copy all the data, just take what we need.
141 chunk = memoryview(val)[:maxlen].tobytes()
142 return _bytes_prefix("'{0}{1}'".format(
143 _repr_binary_bytes(chunk), ellipsis))
144 return _bytes_prefix("'{0}'".format(_repr_binary_bytes(val)))
145
146
147 def _bytes_prefix(s):
148 return 'b' + s if IS_PY3 else s
149
150
151 def _repr_binary_bytes(val):
152 # type: (bytes) -> str
153 try:
154 return val.decode('utf-8')
155 except UnicodeDecodeError:
156 # possibly not unicode, but binary data so format as hex.
157 try:
158 ashex = val.hex
159 except AttributeError: # pragma: no cover
160 # Python 3.4
161 return val.decode('utf-8', errors='replace')
162 else:
163 # Python 3.5+
164 return ashex()
165
166
167 def _format_chars(val, maxlen):
168 # type: (AnyStr, int) -> str
169 if isinstance(val, bytes): # pragma: no cover
170 return _format_binary_bytes(val, maxlen)
171 else:
172 return "'{0}'".format(truncate(val, maxlen))
173
174
175 def _repr(obj):
176 # type: (Any) -> str
177 try:
178 return repr(obj)
179 except Exception as exc:
180 return '<Unrepresentable {0!r}{1:#x}: {2!r} {3!r}>'.format(
181 type(obj), id(obj), exc, '\n'.join(traceback.format_stack()))
182
183
184 def _saferepr(o, maxlen=None, maxlevels=3, seen=None):
185 # type: (Any, int, int, Set) -> str
186 stack = deque([iter([o])])
187 for token, it in reprstream(stack, seen=seen, maxlevels=maxlevels):
188 if maxlen is not None and maxlen <= 0:
189 yield ', ...'
190 # move rest back to stack, so that we can include
191 # dangling parens.
192 stack.append(it)
193 break
194 if isinstance(token, _literal):
195 val = token.value
196 elif isinstance(token, _key):
197 val = saferepr(token.value, maxlen, maxlevels)
198 elif isinstance(token, _quoted):
199 val = _format_chars(token.value, maxlen)
200 else:
201 val = _safetext(truncate(token, maxlen))
202 yield val
203 if maxlen is not None:
204 maxlen -= len(val)
205 for rest1 in stack:
206 # maxlen exceeded, process any dangling parens.
207 for rest2 in rest1:
208 if isinstance(rest2, _literal) and not rest2.truncate:
209 yield rest2.value
210
211
212 def _reprseq(val, lit_start, lit_end, builtin_type, chainer):
213 # type: (Sequence, _literal, _literal, Any, Any) -> Tuple[Any, ...]
214 if type(val) is builtin_type: # noqa
215 return lit_start, lit_end, chainer(val)
216 return (
217 _literal('%s(%s' % (type(val).__name__, lit_start.value), False, +1),
218 _literal('%s)' % (lit_end.value,), False, -1),
219 chainer(val)
220 )
221
222
223 def reprstream(stack, seen=None, maxlevels=3, level=0, isinstance=isinstance):
224 """Streaming repr, yielding tokens."""
225 # type: (deque, Set, int, int, Callable) -> Iterator[Any]
226 seen = seen or set()
227 append = stack.append
228 popleft = stack.popleft
229 is_in_seen = seen.__contains__
230 discard_from_seen = seen.discard
231 add_to_seen = seen.add
232
233 while stack:
234 lit_start = lit_end = None
235 it = popleft()
236 for val in it:
237 orig = val
238 if isinstance(val, _dirty):
239 discard_from_seen(val.objid)
240 continue
241 elif isinstance(val, _literal):
242 level += val.direction
243 yield val, it
244 elif isinstance(val, _key):
245 yield val, it
246 elif isinstance(val, Decimal):
247 yield _repr(val), it
248 elif isinstance(val, safe_t):
249 yield text_t(val), it
250 elif isinstance(val, chars_t):
251 yield _quoted(val), it
252 elif isinstance(val, range_t): # pragma: no cover
253 yield _repr(val), it
254 else:
255 if isinstance(val, set_t):
256 if not val:
257 yield _repr_empty_set(val), it
258 continue
259 lit_start, lit_end, val = _reprseq(
260 val, LIT_SET_START, LIT_SET_END, set, _chainlist,
261 )
262 elif isinstance(val, tuple):
263 lit_start, lit_end, val = (
264 LIT_TUPLE_START,
265 LIT_TUPLE_END_SV if len(val) == 1 else LIT_TUPLE_END,
266 _chainlist(val))
267 elif isinstance(val, dict):
268 lit_start, lit_end, val = (
269 LIT_DICT_START, LIT_DICT_END, _chaindict(val))
270 elif isinstance(val, list):
271 lit_start, lit_end, val = (
272 LIT_LIST_START, LIT_LIST_END, _chainlist(val))
273 else:
274 # other type of object
275 yield _repr(val), it
276 continue
277
278 if maxlevels and level >= maxlevels:
279 yield '%s...%s' % (lit_start.value, lit_end.value), it
280 continue
281
282 objid = id(orig)
283 if is_in_seen(objid):
284 yield _recursion(orig), it
285 continue
286 add_to_seen(objid)
287
288 # Recurse into the new list/tuple/dict/etc by tacking
289 # the rest of our iterable onto the new it: this way
290 # it works similar to a linked list.
291 append(chain([lit_start], val, [_dirty(objid), lit_end], it))
292 break
293
[end of celery/utils/saferepr.py]
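
A short usage sketch for the module above (the exact output is indicative only): with ``maxlen`` set, ``saferepr`` truncates long containers and still emits the closing brackets, which is what the ``', ...'`` branch in ``_saferepr`` is for.

```python
from celery.utils.saferepr import saferepr

payload = {'numbers': list(range(1000)), 'nested': [[['deep'] * 5] * 5]}

full = saferepr(payload)               # unbounded: comparable to repr(), may be slow/huge
short = saferepr(payload, maxlen=80)   # bounded: truncated with ', ...' plus dangling parens

print(len(short) < len(full))
print(short)
```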
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
celery/celery
|
9b2a1720781930f8eed87bce2c3396e40a99529e
|
Joining to chains loses link_error of least chain
https://github.com/celery/celery/blob/cbbf481801079f0e2cfbfe464c9ecfe3ccc7a067/celery/canvas.py#L408-L414
Should be something like
```python
link_error_sigs = other._with_list_option('link_error')
sig.tasks.extend(
reduce(
lambda t, s: t.on_error(s), link_error_sigs, t.clone())
for t in other.tasks)
```
|
2017-08-31T07:44:24Z
|
<patch>
diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -395,23 +395,19 @@ def __or__(self, other):
other = maybe_unroll_group(other)
if isinstance(self, _chain):
# chain | group() -> chain
- sig = self.clone()
- sig.tasks.append(other)
- return sig
+ return _chain(seq_concat_item(
+ self.unchain_tasks(), other), app=self._app)
# task | group() -> chain
return _chain(self, other, app=self.app)
if not isinstance(self, _chain) and isinstance(other, _chain):
# task | chain -> chain
- return _chain(
- seq_concat_seq((self,), other.tasks), app=self._app)
+ return _chain(seq_concat_seq(
+ (self,), other.unchain_tasks()), app=self._app)
elif isinstance(other, _chain):
# chain | chain -> chain
- sig = self.clone()
- if isinstance(sig.tasks, tuple):
- sig.tasks = list(sig.tasks)
- sig.tasks.extend(other.tasks)
- return sig
+ return _chain(seq_concat_seq(
+ self.unchain_tasks(), other.unchain_tasks()), app=self._app)
elif isinstance(self, chord):
# chord(ONE, body) | other -> ONE | body | other
# chord with one header task is unecessary.
@@ -436,8 +432,8 @@ def __or__(self, other):
return sig
else:
# chain | task -> chain
- return _chain(
- seq_concat_item(self.tasks, other), app=self._app)
+ return _chain(seq_concat_item(
+ self.unchain_tasks(), other), app=self._app)
# task | task -> chain
return _chain(self, other, app=self._app)
return NotImplemented
@@ -557,6 +553,15 @@ def clone(self, *args, **kwargs):
]
return s
+ def unchain_tasks(self):
+    # Clone chain's tasks assigning signatures from link_error
+ # to each task
+ tasks = [t.clone() for t in self.tasks]
+ for sig in self.options.get('link_error', []):
+ for task in tasks:
+ task.link_error(sig)
+ return tasks
+
def apply_async(self, args=(), kwargs={}, **options):
# python is best at unpacking kwargs, so .run is here to do that.
app = self.app
</patch>
|
[]
|
[]
| ||||
numpy__numpy-8423
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ENH: in1d, but preserve shape of ar1
in1d takes two arrays, `ar1` and `ar2`, and returns a 1d array with the same number of elements as `ar1`. The logical extension would be a function that does the same thing but returns a (possibly multi-dimensional) array of the same shape as `ar1`. Effectively, it would be equivalent to this:
def in(ar1, ar2, **kwargs):
return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)
although it might be implemented differently.
</issue>
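
The snippet in the issue is not runnable as written because ``in`` is a Python keyword; a renamed, self-contained version of the same idea is sketched below (the name ``isin_like`` is purely illustrative and not a proposal for the final API).

```python
import numpy as np

def isin_like(ar1, ar2, **kwargs):
    """Membership test against ar2 that preserves ar1's shape."""
    ar1 = np.asarray(ar1)
    return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)

a = np.array([[1, 2, 3],
              [4, 5, 6]])
print(isin_like(a, [2, 5, 9]))
# Expected: a (2, 3) boolean array, True where a value of `a` occurs in the list.
```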
<code>
[start of README.md]
1 <div align="center">
2 <img src="http://www.numpy.org/_static/numpy_logo.png"><br>
3 </div>
4
5 -----------------
6 | **`Travis CI Status`** |
7 |-------------------|
8 [](https://travis-ci.org/numpy/numpy)|
9
10
11 NumPy is the fundamental package needed for scientific computing with Python.
12 This package contains:
13
14 * a powerful N-dimensional array object
15 * sophisticated (broadcasting) functions
16 * tools for integrating C/C++ and Fortran code
17 * useful linear algebra, Fourier transform, and random number capabilities.
18
19 It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
20
21 More information can be found at the website:
22
23 * http://www.numpy.org
24
25 After installation, tests can be run (if ``nose`` is installed) with:
26
27 python -c 'import numpy; numpy.test()'
28
29 The most current development version is always available from our
30 git repository:
31
32 * http://github.com/numpy/numpy
33
[end of README.md]
[start of numpy/doc/indexing.py]
1 """==============
2 Array indexing
3 ==============
4
5 Array indexing refers to any use of the square brackets ([]) to index
6 array values. There are many options to indexing, which give numpy
7 indexing great power, but with power comes some complexity and the
8 potential for confusion. This section is just an overview of the
9 various options and issues related to indexing. Aside from single
10 element indexing, the details on most of these options are to be
11 found in related sections.
12
13 Assignment vs referencing
14 =========================
15
16 Most of the following examples show the use of indexing when
17 referencing data in an array. The examples work just as well
18 when assigning to an array. See the section at the end for
19 specific examples and explanations on how assignments work.
20
21 Single element indexing
22 =======================
23
24 Single element indexing for a 1-D array is what one expects. It works
25 exactly like that for other standard Python sequences. It is 0-based,
26 and accepts negative indices for indexing from the end of the array. ::
27
28 >>> x = np.arange(10)
29 >>> x[2]
30 2
31 >>> x[-2]
32 8
33
34 Unlike lists and tuples, numpy arrays support multidimensional indexing
35 for multidimensional arrays. That means that it is not necessary to
36 separate each dimension's index into its own set of square brackets. ::
37
38 >>> x.shape = (2,5) # now x is 2-dimensional
39 >>> x[1,3]
40 8
41 >>> x[1,-1]
42 9
43
44 Note that if one indexes a multidimensional array with fewer indices
45 than dimensions, one gets a subdimensional array. For example: ::
46
47 >>> x[0]
48 array([0, 1, 2, 3, 4])
49
50 That is, each index specified selects the array corresponding to the
51 rest of the dimensions selected. In the above example, choosing 0
52 means that the remaining dimension of length 5 is being left unspecified,
53 and that what is returned is an array of that dimensionality and size.
54 It must be noted that the returned array is not a copy of the original,
55 but points to the same values in memory as does the original array.
56 In this case, the 1-D array at the first position (0) is returned.
57 So using a single index on the returned array, results in a single
58 element being returned. That is: ::
59
60 >>> x[0][2]
61 2
62
63 So note that ``x[0,2] == x[0][2]`` though the second case is more
64 inefficient as a new temporary array is created after the first index
65 that is subsequently indexed by 2.
66
67 Note to those used to IDL or Fortran memory order as it relates to
68 indexing. NumPy uses C-order indexing. That means that the last
69 index usually represents the most rapidly changing memory location,
70 unlike Fortran or IDL, where the first index represents the most
71 rapidly changing location in memory. This difference represents a
72 great potential for confusion.
73
74 Other indexing options
75 ======================
76
77 It is possible to slice and stride arrays to extract arrays of the
78 same number of dimensions, but of different sizes than the original.
79 The slicing and striding works exactly the same way it does for lists
80 and tuples except that they can be applied to multiple dimensions as
81 well. A few examples illustrate this best: ::
82
83 >>> x = np.arange(10)
84 >>> x[2:5]
85 array([2, 3, 4])
86 >>> x[:-7]
87 array([0, 1, 2])
88 >>> x[1:7:2]
89 array([1, 3, 5])
90 >>> y = np.arange(35).reshape(5,7)
91 >>> y[1:5:2,::3]
92 array([[ 7, 10, 13],
93 [21, 24, 27]])
94
95 Note that slices of arrays do not copy the internal array data but
96 only produce new views of the original data.
97
98 It is possible to index arrays with other arrays for the purposes of
99 selecting lists of values out of arrays into new arrays. There are
100 two different ways of accomplishing this. One uses one or more arrays
101 of index values. The other involves giving a boolean array of the proper
102 shape to indicate the values to be selected. Index arrays are a very
103 powerful tool that allow one to avoid looping over individual elements in
104 arrays and thus greatly improve performance.
105
106 It is possible to use special features to effectively increase the
107 number of dimensions in an array through indexing so the resulting
108 array acquires the shape needed for use in an expression or with a
109 specific function.
110
111 Index arrays
112 ============
113
114 NumPy arrays may be indexed with other arrays (or any other sequence-
115 like object that can be converted to an array, such as lists, with the
116 exception of tuples; see the end of this document for why this is). The
117 use of index arrays ranges from simple, straightforward cases to
118 complex, hard-to-understand cases. For all cases of index arrays, what
119 is returned is a copy of the original data, not a view as one gets for
120 slices.
121
122 Index arrays must be of integer type. Each value in the array indicates
123 which value in the array to use in place of the index. To illustrate: ::
124
125 >>> x = np.arange(10,1,-1)
126 >>> x
127 array([10, 9, 8, 7, 6, 5, 4, 3, 2])
128 >>> x[np.array([3, 3, 1, 8])]
129 array([7, 7, 9, 2])
130
131
132 The index array consisting of the values 3, 3, 1 and 8 correspondingly
133 create an array of length 4 (same as the index array) where each index
134 is replaced by the value the index array has in the array being indexed.
135
136 Negative values are permitted and work as they do with single indices
137 or slices: ::
138
139 >>> x[np.array([3,3,-3,8])]
140 array([7, 7, 4, 2])
141
142 It is an error to have index values out of bounds: ::
143
144 >>> x[np.array([3, 3, 20, 8])]
145 <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
146
147 Generally speaking, what is returned when index arrays are used is
148 an array with the same shape as the index array, but with the type
149 and values of the array being indexed. As an example, we can use a
150 multidimensional index array instead: ::
151
152 >>> x[np.array([[1,1],[2,3]])]
153 array([[9, 9],
154 [8, 7]])
155
156 Indexing Multi-dimensional arrays
157 =================================
158
159 Things become more complex when multidimensional arrays are indexed,
160 particularly with multidimensional index arrays. These tend to be
161 more unusual uses, but they are permitted, and they are useful for some
162 problems. We'll start with the simplest multidimensional case (using
163 the array y from the previous examples): ::
164
165 >>> y[np.array([0,2,4]), np.array([0,1,2])]
166 array([ 0, 15, 30])
167
168 In this case, if the index arrays have a matching shape, and there is
169 an index array for each dimension of the array being indexed, the
170 resultant array has the same shape as the index arrays, and the values
171 correspond to the index set for each position in the index arrays. In
172 this example, the first index value is 0 for both index arrays, and
173 thus the first value of the resultant array is y[0,0]. The next value
174 is y[2,1], and the last is y[4,2].
175
176 If the index arrays do not have the same shape, there is an attempt to
177 broadcast them to the same shape. If they cannot be broadcast to the
178 same shape, an exception is raised: ::
179
180 >>> y[np.array([0,2,4]), np.array([0,1])]
181 <type 'exceptions.ValueError'>: shape mismatch: objects cannot be
182 broadcast to a single shape
183
184 The broadcasting mechanism permits index arrays to be combined with
185 scalars for other indices. The effect is that the scalar value is used
186 for all the corresponding values of the index arrays: ::
187
188 >>> y[np.array([0,2,4]), 1]
189 array([ 1, 15, 29])
190
191 Jumping to the next level of complexity, it is possible to only
192 partially index an array with index arrays. It takes a bit of thought
193 to understand what happens in such cases. For example if we just use
194 one index array with y: ::
195
196 >>> y[np.array([0,2,4])]
197 array([[ 0, 1, 2, 3, 4, 5, 6],
198 [14, 15, 16, 17, 18, 19, 20],
199 [28, 29, 30, 31, 32, 33, 34]])
200
201 What results is the construction of a new array where each value of
202 the index array selects one row from the array being indexed and the
203 resultant array has the resulting shape (number of index elements,
204 size of row).
205
206 An example of where this may be useful is for a color lookup table
207 where we want to map the values of an image into RGB triples for
208 display. The lookup table could have a shape (nlookup, 3). Indexing
209 such an array with an image with shape (ny, nx) with dtype=np.uint8
210 (or any integer type so long as values are with the bounds of the
211 lookup table) will result in an array of shape (ny, nx, 3) where a
212 triple of RGB values is associated with each pixel location.
213
214 In general, the shape of the resultant array will be the concatenation
215 of the shape of the index array (or the shape that all the index arrays
216 were broadcast to) with the shape of any unused dimensions (those not
217 indexed) in the array being indexed.
218
219 Boolean or "mask" index arrays
220 ==============================
221
222 Boolean arrays used as indices are treated in a different manner
223 entirely than index arrays. Boolean arrays must be of the same shape
224 as the initial dimensions of the array being indexed. In the
225 most straightforward case, the boolean array has the same shape: ::
226
227 >>> b = y>20
228 >>> y[b]
229 array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
230
231 Unlike in the case of integer index arrays, in the boolean case, the
232 result is a 1-D array containing all the elements in the indexed array
233 corresponding to all the true elements in the boolean array. The
234 elements in the indexed array are always iterated and returned in
235 :term:`row-major` (C-style) order. The result is also identical to
236 ``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy
237 of the data, not a view as one gets with slices.
238
239 The result will be multidimensional if y has more dimensions than b.
240 For example: ::
241
242 >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
243 array([False, False, False, True, True], dtype=bool)
244 >>> y[b[:,5]]
245 array([[21, 22, 23, 24, 25, 26, 27],
246 [28, 29, 30, 31, 32, 33, 34]])
247
248 Here the 4th and 5th rows are selected from the indexed array and
249 combined to make a 2-D array.
250
251 In general, when the boolean array has fewer dimensions than the array
252 being indexed, this is equivalent to y[b, ...], which means
253 y is indexed by b followed by as many : as are needed to fill
254 out the rank of y.
255 Thus the shape of the result is one dimension containing the number
256 of True elements of the boolean array, followed by the remaining
257 dimensions of the array being indexed.
258
259 For example, using a 2-D boolean array of shape (2,3)
260 with four True elements to select rows from a 3-D array of shape
261 (2,3,5) results in a 2-D result of shape (4,5): ::
262
263 >>> x = np.arange(30).reshape(2,3,5)
264 >>> x
265 array([[[ 0, 1, 2, 3, 4],
266 [ 5, 6, 7, 8, 9],
267 [10, 11, 12, 13, 14]],
268 [[15, 16, 17, 18, 19],
269 [20, 21, 22, 23, 24],
270 [25, 26, 27, 28, 29]]])
271 >>> b = np.array([[True, True, False], [False, True, True]])
272 >>> x[b]
273 array([[ 0, 1, 2, 3, 4],
274 [ 5, 6, 7, 8, 9],
275 [20, 21, 22, 23, 24],
276 [25, 26, 27, 28, 29]])
277
278 For further details, consult the numpy reference documentation on array indexing.
279
280 Combining index arrays with slices
281 ==================================
282
283 Index arrays may be combined with slices. For example: ::
284
285 >>> y[np.array([0,2,4]),1:3]
286 array([[ 1, 2],
287 [15, 16],
288 [29, 30]])
289
290 In effect, the slice is converted to an index array
291 np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
292 to produce a resultant array of shape (3,2).
293
294 Likewise, slicing can be combined with broadcasted boolean indices: ::
295
296 >>> y[b[:,5],1:3]
297 array([[22, 23],
298 [29, 30]])
299
300 Structural indexing tools
301 =========================
302
303 To facilitate easy matching of array shapes with expressions and in
304 assignments, the np.newaxis object can be used within array indices
305 to add new dimensions with a size of 1. For example: ::
306
307 >>> y.shape
308 (5, 7)
309 >>> y[:,np.newaxis,:].shape
310 (5, 1, 7)
311
312 Note that there are no new elements in the array, just that the
313 dimensionality is increased. This can be handy to combine two
314 arrays in a way that otherwise would require explicit reshaping
315 operations. For example: ::
316
317 >>> x = np.arange(5)
318 >>> x[:,np.newaxis] + x[np.newaxis,:]
319 array([[0, 1, 2, 3, 4],
320 [1, 2, 3, 4, 5],
321 [2, 3, 4, 5, 6],
322 [3, 4, 5, 6, 7],
323 [4, 5, 6, 7, 8]])
324
325 The ellipsis syntax may be used to indicate selecting in full any
326 remaining unspecified dimensions. For example: ::
327
328 >>> z = np.arange(81).reshape(3,3,3,3)
329 >>> z[1,...,2]
330 array([[29, 32, 35],
331 [38, 41, 44],
332 [47, 50, 53]])
333
334 This is equivalent to: ::
335
336 >>> z[1,:,:,2]
337 array([[29, 32, 35],
338 [38, 41, 44],
339 [47, 50, 53]])
340
341 Assigning values to indexed arrays
342 ==================================
343
344 As mentioned, one can select a subset of an array to assign to using
345 a single index, slices, and index and mask arrays. The value being
346 assigned to the indexed array must be shape consistent (the same shape
347 or broadcastable to the shape the index produces). For example, it is
348 permitted to assign a constant to a slice: ::
349
350 >>> x = np.arange(10)
351 >>> x[2:7] = 1
352
353 or an array of the right size: ::
354
355 >>> x[2:7] = np.arange(5)
356
357 Note that assignments may silently convert values when assigning
358 higher types to lower types (like floats to ints), or even raise
359 exceptions (assigning complex to floats or ints): ::
360
361 >>> x[1] = 1.2
362 >>> x[1]
363 1
364 >>> x[1] = 1.2j
365 <type 'exceptions.TypeError'>: can't convert complex to long; use
366 long(abs(z))
367
368
369 Unlike some of the references (such as array and mask indices)
370 assignments are always made to the original data in the array
371 (indeed, nothing else would make sense!). Note though, that some
372 actions may not work as one may naively expect. This particular
373 example is often surprising to people: ::
374
375 >>> x = np.arange(0, 50, 10)
376 >>> x
377 array([ 0, 10, 20, 30, 40])
378 >>> x[np.array([1, 1, 3, 1])] += 1
379 >>> x
380 array([ 0, 11, 20, 31, 40])
381
382 People often expect that the 1st location will be incremented by 3.
383 In fact, it will only be incremented by 1. The reason is that
384 a new array is extracted from the original (as a temporary) containing
385 the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
386 and then the temporary is assigned back to the original array. Thus
387 the value x[1]+1 is assigned to x[1] three times,
388 rather than x[1] being incremented 3 times.
389
390 Dealing with variable numbers of indices within programs
391 ========================================================
392
393 The index syntax is very powerful but limiting when dealing with
394 a variable number of indices. For example, if you want to write
395 a function that can handle arguments with various numbers of
396 dimensions without having to write special case code for each
397 number of possible dimensions, how can that be done? If one
398 supplies to the index a tuple, the tuple will be interpreted
399 as a list of indices. For example (using the previous definition
400 for the array z): ::
401
402 >>> indices = (1,1,1,1)
403 >>> z[indices]
404 40
405
406 So one can use code to construct tuples of any number of indices
407 and then use these within an index.
408
409 Slices can be specified within programs by using the slice() function
410 in Python. For example: ::
411
412 >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
413 >>> z[indices]
414 array([39, 40])
415
416 Likewise, ellipsis can be specified by code by using the Ellipsis
417 object: ::
418
419 >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
420 >>> z[indices]
421 array([[28, 31, 34],
422 [37, 40, 43],
423 [46, 49, 52]])
424
425 For this reason it is possible to use the output from the np.where()
426 function directly as an index since it always returns a tuple of index
427 arrays.
428
429 Because of the special treatment of tuples, they are not automatically
430 converted to an array as a list would be. As an example: ::
431
432 >>> z[[1,1,1,1]] # produces a large array
433 array([[[[27, 28, 29],
434 [30, 31, 32], ...
435 >>> z[(1,1,1,1)] # returns a single value
436 40
437
438 """
439 from __future__ import division, absolute_import, print_function
440
[end of numpy/doc/indexing.py]
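The indexing guide above states that the tuple returned by ``np.where()`` can be used directly as an index, but does not show that pattern. A minimal standalone sketch, kept separate from the repository file above (the array ``y`` mirrors the 5x7 array used throughout the guide):

```python
import numpy as np

y = np.arange(35).reshape(5, 7)

# np.where() returns a tuple of index arrays, one per dimension, so its
# output can be passed straight back in as an index.
locations = np.where(y > 30)     # (array([4, 4, 4, 4]), array([3, 4, 5, 6]))
print(y[locations])              # [31 32 33 34]

# Indexing with the boolean mask itself selects the same elements.
print(y[y > 30])                 # [31 32 33 34]
```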
[start of numpy/lib/arraysetops.py]
1 """
2 Set operations for 1D numeric arrays based on sorting.
3
4 :Contains:
5 ediff1d,
6 unique,
7 intersect1d,
8 setxor1d,
9 in1d,
10 union1d,
11 setdiff1d
12
13 :Notes:
14
15 For floating point arrays, inaccurate results may appear due to usual round-off
16 and floating point comparison issues.
17
18 Speed could be gained in some operations by an implementation of
19 sort() that can directly provide the permutation vectors, thus
20 avoiding calls to argsort().
21
22 To do: Optionally return indices analogously to unique for all functions.
23
24 :Author: Robert Cimrman
25
26 """
27 from __future__ import division, absolute_import, print_function
28
29 import numpy as np
30
31
32 __all__ = [
33 'ediff1d', 'intersect1d', 'setxor1d', 'union1d', 'setdiff1d', 'unique',
34 'in1d'
35 ]
36
37
38 def ediff1d(ary, to_end=None, to_begin=None):
39 """
40 The differences between consecutive elements of an array.
41
42 Parameters
43 ----------
44 ary : array_like
45 If necessary, will be flattened before the differences are taken.
46 to_end : array_like, optional
47 Number(s) to append at the end of the returned differences.
48 to_begin : array_like, optional
49 Number(s) to prepend at the beginning of the returned differences.
50
51 Returns
52 -------
53 ediff1d : ndarray
54 The differences. Loosely, this is ``ary.flat[1:] - ary.flat[:-1]``.
55
56 See Also
57 --------
58 diff, gradient
59
60 Notes
61 -----
62 When applied to masked arrays, this function drops the mask information
63 if the `to_begin` and/or `to_end` parameters are used.
64
65 Examples
66 --------
67 >>> x = np.array([1, 2, 4, 7, 0])
68 >>> np.ediff1d(x)
69 array([ 1, 2, 3, -7])
70
71 >>> np.ediff1d(x, to_begin=-99, to_end=np.array([88, 99]))
72 array([-99, 1, 2, 3, -7, 88, 99])
73
74 The returned array is always 1D.
75
76 >>> y = [[1, 2, 4], [1, 6, 24]]
77 >>> np.ediff1d(y)
78 array([ 1, 2, -3, 5, 18])
79
80 """
81 # force a 1d array
82 ary = np.asanyarray(ary).ravel()
83
84 # fast track default case
85 if to_begin is None and to_end is None:
86 return ary[1:] - ary[:-1]
87
88 if to_begin is None:
89 l_begin = 0
90 else:
91 to_begin = np.asanyarray(to_begin).ravel()
92 l_begin = len(to_begin)
93
94 if to_end is None:
95 l_end = 0
96 else:
97 to_end = np.asanyarray(to_end).ravel()
98 l_end = len(to_end)
99
100 # do the calculation in place and copy to_begin and to_end
101 l_diff = max(len(ary) - 1, 0)
102 result = np.empty(l_diff + l_begin + l_end, dtype=ary.dtype)
103 result = ary.__array_wrap__(result)
104 if l_begin > 0:
105 result[:l_begin] = to_begin
106 if l_end > 0:
107 result[l_begin + l_diff:] = to_end
108 np.subtract(ary[1:], ary[:-1], result[l_begin:l_begin + l_diff])
109 return result
110
111
112 def unique(ar, return_index=False, return_inverse=False,
113 return_counts=False, axis=None):
114 """
115 Find the unique elements of an array.
116
117 Returns the sorted unique elements of an array. There are three optional
118 outputs in addition to the unique elements: the indices of the input array
119 that give the unique values, the indices of the unique array that
120 reconstruct the input array, and the number of times each unique value
121 comes up in the input array.
122
123 Parameters
124 ----------
125 ar : array_like
126 Input array. Unless `axis` is specified, this will be flattened if it
127 is not already 1-D.
128 return_index : bool, optional
129 If True, also return the indices of `ar` (along the specified axis,
130 if provided, or in the flattened array) that result in the unique array.
131 return_inverse : bool, optional
132 If True, also return the indices of the unique array (for the specified
133 axis, if provided) that can be used to reconstruct `ar`.
134 return_counts : bool, optional
135 If True, also return the number of times each unique item appears
136 in `ar`.
137 .. versionadded:: 1.9.0
138 axis : int or None, optional
139 The axis to operate on. If None, `ar` will be flattened beforehand.
140 Otherwise, duplicate items will be removed along the provided axis,
141 with all the other axes belonging to each of the unique elements.
142 Object arrays or structured arrays that contain objects are not
143 supported if the `axis` kwarg is used.
144 .. versionadded:: 1.13.0
145
146
147
148 Returns
149 -------
150 unique : ndarray
151 The sorted unique values.
152 unique_indices : ndarray, optional
153 The indices of the first occurrences of the unique values in the
154 original array. Only provided if `return_index` is True.
155 unique_inverse : ndarray, optional
156 The indices to reconstruct the original array from the
157 unique array. Only provided if `return_inverse` is True.
158 unique_counts : ndarray, optional
159 The number of times each of the unique values comes up in the
160 original array. Only provided if `return_counts` is True.
161 .. versionadded:: 1.9.0
162
163 See Also
164 --------
165 numpy.lib.arraysetops : Module with a number of other functions for
166 performing set operations on arrays.
167
168 Examples
169 --------
170 >>> np.unique([1, 1, 2, 2, 3, 3])
171 array([1, 2, 3])
172 >>> a = np.array([[1, 1], [2, 3]])
173 >>> np.unique(a)
174 array([1, 2, 3])
175
176 Return the unique rows of a 2D array
177
178 >>> a = np.array([[1, 0, 0], [1, 0, 0], [2, 3, 4]])
179 >>> np.unique(a, axis=0)
180 array([[1, 0, 0], [2, 3, 4]])
181
182 Return the indices of the original array that give the unique values:
183
184 >>> a = np.array(['a', 'b', 'b', 'c', 'a'])
185 >>> u, indices = np.unique(a, return_index=True)
186 >>> u
187 array(['a', 'b', 'c'],
188 dtype='|S1')
189 >>> indices
190 array([0, 1, 3])
191 >>> a[indices]
192 array(['a', 'b', 'c'],
193 dtype='|S1')
194
195 Reconstruct the input array from the unique values:
196
197 >>> a = np.array([1, 2, 6, 4, 2, 3, 2])
198 >>> u, indices = np.unique(a, return_inverse=True)
199 >>> u
200 array([1, 2, 3, 4, 6])
201 >>> indices
202 array([0, 1, 4, 3, 1, 2, 1])
203 >>> u[indices]
204 array([1, 2, 6, 4, 2, 3, 2])
205
206 """
207 ar = np.asanyarray(ar)
208 if axis is None:
209 return _unique1d(ar, return_index, return_inverse, return_counts)
210 if not (-ar.ndim <= axis < ar.ndim):
211 raise ValueError('Invalid axis kwarg specified for unique')
212
213 ar = np.swapaxes(ar, axis, 0)
214 orig_shape, orig_dtype = ar.shape, ar.dtype
215 # Must reshape to a contiguous 2D array for this to work...
216 ar = ar.reshape(orig_shape[0], -1)
217 ar = np.ascontiguousarray(ar)
218
219 if ar.dtype.char in (np.typecodes['AllInteger'] +
220 np.typecodes['Datetime'] + 'S'):
221 # Optimization: Creating a view of your data with a np.void data type of
222 # size the number of bytes in a full row. Handles any type where items
223 # have a unique binary representation, i.e. 0 is only 0, not +0 and -0.
224 dtype = np.dtype((np.void, ar.dtype.itemsize * ar.shape[1]))
225 else:
226 dtype = [('f{i}'.format(i=i), ar.dtype) for i in range(ar.shape[1])]
227
228 try:
229 consolidated = ar.view(dtype)
230 except TypeError:
231 # There's no good way to do this for object arrays, etc...
232 msg = 'The axis argument to unique is not supported for dtype {dt}'
233 raise TypeError(msg.format(dt=ar.dtype))
234
235 def reshape_uniq(uniq):
236 uniq = uniq.view(orig_dtype)
237 uniq = uniq.reshape(-1, *orig_shape[1:])
238 uniq = np.swapaxes(uniq, 0, axis)
239 return uniq
240
241 output = _unique1d(consolidated, return_index,
242 return_inverse, return_counts)
243 if not (return_index or return_inverse or return_counts):
244 return reshape_uniq(output)
245 else:
246 uniq = reshape_uniq(output[0])
247 return (uniq,) + output[1:]
248
249 def _unique1d(ar, return_index=False, return_inverse=False,
250 return_counts=False):
251 """
252 Find the unique elements of an array, ignoring shape.
253 """
254 ar = np.asanyarray(ar).flatten()
255
256 optional_indices = return_index or return_inverse
257 optional_returns = optional_indices or return_counts
258
259 if ar.size == 0:
260 if not optional_returns:
261 ret = ar
262 else:
263 ret = (ar,)
264 if return_index:
265 ret += (np.empty(0, np.bool),)
266 if return_inverse:
267 ret += (np.empty(0, np.bool),)
268 if return_counts:
269 ret += (np.empty(0, np.intp),)
270 return ret
271
272 if optional_indices:
273 perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
274 aux = ar[perm]
275 else:
276 ar.sort()
277 aux = ar
278 flag = np.concatenate(([True], aux[1:] != aux[:-1]))
279
280 if not optional_returns:
281 ret = aux[flag]
282 else:
283 ret = (aux[flag],)
284 if return_index:
285 ret += (perm[flag],)
286 if return_inverse:
287 iflag = np.cumsum(flag) - 1
288 inv_idx = np.empty(ar.shape, dtype=np.intp)
289 inv_idx[perm] = iflag
290 ret += (inv_idx,)
291 if return_counts:
292 idx = np.concatenate(np.nonzero(flag) + ([ar.size],))
293 ret += (np.diff(idx),)
294 return ret
295
296 def intersect1d(ar1, ar2, assume_unique=False):
297 """
298 Find the intersection of two arrays.
299
300 Return the sorted, unique values that are in both of the input arrays.
301
302 Parameters
303 ----------
304 ar1, ar2 : array_like
305 Input arrays.
306 assume_unique : bool
307 If True, the input arrays are both assumed to be unique, which
308 can speed up the calculation. Default is False.
309
310 Returns
311 -------
312 intersect1d : ndarray
313 Sorted 1D array of common and unique elements.
314
315 See Also
316 --------
317 numpy.lib.arraysetops : Module with a number of other functions for
318 performing set operations on arrays.
319
320 Examples
321 --------
322 >>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1])
323 array([1, 3])
324
325 To intersect more than two arrays, use functools.reduce:
326
327 >>> from functools import reduce
328 >>> reduce(np.intersect1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2]))
329 array([3])
330 """
331 if not assume_unique:
332 # Might be faster than unique( intersect1d( ar1, ar2 ) )?
333 ar1 = unique(ar1)
334 ar2 = unique(ar2)
335 aux = np.concatenate((ar1, ar2))
336 aux.sort()
337 return aux[:-1][aux[1:] == aux[:-1]]
338
339 def setxor1d(ar1, ar2, assume_unique=False):
340 """
341 Find the set exclusive-or of two arrays.
342
343 Return the sorted, unique values that are in only one (not both) of the
344 input arrays.
345
346 Parameters
347 ----------
348 ar1, ar2 : array_like
349 Input arrays.
350 assume_unique : bool
351 If True, the input arrays are both assumed to be unique, which
352 can speed up the calculation. Default is False.
353
354 Returns
355 -------
356 setxor1d : ndarray
357 Sorted 1D array of unique values that are in only one of the input
358 arrays.
359
360 Examples
361 --------
362 >>> a = np.array([1, 2, 3, 2, 4])
363 >>> b = np.array([2, 3, 5, 7, 5])
364 >>> np.setxor1d(a,b)
365 array([1, 4, 5, 7])
366
367 """
368 if not assume_unique:
369 ar1 = unique(ar1)
370 ar2 = unique(ar2)
371
372 aux = np.concatenate((ar1, ar2))
373 if aux.size == 0:
374 return aux
375
376 aux.sort()
377 # flag = ediff1d( aux, to_end = 1, to_begin = 1 ) == 0
378 flag = np.concatenate(([True], aux[1:] != aux[:-1], [True]))
379 # flag2 = ediff1d( flag ) == 0
380 flag2 = flag[1:] == flag[:-1]
381 return aux[flag2]
382
383 def in1d(ar1, ar2, assume_unique=False, invert=False):
384 """
385 Test whether each element of a 1-D array is also present in a second array.
386
387 Returns a boolean array the same length as `ar1` that is True
388 where an element of `ar1` is in `ar2` and False otherwise.
389
390 Parameters
391 ----------
392 ar1 : (M,) array_like
393 Input array.
394 ar2 : array_like
395 The values against which to test each value of `ar1`.
396 assume_unique : bool, optional
397 If True, the input arrays are both assumed to be unique, which
398 can speed up the calculation. Default is False.
399 invert : bool, optional
400 If True, the values in the returned array are inverted (that is,
401 False where an element of `ar1` is in `ar2` and True otherwise).
402 Default is False. ``np.in1d(a, b, invert=True)`` is equivalent
403 to (but is faster than) ``np.invert(in1d(a, b))``.
404
405 .. versionadded:: 1.8.0
406
407 Returns
408 -------
409 in1d : (M,) ndarray, bool
410 The values `ar1[in1d]` are in `ar2`.
411
412 See Also
413 --------
414 numpy.lib.arraysetops : Module with a number of other functions for
415 performing set operations on arrays.
416
417 Notes
418 -----
419 `in1d` can be considered as an element-wise function version of the
420 python keyword `in`, for 1-D sequences. ``in1d(a, b)`` is roughly
421 equivalent to ``np.array([item in b for item in a])``.
422 However, this idea fails if `ar2` is a set, or similar (non-sequence)
423 container: As ``ar2`` is converted to an array, in those cases
424 ``asarray(ar2)`` is an object array rather than the expected array of
425 contained values.
426
427 .. versionadded:: 1.4.0
428
429 Examples
430 --------
431 >>> test = np.array([0, 1, 2, 5, 0])
432 >>> states = [0, 2]
433 >>> mask = np.in1d(test, states)
434 >>> mask
435 array([ True, False, True, False, True], dtype=bool)
436 >>> test[mask]
437 array([0, 2, 0])
438 >>> mask = np.in1d(test, states, invert=True)
439 >>> mask
440 array([False, True, False, True, False], dtype=bool)
441 >>> test[mask]
442 array([1, 5])
443 """
444 # Ravel both arrays, behavior for the first array could be different
445 ar1 = np.asarray(ar1).ravel()
446 ar2 = np.asarray(ar2).ravel()
447
448 # This code is significantly faster when the condition is satisfied.
449 if len(ar2) < 10 * len(ar1) ** 0.145:
450 if invert:
451 mask = np.ones(len(ar1), dtype=np.bool)
452 for a in ar2:
453 mask &= (ar1 != a)
454 else:
455 mask = np.zeros(len(ar1), dtype=np.bool)
456 for a in ar2:
457 mask |= (ar1 == a)
458 return mask
459
460 # Otherwise use sorting
461 if not assume_unique:
462 ar1, rev_idx = np.unique(ar1, return_inverse=True)
463 ar2 = np.unique(ar2)
464
465 ar = np.concatenate((ar1, ar2))
466 # We need this to be a stable sort, so always use 'mergesort'
467 # here. The values from the first array should always come before
468 # the values from the second array.
469 order = ar.argsort(kind='mergesort')
470 sar = ar[order]
471 if invert:
472 bool_ar = (sar[1:] != sar[:-1])
473 else:
474 bool_ar = (sar[1:] == sar[:-1])
475 flag = np.concatenate((bool_ar, [invert]))
476 ret = np.empty(ar.shape, dtype=bool)
477 ret[order] = flag
478
479 if assume_unique:
480 return ret[:len(ar1)]
481 else:
482 return ret[rev_idx]
483
484 def union1d(ar1, ar2):
485 """
486 Find the union of two arrays.
487
488 Return the unique, sorted array of values that are in either of the two
489 input arrays.
490
491 Parameters
492 ----------
493 ar1, ar2 : array_like
494 Input arrays. They are flattened if they are not already 1D.
495
496 Returns
497 -------
498 union1d : ndarray
499 Unique, sorted union of the input arrays.
500
501 See Also
502 --------
503 numpy.lib.arraysetops : Module with a number of other functions for
504 performing set operations on arrays.
505
506 Examples
507 --------
508 >>> np.union1d([-1, 0, 1], [-2, 0, 2])
509 array([-2, -1, 0, 1, 2])
510
511 To find the union of more than two arrays, use functools.reduce:
512
513 >>> from functools import reduce
514 >>> reduce(np.union1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2]))
515 array([1, 2, 3, 4, 6])
516 """
517 return unique(np.concatenate((ar1, ar2)))
518
519 def setdiff1d(ar1, ar2, assume_unique=False):
520 """
521 Find the set difference of two arrays.
522
523 Return the sorted, unique values in `ar1` that are not in `ar2`.
524
525 Parameters
526 ----------
527 ar1 : array_like
528 Input array.
529 ar2 : array_like
530 Input comparison array.
531 assume_unique : bool
532 If True, the input arrays are both assumed to be unique, which
533 can speed up the calculation. Default is False.
534
535 Returns
536 -------
537 setdiff1d : ndarray
538 Sorted 1D array of values in `ar1` that are not in `ar2`.
539
540 See Also
541 --------
542 numpy.lib.arraysetops : Module with a number of other functions for
543 performing set operations on arrays.
544
545 Examples
546 --------
547 >>> a = np.array([1, 2, 3, 2, 4, 1])
548 >>> b = np.array([3, 4, 5, 6])
549 >>> np.setdiff1d(a, b)
550 array([1, 2])
551
552 """
553 if assume_unique:
554 ar1 = np.asarray(ar1).ravel()
555 else:
556 ar1 = unique(ar1)
557 ar2 = unique(ar2)
558 return ar1[in1d(ar1, ar2, assume_unique=True, invert=True)]
559
[end of numpy/lib/arraysetops.py]
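The ``in1d`` notes above warn that passing a set (or other non-sequence container) as ``ar2`` silently misbehaves, but do not include an example. A short standalone sketch of that gotcha:

```python
import numpy as np

test = np.array([0, 1, 2, 5, 0])

print(np.in1d(test, [0, 2]))         # [ True False  True False  True]

# np.asarray({0, 2}) produces an object array containing the set itself,
# so no element of `test` compares equal to it and nothing matches.
print(np.in1d(test, {0, 2}))         # [False False False False False]

# Casting the set to a list restores the intended behaviour.
print(np.in1d(test, list({0, 2})))   # [ True False  True False  True]
```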
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
d5657b9e29a8e00ad8e074bc32c15dec220d766f
|
ENH: in1d, but preserve shape of ar1
in1d takes two arrays, `ar1` and `ar2`, and returns a 1d array with the same number of elements as `ar1`. The logical extension would be a function that does the same thing but returns a (possibly multi-dimensional) array of the same shape as `ar1`. Effectively, it would be equivalent to this:
def in(ar1, ar2, **kwargs):
    return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)
although it might be implemented differently.
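For concreteness, a small usage sketch of the reshape-based equivalent described above; the helper name ``isin_like`` is made up for illustration, and only ``np.in1d`` is assumed from NumPy:

```python
import numpy as np

def isin_like(ar1, ar2, **kwargs):
    # Flatten, test membership, then restore the shape of the first
    # argument, exactly as in the snippet above.
    ar1 = np.asarray(ar1)
    return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)

element = 2 * np.arange(4).reshape(2, 2)   # [[0, 2], [4, 6]]
print(np.in1d(element, [1, 2, 4, 8]))      # 1-D: [False  True  True False]
print(isin_like(element, [1, 2, 4, 8]))    # 2-D: [[False  True]
                                           #       [ True False]]
```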
|
There already is [a comment](https://github.com/numpy/numpy/blob/master/numpy/lib/arraysetops.py#L444) in the code pointing in that direction.
I don't think it is viable to change the behavior of the function for backwards compatibility reasons, although it would be possible to add a `keep_shape=False` kwarg to trigger the proposed new behavior.
If you want to see this happen, your best bet is to do it yourself. It should be relatively straightforward, as this function is entirely coded in Python, see the link above. Before getting down to that, changes in the user interface like this one would require discussion and approval on the mailing list. Can you also send an e-mail there with the proposed change?
I figured there would need to be discussion beforehand. Should I email numpy-discussion or scipy-dev?
numpy-discussion is the place for this, please do!
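Purely as an illustration of the backward-compatible alternative mentioned above (a ``keep_shape`` keyword on the existing function rather than a new one), a rough sketch; the parameter name comes from the comment and everything else is assumed:

```python
import numpy as np

def in1d_keep_shape(ar1, ar2, assume_unique=False, invert=False,
                    keep_shape=False):
    # Behaves exactly like np.in1d unless keep_shape=True, in which case
    # the boolean result is reshaped to match ar1.
    ar1 = np.asarray(ar1)
    flat = np.in1d(ar1, ar2, assume_unique=assume_unique, invert=invert)
    return flat.reshape(ar1.shape) if keep_shape else flat
```

The patch that follows takes the other route, adding a separate ``isin`` function rather than a new keyword.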
|
2016-12-28T00:39:09Z
|
<patch>
diff --git a/numpy/add_newdocs.py b/numpy/add_newdocs.py
--- a/numpy/add_newdocs.py
+++ b/numpy/add_newdocs.py
@@ -1500,7 +1500,7 @@ def luf(lamdaexpr, *args, **kwargs):
Find the indices of elements of `x` that are in `goodvalues`.
>>> goodvalues = [3, 4, 7]
- >>> ix = np.in1d(x.ravel(), goodvalues).reshape(x.shape)
+ >>> ix = np.isin(x, goodvalues)
>>> ix
array([[False, False, False],
[ True, True, False],
diff --git a/numpy/lib/arraysetops.py b/numpy/lib/arraysetops.py
--- a/numpy/lib/arraysetops.py
+++ b/numpy/lib/arraysetops.py
@@ -1,9 +1,10 @@
"""
-Set operations for 1D numeric arrays based on sorting.
+Set operations for arrays based on sorting.
:Contains:
- ediff1d,
unique,
+ isin,
+ ediff1d,
intersect1d,
setxor1d,
in1d,
@@ -31,7 +32,7 @@
__all__ = [
'ediff1d', 'intersect1d', 'setxor1d', 'union1d', 'setdiff1d', 'unique',
- 'in1d'
+ 'in1d', 'isin'
]
@@ -380,6 +381,7 @@ def setxor1d(ar1, ar2, assume_unique=False):
flag2 = flag[1:] == flag[:-1]
return aux[flag2]
+
def in1d(ar1, ar2, assume_unique=False, invert=False):
"""
Test whether each element of a 1-D array is also present in a second array.
@@ -387,6 +389,8 @@ def in1d(ar1, ar2, assume_unique=False, invert=False):
Returns a boolean array the same length as `ar1` that is True
where an element of `ar1` is in `ar2` and False otherwise.
+ We recommend using :func:`isin` instead of `in1d` for new code.
+
Parameters
----------
ar1 : (M,) array_like
@@ -411,6 +415,8 @@ def in1d(ar1, ar2, assume_unique=False, invert=False):
See Also
--------
+ isin : Version of this function that preserves the
+ shape of ar1.
numpy.lib.arraysetops : Module with a number of other functions for
performing set operations on arrays.
@@ -481,6 +487,96 @@ def in1d(ar1, ar2, assume_unique=False, invert=False):
else:
return ret[rev_idx]
+
+def isin(element, test_elements, assume_unique=False, invert=False):
+ """
+ Calculates `element in test_elements`, broadcasting over `element` only.
+ Returns a boolean array of the same shape as `element` that is True
+ where an element of `element` is in `test_elements` and False otherwise.
+
+ Parameters
+ ----------
+ element : array_like
+ Input array.
+ test_elements : array_like
+ The values against which to test each value of `element`.
+ This argument is flattened if it is an array or array_like.
+ See notes for behavior with non-array-like parameters.
+ assume_unique : bool, optional
+ If True, the input arrays are both assumed to be unique, which
+ can speed up the calculation. Default is False.
+ invert : bool, optional
+ If True, the values in the returned array are inverted, as if
+ calculating `element not in test_elements`. Default is False.
+ ``np.isin(a, b, invert=True)`` is equivalent to (but faster
+ than) ``np.invert(np.isin(a, b))``.
+
+ Returns
+ -------
+ isin : ndarray, bool
+ Has the same shape as `element`. The values `element[isin]`
+ are in `test_elements`.
+
+ See Also
+ --------
+ in1d : Flattened version of this function.
+ numpy.lib.arraysetops : Module with a number of other functions for
+ performing set operations on arrays.
+ Notes
+ -----
+
+ `isin` is an element-wise function version of the python keyword `in`.
+ ``isin(a, b)`` is roughly equivalent to
+ ``np.array([item in b for item in a])`` if `a` and `b` are 1-D sequences.
+
+ `element` and `test_elements` are converted to arrays if they are not
+ already. If `test_elements` is a set (or other non-sequence collection)
+ it will be converted to an object array with one element, rather than an
+ array of the values contained in `test_elements`. This is a consequence
+ of the `array` constructor's way of handling non-sequence collections.
+ Converting the set to a list usually gives the desired behavior.
+
+ .. versionadded:: 1.13.0
+
+ Examples
+ --------
+ >>> element = 2*np.arange(4).reshape((2, 2))
+ >>> element
+ array([[0, 2],
+ [4, 6]])
+ >>> test_elements = [1, 2, 4, 8]
+ >>> mask = np.isin(element, test_elements)
+ >>> mask
+ array([[ False, True],
+ [ True, False]], dtype=bool)
+ >>> element[mask]
+ array([2, 4])
+ >>> mask = np.isin(element, test_elements, invert=True)
+ >>> mask
+ array([[ True, False],
+ [ False, True]], dtype=bool)
+ >>> element[mask]
+ array([0, 6])
+
+ Because of how `array` handles sets, the following does not
+ work as expected:
+
+ >>> test_set = {1, 2, 4, 8}
+ >>> np.isin(element, test_set)
+ array([[ False, False],
+ [ False, False]], dtype=bool)
+
+ Casting the set to a list gives the expected result:
+
+ >>> np.isin(element, list(test_set))
+ array([[ False, True],
+ [ True, False]], dtype=bool)
+ """
+ element = np.asarray(element)
+ return in1d(element, test_elements, assume_unique=assume_unique,
+ invert=invert).reshape(element.shape)
+
+
def union1d(ar1, ar2):
"""
Find the union of two arrays.
diff --git a/numpy/lib/info.py b/numpy/lib/info.py
--- a/numpy/lib/info.py
+++ b/numpy/lib/info.py
@@ -136,13 +136,15 @@
ParallelExec Execute commands in parallel thread.
================ ===================
-1D Array Set Operations
+Array Set Operations
-----------------------
-Set operations for 1D numeric arrays based on sort() function.
+Set operations for numeric arrays based on sort() function.
================ ===================
-ediff1d Array difference (auxiliary function).
unique Unique elements of an array.
+isin Test whether each element of an ND array is present
+ anywhere within a second array.
+ediff1d Array difference (auxiliary function).
intersect1d Intersection of 1D arrays with unique elements.
setxor1d Set exclusive-or of 1D arrays with unique elements.
in1d Test whether elements in a 1D array are also present in
diff --git a/numpy/ma/extras.py b/numpy/ma/extras.py
--- a/numpy/ma/extras.py
+++ b/numpy/ma/extras.py
@@ -16,7 +16,7 @@
'column_stack', 'compress_cols', 'compress_nd', 'compress_rowcols',
'compress_rows', 'count_masked', 'corrcoef', 'cov', 'diagflat', 'dot',
'dstack', 'ediff1d', 'flatnotmasked_contiguous', 'flatnotmasked_edges',
- 'hsplit', 'hstack', 'in1d', 'intersect1d', 'mask_cols', 'mask_rowcols',
+ 'hsplit', 'hstack', 'isin', 'in1d', 'intersect1d', 'mask_cols', 'mask_rowcols',
'mask_rows', 'masked_all', 'masked_all_like', 'median', 'mr_',
'notmasked_contiguous', 'notmasked_edges', 'polyfit', 'row_stack',
'setdiff1d', 'setxor1d', 'unique', 'union1d', 'vander', 'vstack',
@@ -1131,6 +1131,7 @@ def setxor1d(ar1, ar2, assume_unique=False):
flag2 = (flag[1:] == flag[:-1])
return aux[flag2]
+
def in1d(ar1, ar2, assume_unique=False, invert=False):
"""
Test whether each element of an array is also present in a second
@@ -1138,8 +1139,11 @@ def in1d(ar1, ar2, assume_unique=False, invert=False):
The output is always a masked array. See `numpy.in1d` for more details.
+ We recommend using :func:`isin` instead of `in1d` for new code.
+
See Also
--------
+ isin : Version of this function that preserves the shape of ar1.
numpy.in1d : Equivalent function for ndarrays.
Notes
@@ -1170,6 +1174,29 @@ def in1d(ar1, ar2, assume_unique=False, invert=False):
return flag[indx][rev_idx]
+def isin(element, test_elements, assume_unique=False, invert=False):
+ """
+ Calculates `element in test_elements`, broadcasting over
+ `element` only.
+
+ The output is always a masked array of the same shape as `element`.
+ See `numpy.isin` for more details.
+
+ See Also
+ --------
+ in1d : Flattened version of this function.
+ numpy.isin : Equivalent function for ndarrays.
+
+ Notes
+ -----
+ .. versionadded:: 1.13.0
+
+ """
+ element = ma.asarray(element)
+ return in1d(element, test_elements, assume_unique=assume_unique,
+ invert=invert).reshape(element.shape)
+
+
def union1d(ar1, ar2):
"""
Union of two arrays.
</patch>
|
[]
|
[]
| |||
ipython__ipython-4303
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Notebook should be saved before downloading
If a user works in a new notebook and clicks 'Download' without first having saved anything, they get an empty notebook. We should either prompt to save, or automatically produce a save action, since otherwise it's pretty confusing. Reported by @jdh2358.
</issue>
<code>
[start of README.rst]
1 ===========================================
2 IPython: Productive Interactive Computing
3 ===========================================
4
5 Overview
6 ========
7
8 Welcome to IPython. Our full documentation is available on `our website
9 <http://ipython.org/documentation.html>`_; if you downloaded a built source
10 distribution the ``docs/source`` directory contains the plaintext version of
11 these manuals. If you have Sphinx installed, you can build them by typing
12 ``cd docs; make html`` for local browsing.
13
14
15 Dependencies and supported Python versions
16 ==========================================
17
18 For full details, see the installation section of the manual. The basic parts
19 of IPython only need the Python standard library, but much of its more advanced
20 functionality requires extra packages.
21
22 Officially, IPython requires Python version 2.7, or 3.3 and above.
23 IPython 1.x is the last IPython version to support Python 2.6 and 3.2.
24
25
26 Instant running
27 ===============
28
29 You can run IPython from this directory without even installing it system-wide
30 by typing at the terminal::
31
32 $ python -m IPython
33
34
35 Development installation
36 ========================
37
38 If you want to hack on certain parts, e.g. the IPython notebook, in a clean
39 environment (such as a virtualenv) you can use ``pip`` to grab the necessary
40 dependencies quickly::
41
42 $ git clone --recursive https://github.com/ipython/ipython.git
43 $ cd ipython
44 $ pip install -e ".[notebook]"
45
46 This installs the necessary packages and symlinks IPython into your current
47 environment so that you can work on your local repo copy and run it from anywhere::
48
49 $ ipython notebook
50
51 The same process applies for other parts, such as the qtconsole (the
52 ``extras_require`` attribute in the setup.py file lists all the possibilities).
53
54 Git Hooks and Submodules
55 ************************
56
57 IPython now uses git submodules to ship its javascript dependencies.
58 If you run IPython from git master, you may need to update submodules once in a while with::
59
60 $ git submodule update
61
62 or::
63
64 $ python setup.py submodule
65
66 We have some git hooks for helping keep your submodules always in sync,
67 see our ``git-hooks`` directory for more info.
68
[end of README.rst]
[start of IPython/core/usage.py]
1 # -*- coding: utf-8 -*-
2 """Usage information for the main IPython applications.
3 """
4 #-----------------------------------------------------------------------------
5 # Copyright (C) 2008-2011 The IPython Development Team
6 # Copyright (C) 2001-2007 Fernando Perez. <[email protected]>
7 #
8 # Distributed under the terms of the BSD License. The full license is in
9 # the file COPYING, distributed as part of this software.
10 #-----------------------------------------------------------------------------
11
12 import sys
13 from IPython.core import release
14
15 cl_usage = """\
16 =========
17 IPython
18 =========
19
20 Tools for Interactive Computing in Python
21 =========================================
22
23 A Python shell with automatic history (input and output), dynamic object
24 introspection, easier configuration, command completion, access to the
25 system shell and more. IPython can also be embedded in running programs.
26
27
28 Usage
29
30 ipython [subcommand] [options] [-c cmd | -m mod | file] [--] [arg] ...
31
32 If invoked with no options, it executes the file and exits, passing the
33 remaining arguments to the script, just as if you had specified the same
34 command with python. You may need to specify `--` before args to be passed
35 to the script, to prevent IPython from attempting to parse them. If you
36 specify the option `-i` before the filename, it will enter an interactive
37 IPython session after running the script, rather than exiting. Files ending
38 in .py will be treated as normal Python, but files ending in .ipy can
39 contain special IPython syntax (magic commands, shell expansions, etc.).
40
41 Almost all configuration in IPython is available via the command-line. Do
42 `ipython --help-all` to see all available options. For persistent
43 configuration, look into your `ipython_config.py` configuration file for
44 details.
45
46 This file is typically installed in the `IPYTHONDIR` directory, and there
47 is a separate configuration directory for each profile. The default profile
48 directory will be located in $IPYTHONDIR/profile_default. For Linux users,
49 IPYTHONDIR defaults to `$HOME/.config/ipython`, and for other Unix systems
50 to `$HOME/.ipython`. For Windows users, $HOME resolves to C:\\Documents
51 and Settings\\YourUserName in most instances.
52
53 To initialize a profile with the default configuration file, do::
54
55 $> ipython profile create
56
57 and start editing `IPYTHONDIR/profile_default/ipython_config.py`
58
59 In IPython's documentation, we will refer to this directory as
60 `IPYTHONDIR`, you can change its default location by creating an
61 environment variable with this name and setting it to the desired path.
62
63 For more information, see the manual available in HTML and PDF in your
64 installation, or online at http://ipython.org/documentation.html.
65 """
66
67 interactive_usage = """
68 IPython -- An enhanced Interactive Python
69 =========================================
70
71 IPython offers a combination of convenient shell features, special commands
72 and a history mechanism for both input (command history) and output (results
73 caching, similar to Mathematica). It is intended to be a fully compatible
74 replacement for the standard Python interpreter, while offering vastly
75 improved functionality and flexibility.
76
77 At your system command line, type 'ipython -h' to see the command line
78 options available. This document only describes interactive features.
79
80 MAIN FEATURES
81 -------------
82
83 * Access to the standard Python help. As of Python 2.1, a help system is
84 available with access to object docstrings and the Python manuals. Simply
85 type 'help' (no quotes) to access it.
86
87 * Magic commands: type %magic for information on the magic subsystem.
88
89 * System command aliases, via the %alias command or the configuration file(s).
90
91 * Dynamic object information:
92
93 Typing ?word or word? prints detailed information about an object. If
94 certain strings in the object are too long (docstrings, code, etc.) they get
95 snipped in the center for brevity.
96
97 Typing ??word or word?? gives access to the full information without
98 snipping long strings. Long strings are sent to the screen through the less
99 pager if longer than the screen, printed otherwise.
100
101 The ?/?? system gives access to the full source code for any object (if
102 available), shows function prototypes and other useful information.
103
104 If you just want to see an object's docstring, type '%pdoc object' (without
105 quotes, and without % if you have automagic on).
106
107 Both %pdoc and ?/?? give you access to documentation even on things which are
108 not explicitly defined. Try for example typing {}.get? or after import os,
109 type os.path.abspath??. The magic functions %pdef, %source and %file operate
110 similarly.
111
112 * Completion in the local namespace, by typing TAB at the prompt.
113
114 At any time, hitting tab will complete any available python commands or
115 variable names, and show you a list of the possible completions if there's
116 no unambiguous one. It will also complete filenames in the current directory.
117
118 This feature requires the readline and rlcomplete modules, so it won't work
119 if your Python lacks readline support (such as under Windows).
120
121 * Search previous command history in two ways (also requires readline):
122
123 - Start typing, and then use Ctrl-p (previous,up) and Ctrl-n (next,down) to
124 search through only the history items that match what you've typed so
125 far. If you use Ctrl-p/Ctrl-n at a blank prompt, they just behave like
126 normal arrow keys.
127
128 - Hit Ctrl-r: opens a search prompt. Begin typing and the system searches
129 your history for lines that match what you've typed so far, completing as
130 much as it can.
131
132 - %hist: search history by index (this does *not* require readline).
133
134 * Persistent command history across sessions.
135
136 * Logging of input with the ability to save and restore a working session.
137
138 * System escape with !. Typing !ls will run 'ls' in the current directory.
139
140 * The reload command does a 'deep' reload of a module: changes made to the
141 module since you imported will actually be available without having to exit.
142
143 * Verbose and colored exception traceback printouts. See the magic xmode and
144 xcolor functions for details (just type %magic).
145
146 * Input caching system:
147
148 IPython offers numbered prompts (In/Out) with input and output caching. All
149 input is saved and can be retrieved as variables (besides the usual arrow
150 key recall).
151
152 The following GLOBAL variables always exist (so don't overwrite them!):
153 _i: stores previous input.
154 _ii: next previous.
155 _iii: next-next previous.
156 _ih : a list of all input _ih[n] is the input from line n.
157
158 Additionally, global variables named _i<n> are dynamically created (<n>
159 being the prompt counter), such that _i<n> == _ih[<n>]
160
161 For example, what you typed at prompt 14 is available as _i14 and _ih[14].
162
163 You can create macros which contain multiple input lines from this history,
164 for later re-execution, with the %macro function.
165
166 The history function %hist allows you to see any part of your input history
167 by printing a range of the _i variables. Note that inputs which contain
168 magic functions (%) appear in the history with a prepended comment. This is
169 because they aren't really valid Python code, so you can't exec them.
170
171 * Output caching system:
172
173 For output that is returned from actions, a system similar to the input
174 cache exists but using _ instead of _i. Only actions that produce a result
175 (NOT assignments, for example) are cached. If you are familiar with
176 Mathematica, IPython's _ variables behave exactly like Mathematica's %
177 variables.
178
179 The following GLOBAL variables always exist (so don't overwrite them!):
180 _ (one underscore): previous output.
181 __ (two underscores): next previous.
182 ___ (three underscores): next-next previous.
183
184 Global variables named _<n> are dynamically created (<n> being the prompt
185 counter), such that the result of output <n> is always available as _<n>.
186
187 Finally, a global dictionary named _oh exists with entries for all lines
188 which generated output.
189
190 * Directory history:
191
192 Your history of visited directories is kept in the global list _dh, and the
193 magic %cd command can be used to go to any entry in that list.
194
195 * Auto-parentheses and auto-quotes (adapted from Nathan Gray's LazyPython)
196
197 1. Auto-parentheses
198
199 Callable objects (i.e. functions, methods, etc) can be invoked like
200 this (notice the commas between the arguments)::
201
202 In [1]: callable_ob arg1, arg2, arg3
203
204 and the input will be translated to this::
205
206 callable_ob(arg1, arg2, arg3)
207
208 This feature is off by default (in rare cases it can produce
209 undesirable side-effects), but you can activate it at the command-line
210 by starting IPython with `--autocall 1`, set it permanently in your
211 configuration file, or turn on at runtime with `%autocall 1`.
212
213 You can force auto-parentheses by using '/' as the first character
214 of a line. For example::
215
216 In [1]: /globals # becomes 'globals()'
217
218 Note that the '/' MUST be the first character on the line! This
219 won't work::
220
221 In [2]: print /globals # syntax error
222
223 In most cases the automatic algorithm should work, so you should
224 rarely need to explicitly invoke /. One notable exception is if you
225 are trying to call a function with a list of tuples as arguments (the
226 parenthesis will confuse IPython)::
227
228 In [1]: zip (1,2,3),(4,5,6) # won't work
229
230 but this will work::
231
232 In [2]: /zip (1,2,3),(4,5,6)
233 ------> zip ((1,2,3),(4,5,6))
234 Out[2]= [(1, 4), (2, 5), (3, 6)]
235
236 IPython tells you that it has altered your command line by
237 displaying the new command line preceded by -->. e.g.::
238
239 In [18]: callable list
240 -------> callable (list)
241
242 2. Auto-Quoting
243
244 You can force auto-quoting of a function's arguments by using ',' as
245 the first character of a line. For example::
246
247 In [1]: ,my_function /home/me # becomes my_function("/home/me")
248
249 If you use ';' instead, the whole argument is quoted as a single
250 string (while ',' splits on whitespace)::
251
252 In [2]: ,my_function a b c # becomes my_function("a","b","c")
253 In [3]: ;my_function a b c # becomes my_function("a b c")
254
255 Note that the ',' MUST be the first character on the line! This
256 won't work::
257
258 In [4]: x = ,my_function /home/me # syntax error
259 """
260
261 interactive_usage_min = """\
262 An enhanced console for Python.
263 Some of its features are:
264 - Readline support if the readline library is present.
265 - Tab completion in the local namespace.
266 - Logging of input, see command-line options.
267 - System shell escape via ! , eg !ls.
268 - Magic commands, starting with a % (like %ls, %pwd, %cd, etc.)
269 - Keeps track of locally defined variables via %who, %whos.
270 - Show object information with a ? eg ?x or x? (use ?? for more info).
271 """
272
273 quick_reference = r"""
274 IPython -- An enhanced Interactive Python - Quick Reference Card
275 ================================================================
276
277 obj?, obj?? : Get help, or more help for object (also works as
278 ?obj, ??obj).
279 ?foo.*abc* : List names in 'foo' containing 'abc' in them.
280 %magic : Information about IPython's 'magic' % functions.
281
282 Magic functions are prefixed by % or %%, and typically take their arguments
283 without parentheses, quotes or even commas for convenience. Line magics take a
284 single % and cell magics are prefixed with two %%.
285
286 Example magic function calls:
287
288 %alias d ls -F : 'd' is now an alias for 'ls -F'
289 alias d ls -F : Works if 'alias' not a python name
290 alist = %alias : Get list of aliases to 'alist'
291 cd /usr/share : Obvious. cd -<tab> to choose from visited dirs.
292 %cd?? : See help AND source for magic %cd
293 %timeit x=10 : time the 'x=10' statement with high precision.
294 %%timeit x=2**100
295 x**100 : time 'x**100' with a setup of 'x=2**100'; setup code is not
296 counted. This is an example of a cell magic.
297
298 System commands:
299
300 !cp a.txt b/ : System command escape, calls os.system()
301 cp a.txt b/ : after %rehashx, most system commands work without !
302 cp ${f}.txt $bar : Variable expansion in magics and system commands
303 files = !ls /usr : Capture system command output
304 files.s, files.l, files.n: "a b c", ['a','b','c'], 'a\nb\nc'
305
306 History:
307
308 _i, _ii, _iii : Previous, next previous, next next previous input
309 _i4, _ih[2:5] : Input history line 4, lines 2-4
310 exec _i81 : Execute input history line #81 again
311 %rep 81 : Edit input history line #81
312 _, __, ___ : previous, next previous, next next previous output
313 _dh : Directory history
314 _oh : Output history
315 %hist : Command history. '%hist -g foo' search history for 'foo'
316
317 Autocall:
318
319 f 1,2 : f(1,2) # Off by default, enable with %autocall magic.
320 /f 1,2 : f(1,2) (forced autoparen)
321 ,f 1 2 : f("1","2")
322 ;f 1 2 : f("1 2")
323
324 Remember: TAB completion works in many contexts, not just file names
325 or python names.
326
327 The following magic functions are currently available:
328
329 """
330
331 gui_reference = """\
332 ===============================
333 The graphical IPython console
334 ===============================
335
336 This console is designed to emulate the look, feel and workflow of a terminal
337 environment, while adding a number of enhancements that are simply not possible
338 in a real terminal, such as inline syntax highlighting, true multiline editing,
339 inline graphics and much more.
340
341 This quick reference document contains the basic information you'll need to
342 know to make the most efficient use of it. For the various command line
343 options available at startup, type ``ipython qtconsole --help`` at the command line.
344
345
346 Multiline editing
347 =================
348
349 The graphical console is capable of true multiline editing, but it also tries
350 to behave intuitively like a terminal when possible. If you are used to
351 IPython's old terminal behavior, you should find the transition painless, and
352 once you learn a few basic keybindings it will be a much more efficient
353 environment.
354
355 For single expressions or indented blocks, the console behaves almost like the
356 terminal IPython: single expressions are immediately evaluated, and indented
357 blocks are evaluated once a single blank line is entered::
358
359 In [1]: print "Hello IPython!" # Enter was pressed at the end of the line
360 Hello IPython!
361
362 In [2]: for i in range(10):
363 ...: print i,
364 ...:
365 0 1 2 3 4 5 6 7 8 9
366
367 If you want to enter more than one expression in a single input block
368 (something not possible in the terminal), you can use ``Control-Enter`` at the
369 end of your first line instead of ``Enter``. At that point the console goes
370 into 'cell mode' and even if your inputs are not indented, it will continue
371 accepting arbitrarily many lines until either you enter an extra blank line or
372 you hit ``Shift-Enter`` (the key binding that forces execution). When a
373 multiline cell is entered, IPython analyzes it and executes its code producing
374 an ``Out[n]`` prompt only for the last expression in it, while the rest of the
375 cell is executed as if it was a script. An example should clarify this::
376
377 In [3]: x=1 # Hit C-Enter here
378 ...: y=2 # from now on, regular Enter is sufficient
379 ...: z=3
380 ...: x**2 # This does *not* produce an Out[] value
381 ...: x+y+z # Only the last expression does
382 ...:
383 Out[3]: 6
384
385 The behavior where an extra blank line forces execution is only active if you
386 are actually typing at the keyboard each line, and is meant to make it mimic
387 the IPython terminal behavior. If you paste a long chunk of input (for example
388 a long script copied from an editor or web browser), it can contain arbitrarily
389 many intermediate blank lines and they won't cause any problems. As always,
390 you can then make it execute by appending a blank line *at the end* or hitting
391 ``Shift-Enter`` anywhere within the cell.
392
393 With the up arrow key, you can retrieve previous blocks of input that contain
394 multiple lines. You can move inside of a multiline cell like you would in any
395 text editor. When you want it executed, the simplest thing to do is to hit the
396 force execution key, ``Shift-Enter`` (though you can also navigate to the end
397 and append a blank line by using ``Enter`` twice).
398
399 If you've edited a multiline cell and accidentally navigate out of it with the
400 up or down arrow keys, IPython will clear the cell and replace it with the
401 contents of the one above or below that you navigated to. If this was an
402 accident and you want to retrieve the cell you were editing, use the Undo
403 keybinding, ``Control-z``.
404
405
406 Key bindings
407 ============
408
409 The IPython console supports most of the basic Emacs line-oriented keybindings,
410 in addition to some of its own.
411
412 The keybinding prefixes mean:
413
414 - ``C``: Control
415 - ``S``: Shift
416 - ``M``: Meta (typically the Alt key)
417
418 The keybindings themselves are:
419
420 - ``Enter``: insert new line (may cause execution, see above).
421 - ``C-Enter``: *force* new line, *never* causes execution.
422 - ``S-Enter``: *force* execution regardless of where cursor is, no newline added.
423 - ``Up``: step backwards through the history.
424 - ``Down``: step forwards through the history.
425 - ``S-Up``: search backwards through the history (like ``C-r`` in bash).
426 - ``S-Down``: search forwards through the history.
427 - ``C-c``: copy highlighted text to clipboard (prompts are automatically stripped).
428 - ``C-S-c``: copy highlighted text to clipboard (prompts are not stripped).
429 - ``C-v``: paste text from clipboard.
430 - ``C-z``: undo (retrieves lost text if you move out of a cell with the arrows).
431 - ``C-S-z``: redo.
432 - ``C-o``: move to 'other' area, between pager and terminal.
433 - ``C-l``: clear terminal.
434 - ``C-a``: go to beginning of line.
435 - ``C-e``: go to end of line.
436 - ``C-u``: kill from cursor to the beginning of the line.
437 - ``C-k``: kill from cursor to the end of the line.
438 - ``C-y``: yank (paste)
439 - ``C-p``: previous line (like up arrow)
440 - ``C-n``: next line (like down arrow)
441 - ``C-f``: forward (like right arrow)
442 - ``C-b``: back (like left arrow)
443 - ``C-d``: delete next character, or exits if input is empty
444 - ``M-<``: move to the beginning of the input region.
445 - ``M->``: move to the end of the input region.
446 - ``M-d``: delete next word.
447 - ``M-Backspace``: delete previous word.
448 - ``C-.``: force a kernel restart (a confirmation dialog appears).
449 - ``C-+``: increase font size.
450 - ``C--``: decrease font size.
451 - ``C-M-Space``: toggle full screen. (Command-Control-Space on Mac OS X)
452
453 The IPython pager
454 =================
455
456 IPython will show long blocks of text from many sources using a builtin pager.
457 You can control where this pager appears with the ``--paging`` command-line
458 flag:
459
460 - ``inside`` [default]: the pager is overlaid on top of the main terminal. You
461 must quit the pager to get back to the terminal (similar to how a pager such
462 as ``less`` or ``more`` works).
463
464 - ``vsplit``: the console is made double-tall, and the pager appears on the
465 bottom area when needed. You can view its contents while using the terminal.
466
467 - ``hsplit``: the console is made double-wide, and the pager appears on the
468 right area when needed. You can view its contents while using the terminal.
469
470 - ``none``: the console never pages output.
471
472 If you use the vertical or horizontal paging modes, you can navigate between
473 terminal and pager as follows:
474
475 - Tab key: goes from pager to terminal (but not the other way around).
476 - Control-o: goes from one to another always.
477 - Mouse: click on either.
478
479 In all cases, the ``q`` or ``Escape`` keys quit the pager (when used with the
480 focus on the pager area).
481
482 Running subprocesses
483 ====================
484
485 The graphical IPython console uses the ``pexpect`` module to run subprocesses
486 when you type ``!command``. This has a number of advantages (true asynchronous
487 output from subprocesses as well as very robust termination of rogue
488 subprocesses with ``Control-C``), as well as some limitations. The main
489 limitation is that you can *not* interact back with the subprocess, so anything
490 that invokes a pager or expects you to type input into it will block and hang
491 (you can kill it with ``Control-C``).
492
493 We have provided as magics ``%less`` to page files (aliased to ``%more``),
494 ``%clear`` to clear the terminal, and ``%man`` on Linux/OSX. These cover the
495 most common commands you'd want to call in your subshell and that would cause
496 problems if invoked via ``!cmd``, but you need to be aware of this limitation.
497
498 Display
499 =======
500
501 The IPython console can now display objects in a variety of formats, including
502 HTML, PNG and SVG. This is accomplished using the display functions in
503 ``IPython.core.display``::
504
505 In [4]: from IPython.core.display import display, display_html
506
507 In [5]: from IPython.core.display import display_png, display_svg
508
509 Python objects can simply be passed to these functions and the appropriate
510 representations will be displayed in the console as long as the objects know
511 how to compute those representations. The easiest way of teaching objects how
512 to format themselves in various representations is to define special methods
513 such as: ``_repr_html_``, ``_repr_svg_`` and ``_repr_png_``. IPython's display formatters
514 can also be given custom formatter functions for various types::
515
516 In [6]: ip = get_ipython()
517
518 In [7]: html_formatter = ip.display_formatter.formatters['text/html']
519
520 In [8]: html_formatter.for_type(Foo, foo_to_html)
521
522 For further details, see ``IPython.core.formatters``.
523
524 Inline matplotlib graphics
525 ==========================
526
527 The IPython console is capable of displaying matplotlib figures inline, in SVG
528 or PNG format. If started with ``matplotlib=inline``, then all figures are
529 rendered inline automatically (PNG by default). If started with ``--matplotlib``
530 or ``matplotlib=<your backend>``, then a GUI backend will be used, but IPython's
531 ``display()`` and ``getfigs()`` functions can be used to view plots inline::
532
533 In [9]: display(*getfigs()) # display all figures inline
534
535 In[10]: display(*getfigs(1,2)) # display figures 1 and 2 inline
536 """
537
538
539 quick_guide = """\
540 ? -> Introduction and overview of IPython's features.
541 %quickref -> Quick reference.
542 help -> Python's own help system.
543 object? -> Details about 'object', use 'object??' for extra details.
544 """
545
546 gui_note = """\
547 %guiref -> A brief reference about the graphical user interface.
548 """
549
550 default_banner_parts = [
551 'Python %s\n' % (sys.version.split('\n')[0],),
552 'Type "copyright", "credits" or "license" for more information.\n\n',
553 'IPython {version} -- An enhanced Interactive Python.\n'.format(
554 version=release.version,
555 ),
556 quick_guide
557 ]
558
559 default_gui_banner_parts = default_banner_parts + [gui_note]
560
561 default_banner = ''.join(default_banner_parts)
562
563 default_gui_banner = ''.join(default_gui_banner_parts)
564
565 # page GUI Reference, for use as a magic:
566
567 def page_guiref(arg_s=None):
568 """Show a basic reference about the GUI Console."""
569 from IPython.core import page
570 page.page(gui_reference, auto_html=True)
571
572
[end of IPython/core/usage.py]
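
The ``Display`` section of the usage text above describes two ways to teach IPython how to render an object richly: give the object a ``_repr_html_`` (or ``_repr_png_``/``_repr_svg_``) method, or register a formatter function with ``for_type``. A minimal sketch of both routes follows; it is not part of the repository files listed here, the class ``Money`` and helper ``money_to_html`` are invented names, and the registration lines assume a running IPython session (where ``get_ipython()`` is defined).

```python
from IPython.core.display import display


class Money(object):
    """Toy object that can render itself as HTML via _repr_html_."""

    def __init__(self, amount):
        self.amount = amount

    def _repr_html_(self):
        # Route 1: the object computes its own 'text/html' representation.
        return "<b>$%.2f</b>" % self.amount


def money_to_html(obj):
    # Route 2: a standalone formatter function registered for the type.
    return "<i>$%.2f</i>" % obj.amount


# Inside an IPython session, get_ipython() is available and the HTML
# formatter can be taught about Money explicitly:
ip = get_ipython()
html_formatter = ip.display_formatter.formatters['text/html']
html_formatter.for_type(Money, money_to_html)

display(Money(3.5))  # rendered as HTML in a rich frontend, plain repr elsewhere
```

The ``_repr_html_`` route needs no registration step at all, which is why the usage text calls defining such methods the easiest option.
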
[start of IPython/kernel/zmq/zmqshell.py]
1 """A ZMQ-based subclass of InteractiveShell.
2
3 This code is meant to ease the refactoring of the base InteractiveShell into
4 something with a cleaner architecture for 2-process use, without actually
5 breaking InteractiveShell itself. So we're doing something a bit ugly, where
6 we subclass and override what we want to fix. Once this is working well, we
7 can go back to the base class and refactor the code for a cleaner inheritance
8 implementation that doesn't rely on so much monkeypatching.
9
10 But this lets us maintain a fully working IPython as we develop the new
11 machinery. This should thus be thought of as scaffolding.
12 """
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16 from __future__ import print_function
17
18 # Stdlib
19 import os
20 import sys
21 import time
22
23 # System library imports
24 from zmq.eventloop import ioloop
25
26 # Our own
27 from IPython.core.interactiveshell import (
28 InteractiveShell, InteractiveShellABC
29 )
30 from IPython.core import page
31 from IPython.core.autocall import ZMQExitAutocall
32 from IPython.core.displaypub import DisplayPublisher
33 from IPython.core.error import UsageError
34 from IPython.core.magics import MacroToEdit, CodeMagics
35 from IPython.core.magic import magics_class, line_magic, Magics
36 from IPython.core.payloadpage import install_payload_page
37 from IPython.display import display, Javascript
38 from IPython.kernel.inprocess.socket import SocketABC
39 from IPython.kernel import (
40 get_connection_file, get_connection_info, connect_qtconsole
41 )
42 from IPython.testing.skipdoctest import skip_doctest
43 from IPython.utils import openpy
44 from IPython.utils.jsonutil import json_clean, encode_images
45 from IPython.utils.process import arg_split
46 from IPython.utils import py3compat
47 from IPython.utils.traitlets import Instance, Type, Dict, CBool, CBytes
48 from IPython.utils.warn import error
49 from IPython.kernel.zmq.displayhook import ZMQShellDisplayHook
50 from IPython.kernel.zmq.datapub import ZMQDataPublisher
51 from IPython.kernel.zmq.session import extract_header
52 from session import Session
53
54 #-----------------------------------------------------------------------------
55 # Functions and classes
56 #-----------------------------------------------------------------------------
57
58 class ZMQDisplayPublisher(DisplayPublisher):
59 """A display publisher that publishes data using a ZeroMQ PUB socket."""
60
61 session = Instance(Session)
62 pub_socket = Instance(SocketABC)
63 parent_header = Dict({})
64 topic = CBytes(b'display_data')
65
66 def set_parent(self, parent):
67 """Set the parent for outbound messages."""
68 self.parent_header = extract_header(parent)
69
70 def _flush_streams(self):
71 """flush IO Streams prior to display"""
72 sys.stdout.flush()
73 sys.stderr.flush()
74
75 def publish(self, source, data, metadata=None):
76 self._flush_streams()
77 if metadata is None:
78 metadata = {}
79 self._validate_data(source, data, metadata)
80 content = {}
81 content['source'] = source
82 content['data'] = encode_images(data)
83 content['metadata'] = metadata
84 self.session.send(
85 self.pub_socket, u'display_data', json_clean(content),
86 parent=self.parent_header, ident=self.topic,
87 )
88
89 def clear_output(self, wait=False):
90 content = dict(wait=wait)
91
92 print('\r', file=sys.stdout, end='')
93 print('\r', file=sys.stderr, end='')
94 self._flush_streams()
95
96 self.session.send(
97 self.pub_socket, u'clear_output', content,
98 parent=self.parent_header, ident=self.topic,
99 )
100
101 @magics_class
102 class KernelMagics(Magics):
103 #------------------------------------------------------------------------
104 # Magic overrides
105 #------------------------------------------------------------------------
106 # Once the base class stops inheriting from magic, this code needs to be
107 # moved into a separate machinery as well. For now, at least isolate here
108 # the magics which this class needs to implement differently from the base
109 # class, or that are unique to it.
110
111 @line_magic
112 def doctest_mode(self, parameter_s=''):
113 """Toggle doctest mode on and off.
114
115 This mode is intended to make IPython behave as much as possible like a
116 plain Python shell, from the perspective of how its prompts, exceptions
117 and output look. This makes it easy to copy and paste parts of a
118 session into doctests. It does so by:
119
120 - Changing the prompts to the classic ``>>>`` ones.
121 - Changing the exception reporting mode to 'Plain'.
122 - Disabling pretty-printing of output.
123
124 Note that IPython also supports the pasting of code snippets that have
125 leading '>>>' and '...' prompts in them. This means that you can paste
126 doctests from files or docstrings (even if they have leading
127 whitespace), and the code will execute correctly. You can then use
128 '%history -t' to see the translated history; this will give you the
129 input after removal of all the leading prompts and whitespace, which
130 can be pasted back into an editor.
131
132 With these features, you can switch into this mode easily whenever you
133 need to do testing and changes to doctests, without having to leave
134 your existing IPython session.
135 """
136
137 from IPython.utils.ipstruct import Struct
138
139 # Shorthands
140 shell = self.shell
141 disp_formatter = self.shell.display_formatter
142 ptformatter = disp_formatter.formatters['text/plain']
143 # dstore is a data store kept in the instance metadata bag to track any
144 # changes we make, so we can undo them later.
145 dstore = shell.meta.setdefault('doctest_mode', Struct())
146 save_dstore = dstore.setdefault
147
148 # save a few values we'll need to recover later
149 mode = save_dstore('mode', False)
150 save_dstore('rc_pprint', ptformatter.pprint)
151 save_dstore('rc_active_types',disp_formatter.active_types)
152 save_dstore('xmode', shell.InteractiveTB.mode)
153
154 if mode == False:
155 # turn on
156 ptformatter.pprint = False
157 disp_formatter.active_types = ['text/plain']
158 shell.magic('xmode Plain')
159 else:
160 # turn off
161 ptformatter.pprint = dstore.rc_pprint
162 disp_formatter.active_types = dstore.rc_active_types
163 shell.magic("xmode " + dstore.xmode)
164
165 # Store new mode and inform on console
166 dstore.mode = bool(1-int(mode))
167 mode_label = ['OFF','ON'][dstore.mode]
168 print('Doctest mode is:', mode_label)
169
170 # Send the payload back so that clients can modify their prompt display
171 payload = dict(
172 source='doctest_mode',
173 mode=dstore.mode)
174 shell.payload_manager.write_payload(payload)
175
176
177 _find_edit_target = CodeMagics._find_edit_target
178
179 @skip_doctest
180 @line_magic
181 def edit(self, parameter_s='', last_call=['','']):
182 """Bring up an editor and execute the resulting code.
183
184 Usage:
185 %edit [options] [args]
186
187 %edit runs an external text editor. You will need to set the command for
188 this editor via the ``TerminalInteractiveShell.editor`` option in your
189 configuration file before it will work.
190
191 This command allows you to conveniently edit multi-line code right in
192 your IPython session.
193
194 If called without arguments, %edit opens up an empty editor with a
195 temporary file and will execute the contents of this file when you
196 close it (don't forget to save it!).
197
198
199 Options:
200
201 -n <number>: open the editor at a specified line number. By default,
202 the IPython editor hook uses the unix syntax 'editor +N filename', but
203 you can configure this by providing your own modified hook if your
204 favorite editor supports line-number specifications with a different
205 syntax.
206
207 -p: this will call the editor with the same data as the previous time
208 it was used, regardless of how long ago (in your current session) it
209 was.
210
211 -r: use 'raw' input. This option only applies to input taken from the
212 user's history. By default, the 'processed' history is used, so that
213 magics are loaded in their transformed version to valid Python. If
214         this option is given, the raw input as typed at the command line is
215 used instead. When you exit the editor, it will be executed by
216 IPython's own processor.
217
218 -x: do not execute the edited code immediately upon exit. This is
219 mainly useful if you are editing programs which need to be called with
220 command line arguments, which you can then do using %run.
221
222
223 Arguments:
224
225         If arguments are given, the following possibilities exist:
226
227 - The arguments are numbers or pairs of colon-separated numbers (like
228 1 4:8 9). These are interpreted as lines of previous input to be
229 loaded into the editor. The syntax is the same of the %macro command.
230
231 - If the argument doesn't start with a number, it is evaluated as a
232 variable and its contents loaded into the editor. You can thus edit
233 any string which contains python code (including the result of
234 previous edits).
235
236 - If the argument is the name of an object (other than a string),
237 IPython will try to locate the file where it was defined and open the
238 editor at the point where it is defined. You can use `%edit function`
239 to load an editor exactly at the point where 'function' is defined,
240 edit it and have the file be executed automatically.
241
242 If the object is a macro (see %macro for details), this opens up your
243 specified editor with a temporary file containing the macro's data.
244 Upon exit, the macro is reloaded with the contents of the file.
245
246 Note: opening at an exact line is only supported under Unix, and some
247 editors (like kedit and gedit up to Gnome 2.8) do not understand the
248 '+NUMBER' parameter necessary for this feature. Good editors like
249 (X)Emacs, vi, jed, pico and joe all do.
250
251 - If the argument is not found as a variable, IPython will look for a
252 file with that name (adding .py if necessary) and load it into the
253 editor. It will execute its contents with execfile() when you exit,
254 loading any code in the file into your interactive namespace.
255
256 After executing your code, %edit will return as output the code you
257 typed in the editor (except when it was an existing file). This way
258 you can reload the code in further invocations of %edit as a variable,
259 via _<NUMBER> or Out[<NUMBER>], where <NUMBER> is the prompt number of
260 the output.
261
262 Note that %edit is also available through the alias %ed.
263
264 This is an example of creating a simple function inside the editor and
265 then modifying it. First, start up the editor:
266
267 In [1]: ed
268 Editing... done. Executing edited code...
269         Out[1]: 'def foo():\n print "foo() was defined in an editing session"\n'
270
271 We can then call the function foo():
272
273 In [2]: foo()
274 foo() was defined in an editing session
275
276 Now we edit foo. IPython automatically loads the editor with the
277 (temporary) file where foo() was previously defined:
278
279 In [3]: ed foo
280 Editing... done. Executing edited code...
281
282 And if we call foo() again we get the modified version:
283
284 In [4]: foo()
285 foo() has now been changed!
286
287         Here is an example of how to edit a code snippet several times in
288         succession. First we call the editor:
289
290 In [5]: ed
291 Editing... done. Executing edited code...
292 hello
293         Out[5]: "print 'hello'\n"
294
295 Now we call it again with the previous output (stored in _):
296
297 In [6]: ed _
298 Editing... done. Executing edited code...
299 hello world
300         Out[6]: "print 'hello world'\n"
301
302 Now we call it with the output #8 (stored in _8, also as Out[8]):
303
304 In [7]: ed _8
305 Editing... done. Executing edited code...
306 hello again
307         Out[7]: "print 'hello again'\n"
308 """
309
310 opts,args = self.parse_options(parameter_s,'prn:')
311
312 try:
313 filename, lineno, _ = CodeMagics._find_edit_target(self.shell, args, opts, last_call)
314 except MacroToEdit as e:
315 # TODO: Implement macro editing over 2 processes.
316 print("Macro editing not yet implemented in 2-process model.")
317 return
318
319 # Make sure we send to the client an absolute path, in case the working
320 # directory of client and kernel don't match
321 filename = os.path.abspath(filename)
322
323 payload = {
324 'source' : 'edit_magic',
325 'filename' : filename,
326 'line_number' : lineno
327 }
328 self.shell.payload_manager.write_payload(payload)
329
330 # A few magics that are adapted to the specifics of using pexpect and a
331 # remote terminal
332
333 @line_magic
334 def clear(self, arg_s):
335 """Clear the terminal."""
336 if os.name == 'posix':
337 self.shell.system("clear")
338 else:
339 self.shell.system("cls")
340
341 if os.name == 'nt':
342 # This is the usual name in windows
343 cls = line_magic('cls')(clear)
344
345 # Terminal pagers won't work over pexpect, but we do have our own pager
346
347 @line_magic
348 def less(self, arg_s):
349 """Show a file through the pager.
350
351 Files ending in .py are syntax-highlighted."""
352 if not arg_s:
353 raise UsageError('Missing filename.')
354
355 cont = open(arg_s).read()
356 if arg_s.endswith('.py'):
357 cont = self.shell.pycolorize(openpy.read_py_file(arg_s, skip_encoding_cookie=False))
358 else:
359 cont = open(arg_s).read()
360 page.page(cont)
361
362 more = line_magic('more')(less)
363
364 # Man calls a pager, so we also need to redefine it
365 if os.name == 'posix':
366 @line_magic
367 def man(self, arg_s):
368 """Find the man page for the given command and display in pager."""
369 page.page(self.shell.getoutput('man %s | col -b' % arg_s,
370 split=False))
371
372 @line_magic
373 def connect_info(self, arg_s):
374 """Print information for connecting other clients to this kernel
375
376 It will print the contents of this session's connection file, as well as
377 shortcuts for local clients.
378
379 In the simplest case, when called from the most recently launched kernel,
380 secondary clients can be connected, simply with:
381
382 $> ipython <app> --existing
383
384 """
385
386 from IPython.core.application import BaseIPythonApplication as BaseIPApp
387
388 if BaseIPApp.initialized():
389 app = BaseIPApp.instance()
390 security_dir = app.profile_dir.security_dir
391 profile = app.profile
392 else:
393 profile = 'default'
394 security_dir = ''
395
396 try:
397 connection_file = get_connection_file()
398 info = get_connection_info(unpack=False)
399 except Exception as e:
400 error("Could not get connection info: %r" % e)
401 return
402
403 # add profile flag for non-default profile
404 profile_flag = "--profile %s" % profile if profile != 'default' else ""
405
406 # if it's in the security dir, truncate to basename
407 if security_dir == os.path.dirname(connection_file):
408 connection_file = os.path.basename(connection_file)
409
410
411 print (info + '\n')
412 print ("Paste the above JSON into a file, and connect with:\n"
413 " $> ipython <app> --existing <file>\n"
414 "or, if you are local, you can connect with just:\n"
415 " $> ipython <app> --existing {0} {1}\n"
416 "or even just:\n"
417 " $> ipython <app> --existing {1}\n"
418 "if this is the most recent IPython session you have started.".format(
419 connection_file, profile_flag
420 )
421 )
422
423 @line_magic
424 def qtconsole(self, arg_s):
425 """Open a qtconsole connected to this kernel.
426
427 Useful for connecting a qtconsole to running notebooks, for better
428 debugging.
429 """
430
431 # %qtconsole should imply bind_kernel for engines:
432 try:
433 from IPython.parallel import bind_kernel
434 except ImportError:
435 # technically possible, because parallel has higher pyzmq min-version
436 pass
437 else:
438 bind_kernel()
439
440 try:
441 p = connect_qtconsole(argv=arg_split(arg_s, os.name=='posix'))
442 except Exception as e:
443 error("Could not start qtconsole: %r" % e)
444 return
445
446 @line_magic
447 def autosave(self, arg_s):
448 """Set the autosave interval in the notebook (in seconds).
449
450 The default value is 120, or two minutes.
451 ``%autosave 0`` will disable autosave.
452
453 This magic only has an effect when called from the notebook interface.
454 It has no effect when called in a startup file.
455 """
456
457 try:
458 interval = int(arg_s)
459 except ValueError:
460 raise UsageError("%%autosave requires an integer, got %r" % arg_s)
461
462 # javascript wants milliseconds
463 milliseconds = 1000 * interval
464 display(Javascript("IPython.notebook.set_autosave_interval(%i)" % milliseconds),
465 include=['application/javascript']
466 )
467 if interval:
468 print("Autosaving every %i seconds" % interval)
469 else:
470 print("Autosave disabled")
471
472
473 class ZMQInteractiveShell(InteractiveShell):
474 """A subclass of InteractiveShell for ZMQ."""
475
476 displayhook_class = Type(ZMQShellDisplayHook)
477 display_pub_class = Type(ZMQDisplayPublisher)
478 data_pub_class = Type(ZMQDataPublisher)
479
480 # Override the traitlet in the parent class, because there's no point using
481 # readline for the kernel. Can be removed when the readline code is moved
482 # to the terminal frontend.
483 colors_force = CBool(True)
484 readline_use = CBool(False)
485 # autoindent has no meaning in a zmqshell, and attempting to enable it
486 # will print a warning in the absence of readline.
487 autoindent = CBool(False)
488
489 exiter = Instance(ZMQExitAutocall)
490 def _exiter_default(self):
491 return ZMQExitAutocall(self)
492
493 def _exit_now_changed(self, name, old, new):
494 """stop eventloop when exit_now fires"""
495 if new:
496 loop = ioloop.IOLoop.instance()
497 loop.add_timeout(time.time()+0.1, loop.stop)
498
499 keepkernel_on_exit = None
500
501 # Over ZeroMQ, GUI control isn't done with PyOS_InputHook as there is no
502 # interactive input being read; we provide event loop support in ipkernel
503 @staticmethod
504 def enable_gui(gui):
505 from .eventloops import enable_gui as real_enable_gui
506 try:
507 real_enable_gui(gui)
508 except ValueError as e:
509 raise UsageError("%s" % e)
510
511 def init_environment(self):
512 """Configure the user's environment.
513
514 """
515 env = os.environ
516 # These two ensure 'ls' produces nice coloring on BSD-derived systems
517 env['TERM'] = 'xterm-color'
518 env['CLICOLOR'] = '1'
519 # Since normal pagers don't work at all (over pexpect we don't have
520 # single-key control of the subprocess), try to disable paging in
521 # subprocesses as much as possible.
522 env['PAGER'] = 'cat'
523 env['GIT_PAGER'] = 'cat'
524
525 # And install the payload version of page.
526 install_payload_page()
527
528 def auto_rewrite_input(self, cmd):
529 """Called to show the auto-rewritten input for autocall and friends.
530
531 FIXME: this payload is currently not correctly processed by the
532 frontend.
533 """
534 new = self.prompt_manager.render('rewrite') + cmd
535 payload = dict(
536 source='auto_rewrite_input',
537 transformed_input=new,
538 )
539 self.payload_manager.write_payload(payload)
540
541 def ask_exit(self):
542 """Engage the exit actions."""
543 self.exit_now = True
544 payload = dict(
545 source='ask_exit',
546 exit=True,
547 keepkernel=self.keepkernel_on_exit,
548 )
549 self.payload_manager.write_payload(payload)
550
551 def _showtraceback(self, etype, evalue, stb):
552
553 exc_content = {
554 u'traceback' : stb,
555 u'ename' : unicode(etype.__name__),
556 u'evalue' : py3compat.safe_unicode(evalue),
557 }
558
559 dh = self.displayhook
560 # Send exception info over pub socket for other clients than the caller
561 # to pick up
562 topic = None
563 if dh.topic:
564 topic = dh.topic.replace(b'pyout', b'pyerr')
565
566 exc_msg = dh.session.send(dh.pub_socket, u'pyerr', json_clean(exc_content), dh.parent_header, ident=topic)
567
568 # FIXME - Hack: store exception info in shell object. Right now, the
569 # caller is reading this info after the fact, we need to fix this logic
570 # to remove this hack. Even uglier, we need to store the error status
571 # here, because in the main loop, the logic that sets it is being
572 # skipped because runlines swallows the exceptions.
573 exc_content[u'status'] = u'error'
574 self._reply_content = exc_content
575 # /FIXME
576
577 return exc_content
578
579 def set_next_input(self, text):
580 """Send the specified text to the frontend to be presented at the next
581 input cell."""
582 payload = dict(
583 source='set_next_input',
584 text=text
585 )
586 self.payload_manager.write_payload(payload)
587
588 #-------------------------------------------------------------------------
589 # Things related to magics
590 #-------------------------------------------------------------------------
591
592 def init_magics(self):
593 super(ZMQInteractiveShell, self).init_magics()
594 self.register_magics(KernelMagics)
595 self.magics_manager.register_alias('ed', 'edit')
596
597
598
599 InteractiveShellABC.register(ZMQInteractiveShell)
600
[end of IPython/kernel/zmq/zmqshell.py]
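
For reference alongside ``KernelMagics`` and ``ZMQInteractiveShell.init_magics()`` above, here is a minimal sketch of the same ``Magics`` registration pattern in isolation; it is not part of the repository files listed here, ``GreetingMagics`` and ``%greet`` are invented names, and the commented registration step assumes a running IPython session, mirroring ``register_magics`` and ``magics_manager.register_alias`` as used in ``init_magics``.

```python
from IPython.core.magic import Magics, magics_class, line_magic


@magics_class
class GreetingMagics(Magics):
    """A toy magics container, structurally analogous to KernelMagics."""

    @line_magic
    def greet(self, line):
        """Usage: %greet <name>"""
        print("Hello, %s!" % (line or "world"))


# Inside a running IPython session, registration mirrors init_magics() above:
#     ip = get_ipython()
#     ip.register_magics(GreetingMagics)
#     ip.magics_manager.register_alias('hi', 'greet')
```
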
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: ipython/ipython

base_commit: a846ffaf2e74823c781972e4f1f7d2996f96ec64

problem_statement:
Notebook should be saved before downloading
If a user works in a new notebook and clicks 'Download' without first having saved anything, they get an empty notebook. We should either prompt to save, or automatically produce a save action, since otherwise it's pretty confusing. Reported by @jdh2358.

hints_text:
@minrk, we never merged 2aacd8b9faa384 b/c it triggered popup warnings, right? We may have to bump this to 0.13 though, we're getting too close to release time...
Right - I don't know a better way to do this. I'm fine pushing to 0.13.
Sounds good. Triage, triage...
What is the status of this? Is there an active PR related to it?

created_at: 2013-09-29T21:53:28Z

patch:
<patch>
diff --git a/IPython/config/application.py b/IPython/config/application.py
--- a/IPython/config/application.py
+++ b/IPython/config/application.py
@@ -38,6 +38,7 @@
)
from IPython.utils.importstring import import_item
from IPython.utils.text import indent, wrap_paragraphs, dedent
+from IPython.utils import py3compat
#-----------------------------------------------------------------------------
# function for re-wrapping a helpstring
@@ -457,7 +458,7 @@ def flatten_flags(self):
def parse_command_line(self, argv=None):
"""Parse the command line arguments."""
argv = sys.argv[1:] if argv is None else argv
- self.argv = list(argv)
+ self.argv = [ py3compat.cast_unicode(arg) for arg in argv ]
if argv and argv[0] == 'help':
# turn `ipython help notebook` into `ipython notebook -h`
diff --git a/IPython/consoleapp.py b/IPython/consoleapp.py
--- a/IPython/consoleapp.py
+++ b/IPython/consoleapp.py
@@ -45,6 +45,7 @@
kernel_aliases,
IPKernelApp
)
+from IPython.kernel.zmq.pylab.config import InlineBackend
from IPython.kernel.zmq.session import Session, default_secure
from IPython.kernel.zmq.zmqshell import ZMQInteractiveShell
from IPython.kernel.connect import ConnectionFileMixin
@@ -110,14 +111,7 @@
# IPythonConsole
#-----------------------------------------------------------------------------
-classes = [IPKernelApp, ZMQInteractiveShell, KernelManager, ProfileDir, Session]
-
-try:
- from IPython.kernel.zmq.pylab.backend_inline import InlineBackend
-except ImportError:
- pass
-else:
- classes.append(InlineBackend)
+classes = [IPKernelApp, ZMQInteractiveShell, KernelManager, ProfileDir, Session, InlineBackend]
class IPythonConsoleApp(ConnectionFileMixin):
name = 'ipython-console-mixin'
diff --git a/IPython/html/base/handlers.py b/IPython/html/base/handlers.py
--- a/IPython/html/base/handlers.py
+++ b/IPython/html/base/handlers.py
@@ -19,12 +19,16 @@
import datetime
import email.utils
+import functools
import hashlib
+import json
import logging
import mimetypes
import os
import stat
+import sys
import threading
+import traceback
from tornado import web
from tornado import websocket
@@ -37,6 +41,11 @@
from IPython.config import Application
from IPython.external.decorator import decorator
from IPython.utils.path import filefind
+from IPython.utils.jsonutil import date_default
+
+# UF_HIDDEN is a stat flag not defined in the stat module.
+# It is used by BSD to indicate hidden files.
+UF_HIDDEN = getattr(stat, 'UF_HIDDEN', 32768)
#-----------------------------------------------------------------------------
# Monkeypatch for Tornado <= 2.1.1 - Remove when no longer necessary!
@@ -214,7 +223,11 @@ def cluster_manager(self):
return self.settings['cluster_manager']
@property
- def project(self):
+ def session_manager(self):
+ return self.settings['session_manager']
+
+ @property
+ def project_dir(self):
return self.notebook_manager.notebook_dir
#---------------------------------------------------------------
@@ -240,12 +253,100 @@ def template_namespace(self):
use_less=self.use_less,
)
+ def get_json_body(self):
+ """Return the body of the request as JSON data."""
+ if not self.request.body:
+ return None
+ # Do we need to call body.decode('utf-8') here?
+ body = self.request.body.strip().decode(u'utf-8')
+ try:
+ model = json.loads(body)
+ except Exception:
+ self.log.debug("Bad JSON: %r", body)
+ self.log.error("Couldn't parse JSON", exc_info=True)
+ raise web.HTTPError(400, u'Invalid JSON in body of request')
+ return model
+
+
class AuthenticatedFileHandler(IPythonHandler, web.StaticFileHandler):
"""static files should only be accessible when logged in"""
@web.authenticated
def get(self, path):
+ if os.path.splitext(path)[1] == '.ipynb':
+ name = os.path.basename(path)
+ self.set_header('Content-Type', 'application/json')
+ self.set_header('Content-Disposition','attachment; filename="%s"' % name)
+
return web.StaticFileHandler.get(self, path)
+
+ def validate_absolute_path(self, root, absolute_path):
+ """Validate and return the absolute path.
+
+ Requires tornado 3.1
+
+ Adding to tornado's own handling, forbids the serving of hidden files.
+ """
+ abs_path = super(AuthenticatedFileHandler, self).validate_absolute_path(root, absolute_path)
+ abs_root = os.path.abspath(root)
+ self.forbid_hidden(abs_root, abs_path)
+ return abs_path
+
+ def forbid_hidden(self, absolute_root, absolute_path):
+ """Raise 403 if a file is hidden or contained in a hidden directory.
+
+ Hidden is determined by either name starting with '.'
+ or the UF_HIDDEN flag as reported by stat
+ """
+ inside_root = absolute_path[len(absolute_root):]
+ if any(part.startswith('.') for part in inside_root.split(os.sep)):
+ raise web.HTTPError(403)
+
+ # check UF_HIDDEN on any location up to root
+ path = absolute_path
+ while path and path.startswith(absolute_root) and path != absolute_root:
+ st = os.stat(path)
+ if getattr(st, 'st_flags', 0) & UF_HIDDEN:
+ raise web.HTTPError(403)
+ path = os.path.dirname(path)
+
+ return absolute_path
+
+
+def json_errors(method):
+ """Decorate methods with this to return GitHub style JSON errors.
+
+ This should be used on any JSON API on any handler method that can raise HTTPErrors.
+
+ This will grab the latest HTTPError exception using sys.exc_info
+ and then:
+
+ 1. Set the HTTP status code based on the HTTPError
+ 2. Create and return a JSON body with a message field describing
+ the error in a human readable form.
+ """
+ @functools.wraps(method)
+ def wrapper(self, *args, **kwargs):
+ try:
+ result = method(self, *args, **kwargs)
+ except web.HTTPError as e:
+ status = e.status_code
+ message = e.log_message
+ self.set_status(e.status_code)
+ self.finish(json.dumps(dict(message=message)))
+ except Exception:
+ self.log.error("Unhandled error in API request", exc_info=True)
+ status = 500
+ message = "Unknown server error"
+ t, value, tb = sys.exc_info()
+ self.set_status(status)
+ tb_text = ''.join(traceback.format_exception(t, value, tb))
+ reply = dict(message=message, traceback=tb_text)
+ self.finish(json.dumps(reply))
+ else:
+ return result
+ return wrapper
+
#-----------------------------------------------------------------------------
@@ -266,7 +367,7 @@ def initialize(self, path, default_filename=None):
if isinstance(path, basestring):
path = [path]
self.roots = tuple(
- os.path.abspath(os.path.expanduser(p)) + os.path.sep for p in path
+ os.path.abspath(os.path.expanduser(p)) + os.sep for p in path
)
self.default_filename = default_filename
@@ -284,7 +385,7 @@ def locate_file(cls, path, roots):
# os.path.abspath strips a trailing /
# it needs to be temporarily added back for requests to root/
- if not (abspath + os.path.sep).startswith(roots):
+ if not (abspath + os.sep).startswith(roots):
raise HTTPError(403, "%s is not in root static directory", path)
cls._static_paths[path] = abspath
@@ -339,7 +440,7 @@ def get(self, path, include_body=True):
if if_since >= modified:
self.set_status(304)
return
-
+
with open(abspath, "rb") as file:
data = file.read()
hasher = hashlib.sha1()
@@ -369,7 +470,7 @@ def get_version(cls, settings, path):
if isinstance(static_paths, basestring):
static_paths = [static_paths]
roots = tuple(
- os.path.abspath(os.path.expanduser(p)) + os.path.sep for p in static_paths
+ os.path.abspath(os.path.expanduser(p)) + os.sep for p in static_paths
)
try:
@@ -403,13 +504,26 @@ def parse_url_path(self, url_path):
``static_url_prefix`` removed. The return value should be
filesystem path relative to ``static_path``.
"""
- if os.path.sep != "/":
- url_path = url_path.replace("/", os.path.sep)
+ if os.sep != "/":
+ url_path = url_path.replace("/", os.sep)
return url_path
+class TrailingSlashHandler(web.RequestHandler):
+ """Simple redirect handler that strips trailing slashes
+
+ This should be the first, highest priority handler.
+ """
+
+ SUPPORTED_METHODS = ['GET']
+
+ def get(self):
+ self.redirect(self.request.uri.rstrip('/'))
+
#-----------------------------------------------------------------------------
# URL to handler mappings
#-----------------------------------------------------------------------------
-default_handlers = []
+default_handlers = [
+ (r".*/", TrailingSlashHandler)
+]
diff --git a/IPython/html/notebook/handlers.py b/IPython/html/notebook/handlers.py
--- a/IPython/html/notebook/handlers.py
+++ b/IPython/html/notebook/handlers.py
@@ -17,75 +17,67 @@
#-----------------------------------------------------------------------------
import os
+import json
+
from tornado import web
HTTPError = web.HTTPError
from ..base.handlers import IPythonHandler
-from ..utils import url_path_join
+from ..services.notebooks.handlers import _notebook_path_regex, _path_regex
+from ..utils import url_path_join, url_escape, url_unescape
+from urllib import quote
#-----------------------------------------------------------------------------
# Handlers
#-----------------------------------------------------------------------------
-class NewHandler(IPythonHandler):
-
- @web.authenticated
- def get(self):
- notebook_id = self.notebook_manager.new_notebook()
- self.redirect(url_path_join(self.base_project_url, notebook_id))
-
-
-class NamedNotebookHandler(IPythonHandler):
+class NotebookHandler(IPythonHandler):
@web.authenticated
- def get(self, notebook_id):
+ def get(self, path='', name=None):
+ """get renders the notebook template if a name is given, or
+ redirects to the '/files/' handler if the name is not given."""
+ path = path.strip('/')
nbm = self.notebook_manager
- if not nbm.notebook_exists(notebook_id):
- raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
+ if name is None:
+ raise web.HTTPError(500, "This shouldn't be accessible: %s" % self.request.uri)
+
+ # a .ipynb filename was given
+ if not nbm.notebook_exists(name, path):
+ raise web.HTTPError(404, u'Notebook does not exist: %s/%s' % (path, name))
+ name = url_escape(name)
+ path = url_escape(path)
self.write(self.render_template('notebook.html',
- project=self.project,
- notebook_id=notebook_id,
+ project=self.project_dir,
+ notebook_path=path,
+ notebook_name=name,
kill_kernel=False,
mathjax_url=self.mathjax_url,
)
)
-
class NotebookRedirectHandler(IPythonHandler):
-
- @web.authenticated
- def get(self, notebook_name):
- # strip trailing .ipynb:
- notebook_name = os.path.splitext(notebook_name)[0]
- notebook_id = self.notebook_manager.rev_mapping.get(notebook_name, '')
- if notebook_id:
- url = url_path_join(self.settings.get('base_project_url', '/'), notebook_id)
- return self.redirect(url)
+ def get(self, path=''):
+ nbm = self.notebook_manager
+ if nbm.path_exists(path):
+ # it's a *directory*, redirect to /tree
+ url = url_path_join(self.base_project_url, 'tree', path)
else:
- raise HTTPError(404)
-
-
-class NotebookCopyHandler(IPythonHandler):
-
- @web.authenticated
- def get(self, notebook_id):
- notebook_id = self.notebook_manager.copy_notebook(notebook_id)
- self.redirect(url_path_join(self.base_project_url, notebook_id))
-
+ # otherwise, redirect to /files
+ # TODO: This should check if it's actually a file
+ url = url_path_join(self.base_project_url, 'files', path)
+ url = url_escape(url)
+ self.log.debug("Redirecting %s to %s", self.request.path, url)
+ self.redirect(url)
#-----------------------------------------------------------------------------
# URL to handler mappings
#-----------------------------------------------------------------------------
-_notebook_id_regex = r"(?P<notebook_id>\w+-\w+-\w+-\w+-\w+)"
-_notebook_name_regex = r"(?P<notebook_name>.+\.ipynb)"
-
default_handlers = [
- (r"/new", NewHandler),
- (r"/%s" % _notebook_id_regex, NamedNotebookHandler),
- (r"/%s" % _notebook_name_regex, NotebookRedirectHandler),
- (r"/%s/copy" % _notebook_id_regex, NotebookCopyHandler),
-
+ (r"/notebooks%s" % _notebook_path_regex, NotebookHandler),
+ (r"/notebooks%s" % _path_regex, NotebookRedirectHandler),
]
+
diff --git a/IPython/html/notebookapp.py b/IPython/html/notebookapp.py
--- a/IPython/html/notebookapp.py
+++ b/IPython/html/notebookapp.py
@@ -65,6 +65,7 @@
from .services.notebooks.nbmanager import NotebookManager
from .services.notebooks.filenbmanager import FileNotebookManager
from .services.clusters.clustermanager import ClusterManager
+from .services.sessions.sessionmanager import SessionManager
from .base.handlers import AuthenticatedFileHandler, FileFindHandler
@@ -127,19 +128,19 @@ def load_handlers(name):
class NotebookWebApplication(web.Application):
def __init__(self, ipython_app, kernel_manager, notebook_manager,
- cluster_manager, log,
- base_project_url, settings_overrides):
+ cluster_manager, session_manager, log, base_project_url,
+ settings_overrides):
settings = self.init_settings(
ipython_app, kernel_manager, notebook_manager, cluster_manager,
- log, base_project_url, settings_overrides)
+ session_manager, log, base_project_url, settings_overrides)
handlers = self.init_handlers(settings)
super(NotebookWebApplication, self).__init__(handlers, **settings)
def init_settings(self, ipython_app, kernel_manager, notebook_manager,
- cluster_manager, log,
- base_project_url, settings_overrides):
+ cluster_manager, session_manager, log, base_project_url,
+ settings_overrides):
# Python < 2.6.5 doesn't accept unicode keys in f(**kwargs), and
# base_project_url will always be unicode, which will in turn
# make the patterns unicode, and ultimately result in unicode
@@ -168,7 +169,8 @@ def init_settings(self, ipython_app, kernel_manager, notebook_manager,
kernel_manager=kernel_manager,
notebook_manager=notebook_manager,
cluster_manager=cluster_manager,
-
+ session_manager=session_manager,
+
# IPython stuff
nbextensions_path = ipython_app.nbextensions_path,
mathjax_url=ipython_app.mathjax_url,
@@ -192,6 +194,7 @@ def init_handlers(self, settings):
handlers.extend(load_handlers('services.kernels.handlers'))
handlers.extend(load_handlers('services.notebooks.handlers'))
handlers.extend(load_handlers('services.clusters.handlers'))
+ handlers.extend(load_handlers('services.sessions.handlers'))
handlers.extend([
(r"/files/(.*)", AuthenticatedFileHandler, {'path' : settings['notebook_manager'].notebook_dir}),
(r"/nbextensions/(.*)", FileFindHandler, {'path' : settings['nbextensions_path']}),
@@ -497,13 +500,16 @@ def parse_command_line(self, argv=None):
super(NotebookApp, self).parse_command_line(argv)
if self.extra_args:
- f = os.path.abspath(self.extra_args[0])
+ arg0 = self.extra_args[0]
+ f = os.path.abspath(arg0)
+ self.argv.remove(arg0)
+ if not os.path.exists(f):
+ self.log.critical("No such file or directory: %s", f)
+ self.exit(1)
if os.path.isdir(f):
- nbdir = f
- else:
+ self.config.FileNotebookManager.notebook_dir = f
+ elif os.path.isfile(f):
self.file_to_run = f
- nbdir = os.path.dirname(f)
- self.config.NotebookManager.notebook_dir = nbdir
def init_kernel_argv(self):
"""construct the kernel arguments"""
@@ -523,7 +529,7 @@ def init_configurables(self):
)
kls = import_item(self.notebook_manager_class)
self.notebook_manager = kls(parent=self, log=self.log)
- self.notebook_manager.load_notebook_names()
+ self.session_manager = SessionManager(parent=self, log=self.log)
self.cluster_manager = ClusterManager(parent=self, log=self.log)
self.cluster_manager.update_profiles()
@@ -535,14 +541,17 @@ def init_logging(self):
# hook up tornado 3's loggers to our app handlers
for name in ('access', 'application', 'general'):
- logging.getLogger('tornado.%s' % name).handlers = self.log.handlers
+ logger = logging.getLogger('tornado.%s' % name)
+ logger.propagate = False
+ logger.setLevel(self.log.level)
+ logger.handlers = self.log.handlers
def init_webapp(self):
"""initialize tornado webapp and httpserver"""
self.web_app = NotebookWebApplication(
- self, self.kernel_manager, self.notebook_manager,
- self.cluster_manager, self.log,
- self.base_project_url, self.webapp_settings,
+ self, self.kernel_manager, self.notebook_manager,
+ self.cluster_manager, self.session_manager,
+ self.log, self.base_project_url, self.webapp_settings
)
if self.certfile:
ssl_options = dict(certfile=self.certfile)
@@ -726,12 +735,22 @@ def start(self):
except webbrowser.Error as e:
self.log.warn('No web browser found: %s.' % e)
browser = None
-
- if self.file_to_run:
- name, _ = os.path.splitext(os.path.basename(self.file_to_run))
- url = self.notebook_manager.rev_mapping.get(name, '')
+
+ nbdir = os.path.abspath(self.notebook_manager.notebook_dir)
+ f = self.file_to_run
+ if f and f.startswith(nbdir):
+ f = f[len(nbdir):]
+ else:
+ self.log.warn(
+ "Probably won't be able to open notebook %s "
+ "because it is not in notebook_dir %s",
+ f, nbdir,
+ )
+
+ if os.path.isfile(self.file_to_run):
+ url = url_path_join('notebooks', f)
else:
- url = ''
+ url = url_path_join('tree', f)
if browser:
b = lambda : browser.open("%s://%s:%i%s%s" % (proto, ip,
self.port, self.base_project_url, url), new=2)
diff --git a/IPython/html/services/kernels/handlers.py b/IPython/html/services/kernels/handlers.py
--- a/IPython/html/services/kernels/handlers.py
+++ b/IPython/html/services/kernels/handlers.py
@@ -22,8 +22,9 @@
from zmq.utils import jsonapi
from IPython.utils.jsonutil import date_default
+from IPython.html.utils import url_path_join, url_escape
-from ...base.handlers import IPythonHandler
+from ...base.handlers import IPythonHandler, json_errors
from ...base.zmqhandlers import AuthenticatedZMQStreamHandler
#-----------------------------------------------------------------------------
@@ -34,26 +35,37 @@
class MainKernelHandler(IPythonHandler):
@web.authenticated
+ @json_errors
def get(self):
km = self.kernel_manager
- self.finish(jsonapi.dumps(km.list_kernel_ids()))
+ self.finish(jsonapi.dumps(km.list_kernels(self.ws_url)))
@web.authenticated
+ @json_errors
def post(self):
km = self.kernel_manager
- nbm = self.notebook_manager
- notebook_id = self.get_argument('notebook', default=None)
- kernel_id = km.start_kernel(notebook_id, cwd=nbm.notebook_dir)
- data = {'ws_url':self.ws_url,'kernel_id':kernel_id}
- self.set_header('Location', '{0}kernels/{1}'.format(self.base_kernel_url, kernel_id))
- self.finish(jsonapi.dumps(data))
+ kernel_id = km.start_kernel()
+ model = km.kernel_model(kernel_id, self.ws_url)
+ location = url_path_join(self.base_kernel_url, 'api', 'kernels', kernel_id)
+ self.set_header('Location', url_escape(location))
+ self.set_status(201)
+ self.finish(jsonapi.dumps(model))
class KernelHandler(IPythonHandler):
- SUPPORTED_METHODS = ('DELETE')
+ SUPPORTED_METHODS = ('DELETE', 'GET')
@web.authenticated
+ @json_errors
+ def get(self, kernel_id):
+ km = self.kernel_manager
+ km._check_kernel_id(kernel_id)
+ model = km.kernel_model(kernel_id, self.ws_url)
+ self.finish(jsonapi.dumps(model))
+
+ @web.authenticated
+ @json_errors
def delete(self, kernel_id):
km = self.kernel_manager
km.shutdown_kernel(kernel_id)
@@ -64,6 +76,7 @@ def delete(self, kernel_id):
class KernelActionHandler(IPythonHandler):
@web.authenticated
+ @json_errors
def post(self, kernel_id, action):
km = self.kernel_manager
if action == 'interrupt':
@@ -71,9 +84,9 @@ def post(self, kernel_id, action):
self.set_status(204)
if action == 'restart':
km.restart_kernel(kernel_id)
- data = {'ws_url':self.ws_url, 'kernel_id':kernel_id}
- self.set_header('Location', '{0}kernels/{1}'.format(self.base_kernel_url, kernel_id))
- self.write(jsonapi.dumps(data))
+ model = km.kernel_model(kernel_id, self.ws_url)
+ self.set_header('Location', '{0}api/kernels/{1}'.format(self.base_kernel_url, kernel_id))
+ self.write(jsonapi.dumps(model))
self.finish()
@@ -173,10 +186,10 @@ class StdinHandler(ZMQChannelHandler):
_kernel_action_regex = r"(?P<action>restart|interrupt)"
default_handlers = [
- (r"/kernels", MainKernelHandler),
- (r"/kernels/%s" % _kernel_id_regex, KernelHandler),
- (r"/kernels/%s/%s" % (_kernel_id_regex, _kernel_action_regex), KernelActionHandler),
- (r"/kernels/%s/iopub" % _kernel_id_regex, IOPubHandler),
- (r"/kernels/%s/shell" % _kernel_id_regex, ShellHandler),
- (r"/kernels/%s/stdin" % _kernel_id_regex, StdinHandler)
+ (r"/api/kernels", MainKernelHandler),
+ (r"/api/kernels/%s" % _kernel_id_regex, KernelHandler),
+ (r"/api/kernels/%s/%s" % (_kernel_id_regex, _kernel_action_regex), KernelActionHandler),
+ (r"/api/kernels/%s/iopub" % _kernel_id_regex, IOPubHandler),
+ (r"/api/kernels/%s/shell" % _kernel_id_regex, ShellHandler),
+ (r"/api/kernels/%s/stdin" % _kernel_id_regex, StdinHandler)
]
diff --git a/IPython/html/services/kernels/kernelmanager.py b/IPython/html/services/kernels/kernelmanager.py
--- a/IPython/html/services/kernels/kernelmanager.py
+++ b/IPython/html/services/kernels/kernelmanager.py
@@ -35,56 +35,29 @@ def _kernel_manager_class_default(self):
return "IPython.kernel.ioloop.IOLoopKernelManager"
kernel_argv = List(Unicode)
-
- _notebook_mapping = Dict()
#-------------------------------------------------------------------------
# Methods for managing kernels and sessions
#-------------------------------------------------------------------------
- def kernel_for_notebook(self, notebook_id):
- """Return the kernel_id for a notebook_id or None."""
- return self._notebook_mapping.get(notebook_id)
-
- def set_kernel_for_notebook(self, notebook_id, kernel_id):
- """Associate a notebook with a kernel."""
- if notebook_id is not None:
- self._notebook_mapping[notebook_id] = kernel_id
-
- def notebook_for_kernel(self, kernel_id):
- """Return the notebook_id for a kernel_id or None."""
- for notebook_id, kid in self._notebook_mapping.iteritems():
- if kernel_id == kid:
- return notebook_id
- return None
-
- def delete_mapping_for_kernel(self, kernel_id):
- """Remove the kernel/notebook mapping for kernel_id."""
- notebook_id = self.notebook_for_kernel(kernel_id)
- if notebook_id is not None:
- del self._notebook_mapping[notebook_id]
-
def _handle_kernel_died(self, kernel_id):
"""notice that a kernel died"""
self.log.warn("Kernel %s died, removing from map.", kernel_id)
- self.delete_mapping_for_kernel(kernel_id)
self.remove_kernel(kernel_id)
- def start_kernel(self, notebook_id=None, **kwargs):
- """Start a kernel for a notebook an return its kernel_id.
+ def start_kernel(self, kernel_id=None, **kwargs):
+ """Start a kernel for a session an return its kernel_id.
Parameters
----------
- notebook_id : uuid
- The uuid of the notebook to associate the new kernel with. If this
- is not None, this kernel will be persistent whenever the notebook
- requests a kernel.
+ kernel_id : uuid
+ The uuid to associate the new kernel with. If this
+ is not None, this kernel will be persistent whenever it is
+ requested.
"""
- kernel_id = self.kernel_for_notebook(notebook_id)
if kernel_id is None:
kwargs['extra_arguments'] = self.kernel_argv
kernel_id = super(MappingKernelManager, self).start_kernel(**kwargs)
- self.set_kernel_for_notebook(notebook_id, kernel_id)
self.log.info("Kernel started: %s" % kernel_id)
self.log.debug("Kernel args: %r" % kwargs)
# register callback for failed auto-restart
@@ -93,18 +66,33 @@ def start_kernel(self, notebook_id=None, **kwargs):
'dead',
)
else:
+ self._check_kernel_id(kernel_id)
self.log.info("Using existing kernel: %s" % kernel_id)
-
return kernel_id
def shutdown_kernel(self, kernel_id, now=False):
"""Shutdown a kernel by kernel_id"""
+ self._check_kernel_id(kernel_id)
super(MappingKernelManager, self).shutdown_kernel(kernel_id, now=now)
- self.delete_mapping_for_kernel(kernel_id)
+
+ def kernel_model(self, kernel_id, ws_url):
+ """Return a dictionary of kernel information described in the
+ JSON standard model."""
+ self._check_kernel_id(kernel_id)
+ model = {"id":kernel_id, "ws_url": ws_url}
+ return model
+
+ def list_kernels(self, ws_url):
+ """Returns a list of kernel_id's of kernels running."""
+ kernels = []
+ kernel_ids = super(MappingKernelManager, self).list_kernel_ids()
+ for kernel_id in kernel_ids:
+ model = self.kernel_model(kernel_id, ws_url)
+ kernels.append(model)
+ return kernels
# override _check_kernel_id to raise 404 instead of KeyError
def _check_kernel_id(self, kernel_id):
"""Check a that a kernel_id exists and raise 404 if not."""
if kernel_id not in self:
raise web.HTTPError(404, u'Kernel does not exist: %s' % kernel_id)
-
diff --git a/IPython/html/services/notebooks/filenbmanager.py b/IPython/html/services/notebooks/filenbmanager.py
--- a/IPython/html/services/notebooks/filenbmanager.py
+++ b/IPython/html/services/notebooks/filenbmanager.py
@@ -3,6 +3,7 @@
Authors:
* Brian Granger
+* Zach Sailer
"""
#-----------------------------------------------------------------------------
@@ -16,12 +17,11 @@
# Imports
#-----------------------------------------------------------------------------
-import datetime
import io
+import itertools
import os
import glob
import shutil
-from unicodedata import normalize
from tornado import web
@@ -70,290 +70,340 @@ def _checkpoint_dir_changed(self, name, old, new):
os.mkdir(new)
except:
raise TraitError("Couldn't create checkpoint dir %r" % new)
-
- filename_ext = Unicode(u'.ipynb')
- # Map notebook names to notebook_ids
- rev_mapping = Dict()
-
- def get_notebook_names(self):
- """List all notebook names in the notebook dir."""
- names = glob.glob(os.path.join(self.notebook_dir,
- '*' + self.filename_ext))
- names = [normalize('NFC', os.path.splitext(os.path.basename(name))[0])
+ def get_notebook_names(self, path=''):
+ """List all notebook names in the notebook dir and path."""
+ path = path.strip('/')
+ if not os.path.isdir(self.get_os_path(path=path)):
+ raise web.HTTPError(404, 'Directory not found: ' + path)
+ names = glob.glob(self.get_os_path('*'+self.filename_ext, path))
+ names = [os.path.basename(name)
for name in names]
return names
- def list_notebooks(self):
- """List all notebooks in the notebook dir."""
- names = self.get_notebook_names()
-
- data = []
- for name in names:
- if name not in self.rev_mapping:
- notebook_id = self.new_notebook_id(name)
- else:
- notebook_id = self.rev_mapping[name]
- data.append(dict(notebook_id=notebook_id,name=name))
- data = sorted(data, key=lambda item: item['name'])
- return data
-
- def new_notebook_id(self, name):
- """Generate a new notebook_id for a name and store its mappings."""
- notebook_id = super(FileNotebookManager, self).new_notebook_id(name)
- self.rev_mapping[name] = notebook_id
- return notebook_id
-
- def delete_notebook_id(self, notebook_id):
- """Delete a notebook's id in the mapping."""
- name = self.mapping[notebook_id]
- super(FileNotebookManager, self).delete_notebook_id(notebook_id)
- del self.rev_mapping[name]
-
- def notebook_exists(self, notebook_id):
- """Does a notebook exist?"""
- exists = super(FileNotebookManager, self).notebook_exists(notebook_id)
- if not exists:
- return False
- path = self.get_path_by_name(self.mapping[notebook_id])
- return os.path.isfile(path)
-
- def get_name(self, notebook_id):
- """get a notebook name, raising 404 if not found"""
- try:
- name = self.mapping[notebook_id]
- except KeyError:
- raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
+ def increment_filename(self, basename, path='', ext='.ipynb'):
+ """Return a non-used filename of the form basename<int>."""
+ path = path.strip('/')
+ for i in itertools.count():
+ name = u'{basename}{i}{ext}'.format(basename=basename, i=i, ext=ext)
+ os_path = self.get_os_path(name, path)
+ if not os.path.isfile(os_path):
+ break
return name
- def get_path(self, notebook_id):
- """Return a full path to a notebook given its notebook_id."""
- name = self.get_name(notebook_id)
- return self.get_path_by_name(name)
+ def path_exists(self, path):
+ """Does the API-style path (directory) actually exist?
+
+ Parameters
+ ----------
+ path : string
+ The path to check. This is an API path (`/` separated,
+ relative to base notebook-dir).
+
+ Returns
+ -------
+ exists : bool
+ Whether the path is indeed a directory.
+ """
+ path = path.strip('/')
+ os_path = self.get_os_path(path=path)
+ return os.path.isdir(os_path)
+
+ def get_os_path(self, name=None, path=''):
+ """Given a notebook name and a URL path, return its file system
+ path.
+
+ Parameters
+ ----------
+ name : string
+ The name of a notebook file with the .ipynb extension
+ path : string
+ The relative URL path (with '/' as separator) to the named
+ notebook.
- def get_path_by_name(self, name):
- """Return a full path to a notebook given its name."""
- filename = name + self.filename_ext
- path = os.path.join(self.notebook_dir, filename)
+ Returns
+ -------
+ path : string
+ A file system path that combines notebook_dir (location where
+ server started), the relative path, and the filename with the
+ current operating system's url.
+ """
+ parts = path.strip('/').split('/')
+ parts = [p for p in parts if p != ''] # remove duplicate splits
+ if name is not None:
+ parts.append(name)
+ path = os.path.join(self.notebook_dir, *parts)
return path
- def read_notebook_object_from_path(self, path):
- """read a notebook object from a path"""
- info = os.stat(path)
+ def notebook_exists(self, name, path=''):
+ """Returns a True if the notebook exists. Else, returns False.
+
+ Parameters
+ ----------
+ name : string
+ The name of the notebook you are checking.
+ path : string
+ The relative path to the notebook (with '/' as separator)
+
+ Returns
+ -------
+ bool
+ """
+ path = path.strip('/')
+ nbpath = self.get_os_path(name, path=path)
+ return os.path.isfile(nbpath)
+
+ def list_notebooks(self, path):
+ """Returns a list of dictionaries that are the standard model
+ for all notebooks in the relative 'path'.
+
+ Parameters
+ ----------
+ path : str
+ the URL path that describes the relative path for the
+ listed notebooks
+
+ Returns
+ -------
+ notebooks : list of dicts
+ a list of the notebook models without 'content'
+ """
+ path = path.strip('/')
+ notebook_names = self.get_notebook_names(path)
+ notebooks = []
+ for name in notebook_names:
+ model = self.get_notebook_model(name, path, content=False)
+ notebooks.append(model)
+ notebooks = sorted(notebooks, key=lambda item: item['name'])
+ return notebooks
+
+ def get_notebook_model(self, name, path='', content=True):
+ """ Takes a path and name for a notebook and returns it's model
+
+ Parameters
+ ----------
+ name : str
+ the name of the notebook
+ path : str
+ the URL path that describes the relative path for
+ the notebook
+
+ Returns
+ -------
+ model : dict
+ the notebook model. If contents=True, returns the 'contents'
+ dict in the model as well.
+ """
+ path = path.strip('/')
+ if not self.notebook_exists(name=name, path=path):
+ raise web.HTTPError(404, u'Notebook does not exist: %s' % name)
+ os_path = self.get_os_path(name, path)
+ info = os.stat(os_path)
last_modified = tz.utcfromtimestamp(info.st_mtime)
- with open(path,'r') as f:
- s = f.read()
- try:
- # v1 and v2 and json in the .ipynb files.
- nb = current.reads(s, u'json')
- except ValueError as e:
- msg = u"Unreadable Notebook: %s" % e
- raise web.HTTPError(400, msg, reason=msg)
- return last_modified, nb
-
- def read_notebook_object(self, notebook_id):
- """Get the Notebook representation of a notebook by notebook_id."""
- path = self.get_path(notebook_id)
- if not os.path.isfile(path):
- raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
- last_modified, nb = self.read_notebook_object_from_path(path)
- # Always use the filename as the notebook name.
- # Eventually we will get rid of the notebook name in the metadata
- # but for now, that name is just an empty string. Until the notebooks
- # web service knows about names in URLs we still pass the name
- # back to the web app using the metadata though.
- nb.metadata.name = os.path.splitext(os.path.basename(path))[0]
- return last_modified, nb
-
- def write_notebook_object(self, nb, notebook_id=None):
- """Save an existing notebook object by notebook_id."""
- try:
- new_name = normalize('NFC', nb.metadata.name)
- except AttributeError:
- raise web.HTTPError(400, u'Missing notebook name')
+ created = tz.utcfromtimestamp(info.st_ctime)
+ # Create the notebook model.
+ model ={}
+ model['name'] = name
+ model['path'] = path
+ model['last_modified'] = last_modified
+ model['created'] = created
+ if content is True:
+ with io.open(os_path, 'r', encoding='utf-8') as f:
+ try:
+ nb = current.read(f, u'json')
+ except Exception as e:
+ raise web.HTTPError(400, u"Unreadable Notebook: %s %s" % (os_path, e))
+ model['content'] = nb
+ return model
- if notebook_id is None:
- notebook_id = self.new_notebook_id(new_name)
+ def save_notebook_model(self, model, name='', path=''):
+ """Save the notebook model and return the model with no content."""
+ path = path.strip('/')
- if notebook_id not in self.mapping:
- raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
+ if 'content' not in model:
+ raise web.HTTPError(400, u'No notebook JSON data provided')
- old_name = self.mapping[notebook_id]
- old_checkpoints = self.list_checkpoints(notebook_id)
- path = self.get_path_by_name(new_name)
+ new_path = model.get('path', path).strip('/')
+ new_name = model.get('name', name)
- # Right before we save the notebook, we write an empty string as the
- # notebook name in the metadata. This is to prepare for removing
- # this attribute entirely post 1.0. The web app still uses the metadata
- # name for now.
- nb.metadata.name = u''
+ if path != new_path or name != new_name:
+ self.rename_notebook(name, path, new_name, new_path)
+ # Save the notebook file
+ os_path = self.get_os_path(new_name, new_path)
+ nb = current.to_notebook_json(model['content'])
+ if 'name' in nb['metadata']:
+ nb['metadata']['name'] = u''
try:
- self.log.debug("Autosaving notebook %s", path)
- with open(path,'w') as f:
+ self.log.debug("Autosaving notebook %s", os_path)
+ with io.open(os_path, 'w', encoding='utf-8') as f:
current.write(nb, f, u'json')
except Exception as e:
- raise web.HTTPError(400, u'Unexpected error while autosaving notebook: %s' % e)
+ raise web.HTTPError(400, u'Unexpected error while autosaving notebook: %s %s' % (os_path, e))
- # save .py script as well
+ # Save .py script as well
if self.save_script:
- pypath = os.path.splitext(path)[0] + '.py'
- self.log.debug("Writing script %s", pypath)
+ py_path = os.path.splitext(os_path)[0] + '.py'
+ self.log.debug("Writing script %s", py_path)
try:
- with io.open(pypath,'w', encoding='utf-8') as f:
- current.write(nb, f, u'py')
+ with io.open(py_path, 'w', encoding='utf-8') as f:
+ current.write(model, f, u'py')
except Exception as e:
- raise web.HTTPError(400, u'Unexpected error while saving notebook as script: %s' % e)
-
- # remove old files if the name changed
- if old_name != new_name:
- # update mapping
- self.mapping[notebook_id] = new_name
- self.rev_mapping[new_name] = notebook_id
- del self.rev_mapping[old_name]
-
- # remove renamed original, if it exists
- old_path = self.get_path_by_name(old_name)
- if os.path.isfile(old_path):
- self.log.debug("unlinking notebook %s", old_path)
- os.unlink(old_path)
-
- # cleanup old script, if it exists
- if self.save_script:
- old_pypath = os.path.splitext(old_path)[0] + '.py'
- if os.path.isfile(old_pypath):
- self.log.debug("unlinking script %s", old_pypath)
- os.unlink(old_pypath)
-
- # rename checkpoints to follow file
- for cp in old_checkpoints:
- checkpoint_id = cp['checkpoint_id']
- old_cp_path = self.get_checkpoint_path_by_name(old_name, checkpoint_id)
- new_cp_path = self.get_checkpoint_path_by_name(new_name, checkpoint_id)
- if os.path.isfile(old_cp_path):
- self.log.debug("renaming checkpoint %s -> %s", old_cp_path, new_cp_path)
- os.rename(old_cp_path, new_cp_path)
-
- return notebook_id
+ raise web.HTTPError(400, u'Unexpected error while saving notebook as script: %s %s' % (py_path, e))
+
+ model = self.get_notebook_model(new_name, new_path, content=False)
+ return model
- def delete_notebook(self, notebook_id):
- """Delete notebook by notebook_id."""
- nb_path = self.get_path(notebook_id)
- if not os.path.isfile(nb_path):
- raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
+ def update_notebook_model(self, model, name, path=''):
+ """Update the notebook's path and/or name"""
+ path = path.strip('/')
+ new_name = model.get('name', name)
+ new_path = model.get('path', path).strip('/')
+ if path != new_path or name != new_name:
+ self.rename_notebook(name, path, new_name, new_path)
+ model = self.get_notebook_model(new_name, new_path, content=False)
+ return model
+
+ def delete_notebook_model(self, name, path=''):
+ """Delete notebook by name and path."""
+ path = path.strip('/')
+ os_path = self.get_os_path(name, path)
+ if not os.path.isfile(os_path):
+ raise web.HTTPError(404, u'Notebook does not exist: %s' % os_path)
# clear checkpoints
- for checkpoint in self.list_checkpoints(notebook_id):
- checkpoint_id = checkpoint['checkpoint_id']
- path = self.get_checkpoint_path(notebook_id, checkpoint_id)
- self.log.debug(path)
- if os.path.isfile(path):
- self.log.debug("unlinking checkpoint %s", path)
- os.unlink(path)
+ for checkpoint in self.list_checkpoints(name, path):
+ checkpoint_id = checkpoint['id']
+ cp_path = self.get_checkpoint_path(checkpoint_id, name, path)
+ if os.path.isfile(cp_path):
+ self.log.debug("Unlinking checkpoint %s", cp_path)
+ os.unlink(cp_path)
- self.log.debug("unlinking notebook %s", nb_path)
- os.unlink(nb_path)
- self.delete_notebook_id(notebook_id)
+ self.log.debug("Unlinking notebook %s", os_path)
+ os.unlink(os_path)
- def increment_filename(self, basename):
- """Return a non-used filename of the form basename<int>.
+ def rename_notebook(self, old_name, old_path, new_name, new_path):
+ """Rename a notebook."""
+ old_path = old_path.strip('/')
+ new_path = new_path.strip('/')
+ if new_name == old_name and new_path == old_path:
+ return
- This searches through the filenames (basename0, basename1, ...)
- until is find one that is not already being used. It is used to
- create Untitled and Copy names that are unique.
- """
- i = 0
- while True:
- name = u'%s%i' % (basename,i)
- path = self.get_path_by_name(name)
- if not os.path.isfile(path):
- break
- else:
- i = i+1
- return name
-
+ new_os_path = self.get_os_path(new_name, new_path)
+ old_os_path = self.get_os_path(old_name, old_path)
+
+ # Should we proceed with the move?
+ if os.path.isfile(new_os_path):
+ raise web.HTTPError(409, u'Notebook with name already exists: %s' % new_os_path)
+ if self.save_script:
+ old_py_path = os.path.splitext(old_os_path)[0] + '.py'
+ new_py_path = os.path.splitext(new_os_path)[0] + '.py'
+ if os.path.isfile(new_py_path):
+ raise web.HTTPError(409, u'Python script with name already exists: %s' % new_py_path)
+
+ # Move the notebook file
+ try:
+ os.rename(old_os_path, new_os_path)
+ except Exception as e:
+ raise web.HTTPError(500, u'Unknown error renaming notebook: %s %s' % (old_os_path, e))
+
+ # Move the checkpoints
+ old_checkpoints = self.list_checkpoints(old_name, old_path)
+ for cp in old_checkpoints:
+ checkpoint_id = cp['id']
+ old_cp_path = self.get_checkpoint_path(checkpoint_id, old_name, old_path)
+ new_cp_path = self.get_checkpoint_path(checkpoint_id, new_name, new_path)
+ if os.path.isfile(old_cp_path):
+ self.log.debug("Renaming checkpoint %s -> %s", old_cp_path, new_cp_path)
+ os.rename(old_cp_path, new_cp_path)
+
+ # Move the .py script
+ if self.save_script:
+ os.rename(old_py_path, new_py_path)
+
# Checkpoint-related utilities
- def get_checkpoint_path_by_name(self, name, checkpoint_id):
- """Return a full path to a notebook checkpoint, given its name and checkpoint id."""
+ def get_checkpoint_path(self, checkpoint_id, name, path=''):
+ """find the path to a checkpoint"""
+ path = path.strip('/')
filename = u"{name}-{checkpoint_id}{ext}".format(
name=name,
checkpoint_id=checkpoint_id,
ext=self.filename_ext,
)
- path = os.path.join(self.checkpoint_dir, filename)
- return path
-
- def get_checkpoint_path(self, notebook_id, checkpoint_id):
- """find the path to a checkpoint"""
- name = self.get_name(notebook_id)
- return self.get_checkpoint_path_by_name(name, checkpoint_id)
-
- def get_checkpoint_info(self, notebook_id, checkpoint_id):
+ cp_path = os.path.join(path, self.checkpoint_dir, filename)
+ return cp_path
+
+ def get_checkpoint_model(self, checkpoint_id, name, path=''):
"""construct the info dict for a given checkpoint"""
- path = self.get_checkpoint_path(notebook_id, checkpoint_id)
- stats = os.stat(path)
+ path = path.strip('/')
+ cp_path = self.get_checkpoint_path(checkpoint_id, name, path)
+ stats = os.stat(cp_path)
last_modified = tz.utcfromtimestamp(stats.st_mtime)
info = dict(
- checkpoint_id = checkpoint_id,
+ id = checkpoint_id,
last_modified = last_modified,
)
-
return info
# public checkpoint API
- def create_checkpoint(self, notebook_id):
+ def create_checkpoint(self, name, path=''):
"""Create a checkpoint from the current state of a notebook"""
- nb_path = self.get_path(notebook_id)
+ path = path.strip('/')
+ nb_path = self.get_os_path(name, path)
# only the one checkpoint ID:
checkpoint_id = u"checkpoint"
- cp_path = self.get_checkpoint_path(notebook_id, checkpoint_id)
- self.log.debug("creating checkpoint for notebook %s", notebook_id)
+ cp_path = self.get_checkpoint_path(checkpoint_id, name, path)
+ self.log.debug("creating checkpoint for notebook %s", name)
if not os.path.exists(self.checkpoint_dir):
os.mkdir(self.checkpoint_dir)
shutil.copy2(nb_path, cp_path)
# return the checkpoint info
- return self.get_checkpoint_info(notebook_id, checkpoint_id)
+ return self.get_checkpoint_model(checkpoint_id, name, path)
- def list_checkpoints(self, notebook_id):
+ def list_checkpoints(self, name, path=''):
"""list the checkpoints for a given notebook
This notebook manager currently only supports one checkpoint per notebook.
"""
- checkpoint_id = u"checkpoint"
- path = self.get_checkpoint_path(notebook_id, checkpoint_id)
+ path = path.strip('/')
+ checkpoint_id = "checkpoint"
+ path = self.get_checkpoint_path(checkpoint_id, name, path)
if not os.path.exists(path):
return []
else:
- return [self.get_checkpoint_info(notebook_id, checkpoint_id)]
+ return [self.get_checkpoint_model(checkpoint_id, name, path)]
- def restore_checkpoint(self, notebook_id, checkpoint_id):
+ def restore_checkpoint(self, checkpoint_id, name, path=''):
"""restore a notebook to a checkpointed state"""
- self.log.info("restoring Notebook %s from checkpoint %s", notebook_id, checkpoint_id)
- nb_path = self.get_path(notebook_id)
- cp_path = self.get_checkpoint_path(notebook_id, checkpoint_id)
+ path = path.strip('/')
+ self.log.info("restoring Notebook %s from checkpoint %s", name, checkpoint_id)
+ nb_path = self.get_os_path(name, path)
+ cp_path = self.get_checkpoint_path(checkpoint_id, name, path)
if not os.path.isfile(cp_path):
self.log.debug("checkpoint file does not exist: %s", cp_path)
raise web.HTTPError(404,
- u'Notebook checkpoint does not exist: %s-%s' % (notebook_id, checkpoint_id)
+ u'Notebook checkpoint does not exist: %s-%s' % (name, checkpoint_id)
)
# ensure notebook is readable (never restore from an unreadable notebook)
- last_modified, nb = self.read_notebook_object_from_path(cp_path)
+ with io.open(cp_path, 'r', encoding='utf-8') as f:
+ nb = current.read(f, u'json')
shutil.copy2(cp_path, nb_path)
self.log.debug("copying %s -> %s", cp_path, nb_path)
- def delete_checkpoint(self, notebook_id, checkpoint_id):
+ def delete_checkpoint(self, checkpoint_id, name, path=''):
"""delete a notebook's checkpoint"""
- path = self.get_checkpoint_path(notebook_id, checkpoint_id)
- if not os.path.isfile(path):
+ path = path.strip('/')
+ cp_path = self.get_checkpoint_path(checkpoint_id, name, path)
+ if not os.path.isfile(cp_path):
raise web.HTTPError(404,
- u'Notebook checkpoint does not exist: %s-%s' % (notebook_id, checkpoint_id)
+ u'Notebook checkpoint does not exist: %s%s-%s' % (path, name, checkpoint_id)
)
- self.log.debug("unlinking %s", path)
- os.unlink(path)
+ self.log.debug("unlinking %s", cp_path)
+ os.unlink(cp_path)
def info_string(self):
return "Serving notebooks from local directory: %s" % self.notebook_dir
diff --git a/IPython/html/services/notebooks/handlers.py b/IPython/html/services/notebooks/handlers.py
--- a/IPython/html/services/notebooks/handlers.py
+++ b/IPython/html/services/notebooks/handlers.py
@@ -6,7 +6,7 @@
"""
#-----------------------------------------------------------------------------
-# Copyright (C) 2008-2011 The IPython Development Team
+# Copyright (C) 2011 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
@@ -16,74 +16,193 @@
# Imports
#-----------------------------------------------------------------------------
-from tornado import web
+import json
-from zmq.utils import jsonapi
+from tornado import web
+from IPython.html.utils import url_path_join, url_escape
from IPython.utils.jsonutil import date_default
-from ...base.handlers import IPythonHandler
+from IPython.html.base.handlers import IPythonHandler, json_errors
#-----------------------------------------------------------------------------
# Notebook web service handlers
#-----------------------------------------------------------------------------
-class NotebookRootHandler(IPythonHandler):
- @web.authenticated
- def get(self):
- nbm = self.notebook_manager
- km = self.kernel_manager
- files = nbm.list_notebooks()
- for f in files :
- f['kernel_id'] = km.kernel_for_notebook(f['notebook_id'])
- self.finish(jsonapi.dumps(files))
+class NotebookHandler(IPythonHandler):
- @web.authenticated
- def post(self):
- nbm = self.notebook_manager
- body = self.request.body.strip()
- format = self.get_argument('format', default='json')
- name = self.get_argument('name', default=None)
- if body:
- notebook_id = nbm.save_new_notebook(body, name=name, format=format)
- else:
- notebook_id = nbm.new_notebook()
- self.set_header('Location', '{0}notebooks/{1}'.format(self.base_project_url, notebook_id))
- self.finish(jsonapi.dumps(notebook_id))
+ SUPPORTED_METHODS = (u'GET', u'PUT', u'PATCH', u'POST', u'DELETE')
+ def notebook_location(self, name, path=''):
+ """Return the full URL location of a notebook based.
+
+ Parameters
+ ----------
+ name : unicode
+ The base name of the notebook, such as "foo.ipynb".
+ path : unicode
+ The URL path of the notebook.
+ """
+ return url_escape(url_path_join(
+ self.base_project_url, 'api', 'notebooks', path, name
+ ))
-class NotebookHandler(IPythonHandler):
+ def _finish_model(self, model, location=True):
+ """Finish a JSON request with a model, setting relevant headers, etc."""
+ if location:
+ location = self.notebook_location(model['name'], model['path'])
+ self.set_header('Location', location)
+ self.set_header('Last-Modified', model['last_modified'])
+ self.finish(json.dumps(model, default=date_default))
+
+ @web.authenticated
+ @json_errors
+ def get(self, path='', name=None):
+ """Return a Notebook or list of notebooks.
- SUPPORTED_METHODS = ('GET', 'PUT', 'DELETE')
+ * GET with path and no notebook name lists notebooks in a directory
+ * GET with path and notebook name returns notebook JSON
+ """
+ nbm = self.notebook_manager
+ # Check to see if a notebook name was given
+ if name is None:
+ # List notebooks in 'path'
+ notebooks = nbm.list_notebooks(path)
+ self.finish(json.dumps(notebooks, default=date_default))
+ return
+ # get and return notebook representation
+ model = nbm.get_notebook_model(name, path)
+ self._finish_model(model, location=False)
@web.authenticated
- def get(self, notebook_id):
+ @json_errors
+ def patch(self, path='', name=None):
+ """PATCH renames a notebook without re-uploading content."""
nbm = self.notebook_manager
- format = self.get_argument('format', default='json')
- last_mod, name, data = nbm.get_notebook(notebook_id, format)
+ if name is None:
+ raise web.HTTPError(400, u'Notebook name missing')
+ model = self.get_json_body()
+ if model is None:
+ raise web.HTTPError(400, u'JSON body missing')
+ model = nbm.update_notebook_model(model, name, path)
+ self._finish_model(model)
+
+ def _copy_notebook(self, copy_from, path, copy_to=None):
+ """Copy a notebook in path, optionally specifying the new name.
- if format == u'json':
- self.set_header('Content-Type', 'application/json')
- self.set_header('Content-Disposition','attachment; filename="%s.ipynb"' % name)
- elif format == u'py':
- self.set_header('Content-Type', 'application/x-python')
- self.set_header('Content-Disposition','attachment; filename="%s.py"' % name)
- self.set_header('Last-Modified', last_mod)
- self.finish(data)
+ Only support copying within the same directory.
+ """
+ self.log.info(u"Copying notebook from %s/%s to %s/%s",
+ path, copy_from,
+ path, copy_to or '',
+ )
+ model = self.notebook_manager.copy_notebook(copy_from, copy_to, path)
+ self.set_status(201)
+ self._finish_model(model)
+
+ def _upload_notebook(self, model, path, name=None):
+ """Upload a notebook
+
+ If name specified, create it in path/name.
+ """
+ self.log.info(u"Uploading notebook to %s/%s", path, name or '')
+ if name:
+ model['name'] = name
+
+ model = self.notebook_manager.create_notebook_model(model, path)
+ self.set_status(201)
+ self._finish_model(model)
+
+ def _create_empty_notebook(self, path, name=None):
+ """Create an empty notebook in path
+
+ If name specified, create it in path/name.
+ """
+ self.log.info(u"Creating new notebook in %s/%s", path, name or '')
+ model = {}
+ if name:
+ model['name'] = name
+ model = self.notebook_manager.create_notebook_model(model, path=path)
+ self.set_status(201)
+ self._finish_model(model)
+
+ def _save_notebook(self, model, path, name):
+ """Save an existing notebook."""
+ self.log.info(u"Saving notebook at %s/%s", path, name)
+ model = self.notebook_manager.save_notebook_model(model, name, path)
+ if model['path'] != path.strip('/') or model['name'] != name:
+ # a rename happened, set Location header
+ location = True
+ else:
+ location = False
+ self._finish_model(model, location)
+
+ @web.authenticated
+ @json_errors
+ def post(self, path='', name=None):
+ """Create a new notebook in the specified path.
+
+ POST creates new notebooks. The server always decides on the notebook name.
+
+ POST /api/notebooks/path : new untitled notebook in path
+ If content specified, upload a notebook, otherwise start empty.
+ POST /api/notebooks/path?copy=OtherNotebook.ipynb : new copy of OtherNotebook in path
+ """
+
+ if name is not None:
+ raise web.HTTPError(400, "Only POST to directories. Use PUT for full names.")
+
+ model = self.get_json_body()
+
+ if model is not None:
+ copy_from = model.get('copy_from')
+ if copy_from:
+ if model.get('content'):
+ raise web.HTTPError(400, "Can't upload and copy at the same time.")
+ self._copy_notebook(copy_from, path)
+ else:
+ self._upload_notebook(model, path)
+ else:
+ self._create_empty_notebook(path)
@web.authenticated
- def put(self, notebook_id):
- nbm = self.notebook_manager
- format = self.get_argument('format', default='json')
- name = self.get_argument('name', default=None)
- nbm.save_notebook(notebook_id, self.request.body, name=name, format=format)
- self.set_status(204)
- self.finish()
+ @json_errors
+ def put(self, path='', name=None):
+ """Saves the notebook in the location specified by name and path.
+
+ PUT /api/notebooks/path/Name.ipynb : Save notebook at path/Name.ipynb
+ Notebook structure is specified in `content` key of JSON request body.
+ If content is not specified, create a new empty notebook.
+ PUT /api/notebooks/path/Name.ipynb?copy=OtherNotebook.ipynb : copy OtherNotebook to Name
+
+ POST and PUT are basically the same. The only difference:
+
+ - with POST, server always picks the name, with PUT the requester does
+ """
+ if name is None:
+ raise web.HTTPError(400, "Only PUT to full names. Use POST for directories.")
+
+ model = self.get_json_body()
+ if model:
+ copy_from = model.get('copy_from')
+ if copy_from:
+ if model.get('content'):
+ raise web.HTTPError(400, "Can't upload and copy at the same time.")
+ self._copy_notebook(copy_from, path, name)
+ elif self.notebook_manager.notebook_exists(name, path):
+ self._save_notebook(model, path, name)
+ else:
+ self._upload_notebook(model, path, name)
+ else:
+ self._create_empty_notebook(path, name)
@web.authenticated
- def delete(self, notebook_id):
- self.notebook_manager.delete_notebook(notebook_id)
+ @json_errors
+ def delete(self, path='', name=None):
+ """delete the notebook in the given notebook path"""
+ nbm = self.notebook_manager
+ nbm.delete_notebook_model(name, path)
self.set_status(204)
self.finish()
@@ -93,23 +212,25 @@ class NotebookCheckpointsHandler(IPythonHandler):
SUPPORTED_METHODS = ('GET', 'POST')
@web.authenticated
- def get(self, notebook_id):
+ @json_errors
+ def get(self, path='', name=None):
"""get lists checkpoints for a notebook"""
nbm = self.notebook_manager
- checkpoints = nbm.list_checkpoints(notebook_id)
- data = jsonapi.dumps(checkpoints, default=date_default)
+ checkpoints = nbm.list_checkpoints(name, path)
+ data = json.dumps(checkpoints, default=date_default)
self.finish(data)
@web.authenticated
- def post(self, notebook_id):
+ @json_errors
+ def post(self, path='', name=None):
"""post creates a new checkpoint"""
nbm = self.notebook_manager
- checkpoint = nbm.create_checkpoint(notebook_id)
- data = jsonapi.dumps(checkpoint, default=date_default)
- self.set_header('Location', '{0}notebooks/{1}/checkpoints/{2}'.format(
- self.base_project_url, notebook_id, checkpoint['checkpoint_id']
- ))
-
+ checkpoint = nbm.create_checkpoint(name, path)
+ data = json.dumps(checkpoint, default=date_default)
+ location = url_path_join(self.base_project_url, 'api/notebooks',
+ path, name, 'checkpoints', checkpoint['id'])
+ self.set_header('Location', url_escape(location))
+ self.set_status(201)
self.finish(data)
@@ -118,39 +239,40 @@ class ModifyNotebookCheckpointsHandler(IPythonHandler):
SUPPORTED_METHODS = ('POST', 'DELETE')
@web.authenticated
- def post(self, notebook_id, checkpoint_id):
+ @json_errors
+ def post(self, path, name, checkpoint_id):
"""post restores a notebook from a checkpoint"""
nbm = self.notebook_manager
- nbm.restore_checkpoint(notebook_id, checkpoint_id)
+ nbm.restore_checkpoint(checkpoint_id, name, path)
self.set_status(204)
self.finish()
@web.authenticated
- def delete(self, notebook_id, checkpoint_id):
+ @json_errors
+ def delete(self, path, name, checkpoint_id):
"""delete clears a checkpoint for a given notebook"""
nbm = self.notebook_manager
- nbm.delte_checkpoint(notebook_id, checkpoint_id)
+ nbm.delete_checkpoint(checkpoint_id, name, path)
self.set_status(204)
self.finish()
-
-
+
#-----------------------------------------------------------------------------
# URL to handler mappings
#-----------------------------------------------------------------------------
-_notebook_id_regex = r"(?P<notebook_id>\w+-\w+-\w+-\w+-\w+)"
+_path_regex = r"(?P<path>(?:/.*)*)"
_checkpoint_id_regex = r"(?P<checkpoint_id>[\w-]+)"
+_notebook_name_regex = r"(?P<name>[^/]+\.ipynb)"
+_notebook_path_regex = "%s/%s" % (_path_regex, _notebook_name_regex)
default_handlers = [
- (r"/notebooks", NotebookRootHandler),
- (r"/notebooks/%s" % _notebook_id_regex, NotebookHandler),
- (r"/notebooks/%s/checkpoints" % _notebook_id_regex, NotebookCheckpointsHandler),
- (r"/notebooks/%s/checkpoints/%s" % (_notebook_id_regex, _checkpoint_id_regex),
- ModifyNotebookCheckpointsHandler
- ),
+ (r"/api/notebooks%s/checkpoints" % _notebook_path_regex, NotebookCheckpointsHandler),
+ (r"/api/notebooks%s/checkpoints/%s" % (_notebook_path_regex, _checkpoint_id_regex),
+ ModifyNotebookCheckpointsHandler),
+ (r"/api/notebooks%s" % _notebook_path_regex, NotebookHandler),
+ (r"/api/notebooks%s" % _path_regex, NotebookHandler),
]
-
diff --git a/IPython/html/services/notebooks/nbmanager.py b/IPython/html/services/notebooks/nbmanager.py
--- a/IPython/html/services/notebooks/nbmanager.py
+++ b/IPython/html/services/notebooks/nbmanager.py
@@ -3,6 +3,7 @@
Authors:
* Brian Granger
+* Zach Sailer
"""
#-----------------------------------------------------------------------------
@@ -17,9 +18,6 @@
#-----------------------------------------------------------------------------
import os
-import uuid
-
-from tornado import web
from IPython.config.configurable import LoggingConfigurable
from IPython.nbformat import current
@@ -38,14 +36,33 @@ class NotebookManager(LoggingConfigurable):
# Right now we use this attribute in a number of different places and
# we are going to have to disentangle all of this.
notebook_dir = Unicode(os.getcwdu(), config=True, help="""
- The directory to use for notebooks.
- """)
+ The directory to use for notebooks.
+ """)
+
+ filename_ext = Unicode(u'.ipynb')
+
+ def path_exists(self, path):
+ """Does the API-style path (directory) actually exist?
+
+ Override this method in subclasses.
+
+ Parameters
+ ----------
+ path : string
+ The
+
+ Returns
+ -------
+ exists : bool
+ Whether the path does indeed exist.
+ """
+ raise NotImplementedError
+
def _notebook_dir_changed(self, name, old, new):
- """do a bit of validation of the notebook dir"""
+ """Do a bit of validation of the notebook dir."""
if not os.path.isabs(new):
# If we receive a non-absolute path, make it absolute.
- abs_new = os.path.abspath(new)
- self.notebook_dir = abs_new
+ self.notebook_dir = os.path.abspath(new)
return
if os.path.exists(new) and not os.path.isdir(new):
raise TraitError("notebook dir %r is not a directory" % new)
@@ -56,22 +73,22 @@ def _notebook_dir_changed(self, name, old, new):
except:
raise TraitError("Couldn't create notebook dir %r" % new)
- allowed_formats = List([u'json',u'py'])
-
- # Map notebook_ids to notebook names
- mapping = Dict()
-
- def load_notebook_names(self):
- """Load the notebook names into memory.
+ # Main notebook API
- This should be called once immediately after the notebook manager
- is created to load the existing notebooks into the mapping in
- memory.
+ def increment_filename(self, basename, path=''):
+ """Increment a notebook filename without the .ipynb to make it unique.
+
+ Parameters
+ ----------
+ basename : unicode
+ The name of a notebook without the ``.ipynb`` file extension.
+ path : unicode
+ The URL path of the notebooks directory
"""
- self.list_notebooks()
+ return basename
- def list_notebooks(self):
- """List all notebooks.
+ def list_notebooks(self, path=''):
+ """Return a list of notebook dicts without content.
This returns a list of dicts, each of the form::
@@ -83,147 +100,69 @@ def list_notebooks(self):
"""
raise NotImplementedError('must be implemented in a subclass')
-
- def new_notebook_id(self, name):
- """Generate a new notebook_id for a name and store its mapping."""
- # TODO: the following will give stable urls for notebooks, but unless
- # the notebooks are immediately redirected to their new urls when their
- # filemname changes, nasty inconsistencies result. So for now it's
- # disabled and instead we use a random uuid4() call. But we leave the
- # logic here so that we can later reactivate it, whhen the necessary
- # url redirection code is written.
- #notebook_id = unicode(uuid.uuid5(uuid.NAMESPACE_URL,
- # 'file://'+self.get_path_by_name(name).encode('utf-8')))
-
- notebook_id = unicode(uuid.uuid4())
- self.mapping[notebook_id] = name
- return notebook_id
-
- def delete_notebook_id(self, notebook_id):
- """Delete a notebook's id in the mapping.
-
- This doesn't delete the actual notebook, only its entry in the mapping.
- """
- del self.mapping[notebook_id]
-
- def notebook_exists(self, notebook_id):
- """Does a notebook exist?"""
- return notebook_id in self.mapping
-
- def get_notebook(self, notebook_id, format=u'json'):
- """Get the representation of a notebook in format by notebook_id."""
- format = unicode(format)
- if format not in self.allowed_formats:
- raise web.HTTPError(415, u'Invalid notebook format: %s' % format)
- last_modified, nb = self.read_notebook_object(notebook_id)
- kwargs = {}
- if format == 'json':
- # don't split lines for sending over the wire, because it
- # should match the Python in-memory format.
- kwargs['split_lines'] = False
- data = current.writes(nb, format, **kwargs)
- name = nb.metadata.get('name','notebook')
- return last_modified, name, data
-
- def read_notebook_object(self, notebook_id):
- """Get the object representation of a notebook by notebook_id."""
+ def get_notebook_model(self, name, path='', content=True):
+ """Get the notebook model with or without content."""
raise NotImplementedError('must be implemented in a subclass')
- def save_new_notebook(self, data, name=None, format=u'json'):
- """Save a new notebook and return its notebook_id.
-
- If a name is passed in, it overrides any values in the notebook data
- and the value in the data is updated to use that value.
- """
- if format not in self.allowed_formats:
- raise web.HTTPError(415, u'Invalid notebook format: %s' % format)
-
- try:
- nb = current.reads(data.decode('utf-8'), format)
- except:
- raise web.HTTPError(400, u'Invalid JSON data')
-
- if name is None:
- try:
- name = nb.metadata.name
- except AttributeError:
- raise web.HTTPError(400, u'Missing notebook name')
- nb.metadata.name = name
-
- notebook_id = self.write_notebook_object(nb)
- return notebook_id
-
- def save_notebook(self, notebook_id, data, name=None, format=u'json'):
- """Save an existing notebook by notebook_id."""
- if format not in self.allowed_formats:
- raise web.HTTPError(415, u'Invalid notebook format: %s' % format)
-
- try:
- nb = current.reads(data.decode('utf-8'), format)
- except:
- raise web.HTTPError(400, u'Invalid JSON data')
-
- if name is not None:
- nb.metadata.name = name
- self.write_notebook_object(nb, notebook_id)
-
- def write_notebook_object(self, nb, notebook_id=None):
- """Write a notebook object and return its notebook_id.
-
- If notebook_id is None, this method should create a new notebook_id.
- If notebook_id is not None, this method should check to make sure it
- exists and is valid.
- """
+ def save_notebook_model(self, model, name, path=''):
+ """Save the notebook model and return the model with no content."""
raise NotImplementedError('must be implemented in a subclass')
- def delete_notebook(self, notebook_id):
- """Delete notebook by notebook_id."""
+ def update_notebook_model(self, model, name, path=''):
+ """Update the notebook model and return the model with no content."""
raise NotImplementedError('must be implemented in a subclass')
- def increment_filename(self, name):
- """Increment a filename to make it unique.
+ def delete_notebook_model(self, name, path=''):
+ """Delete notebook by name and path."""
+ raise NotImplementedError('must be implemented in a subclass')
- This exists for notebook stores that must have unique names. When a notebook
- is created or copied this method constructs a unique filename, typically
- by appending an integer to the name.
+ def create_notebook_model(self, model=None, path=''):
+ """Create a new untitled notebook and return its model with no content."""
+ path = path.strip('/')
+ if model is None:
+ model = {}
+ if 'content' not in model:
+ metadata = current.new_metadata(name=u'')
+ model['content'] = current.new_notebook(metadata=metadata)
+ if 'name' not in model:
+ model['name'] = self.increment_filename('Untitled', path)
+
+ model['path'] = path
+ model = self.save_notebook_model(model, model['name'], model['path'])
+ return model
+
+ def copy_notebook(self, from_name, to_name=None, path=''):
+ """Copy an existing notebook and return its new model.
+
+ If to_name not specified, increment `from_name-Copy#.ipynb`.
"""
- return name
-
- def new_notebook(self):
- """Create a new notebook and return its notebook_id."""
- name = self.increment_filename('Untitled')
- metadata = current.new_metadata(name=name)
- nb = current.new_notebook(metadata=metadata)
- notebook_id = self.write_notebook_object(nb)
- return notebook_id
-
- def copy_notebook(self, notebook_id):
- """Copy an existing notebook and return its notebook_id."""
- last_mod, nb = self.read_notebook_object(notebook_id)
- name = nb.metadata.name + '-Copy'
- name = self.increment_filename(name)
- nb.metadata.name = name
- notebook_id = self.write_notebook_object(nb)
- return notebook_id
+ path = path.strip('/')
+ model = self.get_notebook_model(from_name, path)
+ if not to_name:
+ base = os.path.splitext(from_name)[0] + '-Copy'
+ to_name = self.increment_filename(base, path)
+ model['name'] = to_name
+ model = self.save_notebook_model(model, to_name, path)
+ return model
# Checkpoint-related
- def create_checkpoint(self, notebook_id):
+ def create_checkpoint(self, name, path=''):
"""Create a checkpoint of the current state of a notebook
Returns a checkpoint_id for the new checkpoint.
"""
raise NotImplementedError("must be implemented in a subclass")
- def list_checkpoints(self, notebook_id):
+ def list_checkpoints(self, name, path=''):
"""Return a list of checkpoints for a given notebook"""
return []
- def restore_checkpoint(self, notebook_id, checkpoint_id):
+ def restore_checkpoint(self, checkpoint_id, name, path=''):
"""Restore a notebook from one of its checkpoints"""
raise NotImplementedError("must be implemented in a subclass")
- def delete_checkpoint(self, notebook_id, checkpoint_id):
+ def delete_checkpoint(self, checkpoint_id, name, path=''):
"""delete a checkpoint for a notebook"""
raise NotImplementedError("must be implemented in a subclass")
@@ -232,4 +171,3 @@ def log_info(self):
def info_string(self):
return "Serving notebooks"
-
diff --git a/IPython/html/services/sessions/__init__.py b/IPython/html/services/sessions/__init__.py
new file mode 100644
diff --git a/IPython/html/services/sessions/handlers.py b/IPython/html/services/sessions/handlers.py
new file mode 100644
--- /dev/null
+++ b/IPython/html/services/sessions/handlers.py
@@ -0,0 +1,127 @@
+"""Tornado handlers for the sessions web service.
+
+Authors:
+
+* Zach Sailer
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (C) 2013 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
+#-----------------------------------------------------------------------------
+# Imports
+#-----------------------------------------------------------------------------
+
+import json
+
+from tornado import web
+
+from ...base.handlers import IPythonHandler, json_errors
+from IPython.utils.jsonutil import date_default
+from IPython.html.utils import url_path_join, url_escape
+
+#-----------------------------------------------------------------------------
+# Session web service handlers
+#-----------------------------------------------------------------------------
+
+
+class SessionRootHandler(IPythonHandler):
+
+ @web.authenticated
+ @json_errors
+ def get(self):
+ # Return a list of running sessions
+ sm = self.session_manager
+ sessions = sm.list_sessions()
+ self.finish(json.dumps(sessions, default=date_default))
+
+ @web.authenticated
+ @json_errors
+ def post(self):
+ # Creates a new session
+ #(unless a session already exists for the named nb)
+ sm = self.session_manager
+ nbm = self.notebook_manager
+ km = self.kernel_manager
+ model = self.get_json_body()
+ if model is None:
+ raise web.HTTPError(400, "No JSON data provided")
+ try:
+ name = model['notebook']['name']
+ except KeyError:
+ raise web.HTTPError(400, "Missing field in JSON data: name")
+ try:
+ path = model['notebook']['path']
+ except KeyError:
+ raise web.HTTPError(400, "Missing field in JSON data: path")
+ # Check to see if session exists
+ if sm.session_exists(name=name, path=path):
+ model = sm.get_session(name=name, path=path)
+ else:
+ kernel_id = km.start_kernel(cwd=nbm.notebook_dir)
+ model = sm.create_session(name=name, path=path, kernel_id=kernel_id, ws_url=self.ws_url)
+ location = url_path_join(self.base_kernel_url, 'api', 'sessions', model['id'])
+ self.set_header('Location', url_escape(location))
+ self.set_status(201)
+ self.finish(json.dumps(model, default=date_default))
+
+class SessionHandler(IPythonHandler):
+
+ SUPPORTED_METHODS = ('GET', 'PATCH', 'DELETE')
+
+ @web.authenticated
+ @json_errors
+ def get(self, session_id):
+ # Returns the JSON model for a single session
+ sm = self.session_manager
+ model = sm.get_session(session_id=session_id)
+ self.finish(json.dumps(model, default=date_default))
+
+ @web.authenticated
+ @json_errors
+ def patch(self, session_id):
+ # Currently, this handler is strictly for renaming notebooks
+ sm = self.session_manager
+ model = self.get_json_body()
+ if model is None:
+ raise web.HTTPError(400, "No JSON data provided")
+ changes = {}
+ if 'notebook' in model:
+ notebook = model['notebook']
+ if 'name' in notebook:
+ changes['name'] = notebook['name']
+ if 'path' in notebook:
+ changes['path'] = notebook['path']
+
+ sm.update_session(session_id, **changes)
+ model = sm.get_session(session_id=session_id)
+ self.finish(json.dumps(model, default=date_default))
+
+ @web.authenticated
+ @json_errors
+ def delete(self, session_id):
+ # Deletes the session with given session_id
+ sm = self.session_manager
+ km = self.kernel_manager
+ session = sm.get_session(session_id=session_id)
+ sm.delete_session(session_id)
+ km.shutdown_kernel(session['kernel']['id'])
+ self.set_status(204)
+ self.finish()
+
+
+#-----------------------------------------------------------------------------
+# URL to handler mappings
+#-----------------------------------------------------------------------------
+
+_session_id_regex = r"(?P<session_id>\w+-\w+-\w+-\w+-\w+)"
+
+default_handlers = [
+ (r"/api/sessions/%s" % _session_id_regex, SessionHandler),
+ (r"/api/sessions", SessionRootHandler)
+]
+
diff --git a/IPython/html/services/sessions/sessionmanager.py b/IPython/html/services/sessions/sessionmanager.py
new file mode 100644
--- /dev/null
+++ b/IPython/html/services/sessions/sessionmanager.py
@@ -0,0 +1,201 @@
+"""A base class session manager.
+
+Authors:
+
+* Zach Sailer
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (C) 2013 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
+#-----------------------------------------------------------------------------
+# Imports
+#-----------------------------------------------------------------------------
+
+import uuid
+import sqlite3
+
+from tornado import web
+
+from IPython.config.configurable import LoggingConfigurable
+
+#-----------------------------------------------------------------------------
+# Classes
+#-----------------------------------------------------------------------------
+
+class SessionManager(LoggingConfigurable):
+
+ # Session database initialized below
+ _cursor = None
+ _connection = None
+ _columns = {'session_id', 'name', 'path', 'kernel_id', 'ws_url'}
+
+ @property
+ def cursor(self):
+ """Start a cursor and create a database called 'session'"""
+ if self._cursor is None:
+ self._cursor = self.connection.cursor()
+ self._cursor.execute("""CREATE TABLE session
+ (session_id, name, path, kernel_id, ws_url)""")
+ return self._cursor
+
+ @property
+ def connection(self):
+ """Start a database connection"""
+ if self._connection is None:
+ self._connection = sqlite3.connect(':memory:')
+ self._connection.row_factory = self.row_factory
+ return self._connection
+
+ def __del__(self):
+ """Close connection once SessionManager closes"""
+ self.cursor.close()
+
+ def session_exists(self, name, path):
+ """Check to see if the session for a given notebook exists"""
+ self.cursor.execute("SELECT * FROM session WHERE name=? AND path=?", (name, path))
+ reply = self.cursor.fetchone()
+ if reply is None:
+ return False
+ else:
+ return True
+
+ def new_session_id(self):
+ "Create a uuid for a new session"
+ return unicode(uuid.uuid4())
+
+ def create_session(self, name=None, path=None, kernel_id=None, ws_url=None):
+ """Creates a session and returns its model"""
+ session_id = self.new_session_id()
+ return self.save_session(session_id, name=name, path=path, kernel_id=kernel_id, ws_url=ws_url)
+
+ def save_session(self, session_id, name=None, path=None, kernel_id=None, ws_url=None):
+ """Saves the items for the session with the given session_id
+
+ Given a session_id (and any other of the arguments), this method
+ creates a row in the sqlite session database that holds the information
+ for a session.
+
+ Parameters
+ ----------
+ session_id : str
+ uuid for the session; this method must be given a session_id
+ name : str
+ the .ipynb notebook name that started the session
+ path : str
+ the path to the named notebook
+ kernel_id : str
+ a uuid for the kernel associated with this session
+ ws_url : str
+ the websocket url
+
+ Returns
+ -------
+ model : dict
+ a dictionary of the session model
+ """
+ self.cursor.execute("INSERT INTO session VALUES (?,?,?,?,?)",
+ (session_id, name, path, kernel_id, ws_url)
+ )
+ return self.get_session(session_id=session_id)
+
+ def get_session(self, **kwargs):
+ """Returns the model for a particular session.
+
+ Takes a keyword argument and searches for the value in the session
+ database, then returns the rest of the session's info.
+
+ Parameters
+ ----------
+ **kwargs : keyword argument
+ must be given one of the keywords and values from the session database
+ (i.e. session_id, name, path, kernel_id, ws_url)
+
+ Returns
+ -------
+ model : dict
+ returns a dictionary that includes all the information from the
+ session described by the kwarg.
+ """
+ if not kwargs:
+ raise TypeError("must specify a column to query")
+
+ conditions = []
+ for column in kwargs.keys():
+ if column not in self._columns:
+ raise TypeError("No such column: %r", column)
+ conditions.append("%s=?" % column)
+
+ query = "SELECT * FROM session WHERE %s" % (' AND '.join(conditions))
+
+ self.cursor.execute(query, kwargs.values())
+ model = self.cursor.fetchone()
+ if model is None:
+ q = []
+ for key, value in kwargs.items():
+ q.append("%s=%r" % (key, value))
+
+ raise web.HTTPError(404, u'Session not found: %s' % (', '.join(q)))
+ return model
+
+ def update_session(self, session_id, **kwargs):
+ """Updates the values in the session database.
+
+ Changes the values of the session with the given session_id
+ with the values from the keyword arguments.
+
+ Parameters
+ ----------
+ session_id : str
+ a uuid that identifies a session in the sqlite3 database
+ **kwargs : str
+ the key must correspond to a column title in session database,
+ and the value replaces the current value in the session
+ with session_id.
+ """
+ self.get_session(session_id=session_id)
+
+ if not kwargs:
+ # no changes
+ return
+
+ sets = []
+ for column in kwargs.keys():
+ if column not in self._columns:
+ raise TypeError("No such column: %r" % column)
+ sets.append("%s=?" % column)
+ query = "UPDATE session SET %s WHERE session_id=?" % (', '.join(sets))
+ self.cursor.execute(query, kwargs.values() + [session_id])
+
+ @staticmethod
+ def row_factory(cursor, row):
+ """Takes sqlite database session row and turns it into a dictionary"""
+ row = sqlite3.Row(cursor, row)
+ model = {
+ 'id': row['session_id'],
+ 'notebook': {
+ 'name': row['name'],
+ 'path': row['path']
+ },
+ 'kernel': {
+ 'id': row['kernel_id'],
+ 'ws_url': row['ws_url']
+ }
+ }
+ return model
+
+ def list_sessions(self):
+ """Returns a list of dictionaries containing all the information from
+ the session database"""
+ c = self.cursor.execute("SELECT * FROM session")
+ return list(c.fetchall())
+
+ def delete_session(self, session_id):
+ """Deletes the row in the session database with given session_id"""
+ # Check that session exists before deleting
+ self.get_session(session_id=session_id)
+ self.cursor.execute("DELETE FROM session WHERE session_id=?", (session_id,))
diff --git a/IPython/html/tree/handlers.py b/IPython/html/tree/handlers.py
--- a/IPython/html/tree/handlers.py
+++ b/IPython/html/tree/handlers.py
@@ -15,23 +15,53 @@
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
+import os
from tornado import web
from ..base.handlers import IPythonHandler
+from ..utils import url_path_join, path2url, url2path, url_escape
+from ..services.notebooks.handlers import _notebook_path_regex, _path_regex
#-----------------------------------------------------------------------------
# Handlers
#-----------------------------------------------------------------------------
-class ProjectDashboardHandler(IPythonHandler):
+class TreeHandler(IPythonHandler):
+ """Render the tree view, listing notebooks, clusters, etc."""
@web.authenticated
- def get(self):
- self.write(self.render_template('tree.html',
- project=self.project,
- project_component=self.project.split('/'),
+ def get(self, path='', name=None):
+ path = path.strip('/')
+ nbm = self.notebook_manager
+ if name is not None:
+ # is a notebook, redirect to notebook handler
+ url = url_escape(url_path_join(
+ self.base_project_url, 'notebooks', path, name
+ ))
+ self.log.debug("Redirecting %s to %s", self.request.path, url)
+ self.redirect(url)
+ else:
+ if not nbm.path_exists(path=path):
+ # no such directory, 404
+ raise web.HTTPError(404)
+ self.write(self.render_template('tree.html',
+ project=self.project_dir,
+ tree_url_path=path,
+ notebook_path=path,
+ ))
+
+
+class TreeRedirectHandler(IPythonHandler):
+ """Redirect a request to the corresponding tree URL"""
+
+ @web.authenticated
+ def get(self, path=''):
+ url = url_escape(url_path_join(
+ self.base_project_url, 'tree', path.strip('/')
))
+ self.log.debug("Redirecting %s to %s", self.request.path, url)
+ self.redirect(url)
#-----------------------------------------------------------------------------
@@ -39,4 +69,9 @@ def get(self):
#-----------------------------------------------------------------------------
-default_handlers = [(r"/", ProjectDashboardHandler)]
\ No newline at end of file
+default_handlers = [
+ (r"/tree%s" % _notebook_path_regex, TreeHandler),
+ (r"/tree%s" % _path_regex, TreeHandler),
+ (r"/tree", TreeHandler),
+ (r"/", TreeRedirectHandler),
+ ]
diff --git a/IPython/html/utils.py b/IPython/html/utils.py
--- a/IPython/html/utils.py
+++ b/IPython/html/utils.py
@@ -12,6 +12,11 @@
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
+import os
+from urllib import quote, unquote
+
+from IPython.utils import py3compat
+
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
@@ -24,9 +29,43 @@ def url_path_join(*pieces):
"""
initial = pieces[0].startswith('/')
final = pieces[-1].endswith('/')
- striped = [s.strip('/') for s in pieces]
- result = '/'.join(s for s in striped if s)
+ stripped = [s.strip('/') for s in pieces]
+ result = '/'.join(s for s in stripped if s)
if initial: result = '/' + result
if final: result = result + '/'
if result == '//': result = '/'
return result
+
+def path2url(path):
+ """Convert a local file path to a URL"""
+ pieces = [ quote(p) for p in path.split(os.sep) ]
+ # preserve trailing /
+ if pieces[-1] == '':
+ pieces[-1] = '/'
+ url = url_path_join(*pieces)
+ return url
+
+def url2path(url):
+ """Convert a URL to a local file path"""
+ pieces = [ unquote(p) for p in url.split('/') ]
+ path = os.path.join(*pieces)
+ return path
+
+def url_escape(path):
+ """Escape special characters in a URL path
+
+ Turns '/foo bar/' into '/foo%20bar/'
+ """
+ parts = py3compat.unicode_to_str(path).split('/')
+ return u'/'.join([quote(p) for p in parts])
+
+def url_unescape(path):
+ """Unescape special characters in a URL path
+
+ Turns '/foo%20bar/' into '/foo bar/'
+ """
+ return u'/'.join([
+ py3compat.str_to_unicode(unquote(p))
+ for p in py3compat.unicode_to_str(path).split('/')
+ ])
+
diff --git a/IPython/kernel/zmq/pylab/backend_inline.py b/IPython/kernel/zmq/pylab/backend_inline.py
--- a/IPython/kernel/zmq/pylab/backend_inline.py
+++ b/IPython/kernel/zmq/pylab/backend_inline.py
@@ -1,5 +1,11 @@
-"""Produce SVG versions of active plots for display by the rich Qt frontend.
-"""
+"""A matplotlib backend for publishing figures via display_data"""
+#-----------------------------------------------------------------------------
+# Copyright (C) 2011 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
@@ -7,80 +13,14 @@
# Third-party imports
import matplotlib
-from matplotlib.backends.backend_agg import new_figure_manager, FigureCanvasAgg
+from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib._pylab_helpers import Gcf
-# Local imports.
-from IPython.config.configurable import SingletonConfigurable
+# Local imports
+from IPython.core.getipython import get_ipython
from IPython.core.display import display
-from IPython.core.displaypub import publish_display_data
-from IPython.core.pylabtools import print_figure, select_figure_format
-from IPython.utils.traitlets import Dict, Instance, CaselessStrEnum, Bool
-from IPython.utils.warn import warn
-
-#-----------------------------------------------------------------------------
-# Configurable for inline backend options
-#-----------------------------------------------------------------------------
-# inherit from InlineBackendConfig for deprecation purposes
-class InlineBackendConfig(SingletonConfigurable):
- pass
-
-class InlineBackend(InlineBackendConfig):
- """An object to store configuration of the inline backend."""
-
- def _config_changed(self, name, old, new):
- # warn on change of renamed config section
- if new.InlineBackendConfig != old.InlineBackendConfig:
- warn("InlineBackendConfig has been renamed to InlineBackend")
- super(InlineBackend, self)._config_changed(name, old, new)
-
- # The typical default figure size is too large for inline use,
- # so we shrink the figure size to 6x4, and tweak fonts to
- # make that fit.
- rc = Dict({'figure.figsize': (6.0,4.0),
- # play nicely with white background in the Qt and notebook frontend
- 'figure.facecolor': 'white',
- 'figure.edgecolor': 'white',
- # 12pt labels get cutoff on 6x4 logplots, so use 10pt.
- 'font.size': 10,
- # 72 dpi matches SVG/qtconsole
- # this only affects PNG export, as SVG has no dpi setting
- 'savefig.dpi': 72,
- # 10pt still needs a little more room on the xlabel:
- 'figure.subplot.bottom' : .125
- }, config=True,
- help="""Subset of matplotlib rcParams that should be different for the
- inline backend."""
- )
-
- figure_format = CaselessStrEnum(['svg', 'png', 'retina'], default_value='png', config=True,
- help="The image format for figures with the inline backend.")
-
- def _figure_format_changed(self, name, old, new):
- if self.shell is None:
- return
- else:
- select_figure_format(self.shell, new)
-
- close_figures = Bool(True, config=True,
- help="""Close all figures at the end of each cell.
-
- When True, ensures that each cell starts with no active figures, but it
- also means that one must keep track of references in order to edit or
- redraw figures in subsequent cells. This mode is ideal for the notebook,
- where residual plots from other cells might be surprising.
-
- When False, one must call figure() to create new figures. This means
- that gcf() and getfigs() can reference figures created in other cells,
- and the active figure can continue to be edited with pylab/pyplot
- methods that reference the current active figure. This mode facilitates
- iterative editing of figures, and behaves most consistently with
- other matplotlib backends, but figure barriers between cells must
- be explicit.
- """)
-
- shell = Instance('IPython.core.interactiveshell.InteractiveShellABC')
+from .config import InlineBackend
#-----------------------------------------------------------------------------
# Functions
@@ -107,7 +47,6 @@ def show(close=None):
matplotlib.pyplot.close('all')
-
# This flag will be reset by draw_if_interactive when called
show._draw_called = False
# list of figures to draw when flush_figures is called
@@ -179,12 +118,11 @@ def flush_figures():
return show(True)
except Exception as e:
# safely show traceback if in IPython, else raise
- try:
- get_ipython
- except NameError:
+ ip = get_ipython()
+ if ip is None:
raise e
else:
- get_ipython().showtraceback()
+ ip.showtraceback()
return
try:
# exclude any figures that were closed:
@@ -194,13 +132,12 @@ def flush_figures():
display(fig)
except Exception as e:
# safely show traceback if in IPython, else raise
- try:
- get_ipython
- except NameError:
+ ip = get_ipython()
+ if ip is None:
raise e
else:
- get_ipython().showtraceback()
- break
+ ip.showtraceback()
+ return
finally:
# clear flags for next round
show._to_draw = []
diff --git a/IPython/kernel/zmq/pylab/config.py b/IPython/kernel/zmq/pylab/config.py
new file mode 100644
--- /dev/null
+++ b/IPython/kernel/zmq/pylab/config.py
@@ -0,0 +1,85 @@
+"""Configurable for configuring the IPython inline backend
+
+This module does not import anything from matplotlib.
+"""
+#-----------------------------------------------------------------------------
+# Copyright (C) 2011 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
+#-----------------------------------------------------------------------------
+# Imports
+#-----------------------------------------------------------------------------
+
+from IPython.config.configurable import SingletonConfigurable
+from IPython.utils.traitlets import Dict, Instance, CaselessStrEnum, Bool
+from IPython.utils.warn import warn
+
+#-----------------------------------------------------------------------------
+# Configurable for inline backend options
+#-----------------------------------------------------------------------------
+
+# inherit from InlineBackendConfig for deprecation purposes
+class InlineBackendConfig(SingletonConfigurable):
+ pass
+
+class InlineBackend(InlineBackendConfig):
+ """An object to store configuration of the inline backend."""
+
+ def _config_changed(self, name, old, new):
+ # warn on change of renamed config section
+ if new.InlineBackendConfig != old.InlineBackendConfig:
+ warn("InlineBackendConfig has been renamed to InlineBackend")
+ super(InlineBackend, self)._config_changed(name, old, new)
+
+ # The typical default figure size is too large for inline use,
+ # so we shrink the figure size to 6x4, and tweak fonts to
+ # make that fit.
+ rc = Dict({'figure.figsize': (6.0,4.0),
+ # play nicely with white background in the Qt and notebook frontend
+ 'figure.facecolor': 'white',
+ 'figure.edgecolor': 'white',
+ # 12pt labels get cutoff on 6x4 logplots, so use 10pt.
+ 'font.size': 10,
+ # 72 dpi matches SVG/qtconsole
+ # this only affects PNG export, as SVG has no dpi setting
+ 'savefig.dpi': 72,
+ # 10pt still needs a little more room on the xlabel:
+ 'figure.subplot.bottom' : .125
+ }, config=True,
+ help="""Subset of matplotlib rcParams that should be different for the
+ inline backend."""
+ )
+
+ figure_format = CaselessStrEnum(['svg', 'png', 'retina'], default_value='png', config=True,
+ help="The image format for figures with the inline backend.")
+
+ def _figure_format_changed(self, name, old, new):
+ from IPython.core.pylabtools import select_figure_format
+ if self.shell is None:
+ return
+ else:
+ select_figure_format(self.shell, new)
+
+ close_figures = Bool(True, config=True,
+ help="""Close all figures at the end of each cell.
+
+ When True, ensures that each cell starts with no active figures, but it
+ also means that one must keep track of references in order to edit or
+ redraw figures in subsequent cells. This mode is ideal for the notebook,
+ where residual plots from other cells might be surprising.
+
+ When False, one must call figure() to create new figures. This means
+ that gcf() and getfigs() can reference figures created in other cells,
+ and the active figure can continue to be edited with pylab/pyplot
+ methods that reference the current active figure. This mode facilitates
+ iterative editing of figures, and behaves most consistently with
+ other matplotlib backends, but figure barriers between cells must
+ be explicit.
+ """)
+
+ shell = Instance('IPython.core.interactiveshell.InteractiveShellABC')
+
+
diff --git a/IPython/nbformat/current.py b/IPython/nbformat/current.py
--- a/IPython/nbformat/current.py
+++ b/IPython/nbformat/current.py
@@ -29,7 +29,7 @@
NotebookNode,
new_code_cell, new_text_cell, new_notebook, new_output, new_worksheet,
parse_filename, new_metadata, new_author, new_heading_cell, nbformat,
- nbformat_minor,
+ nbformat_minor, to_notebook_json
)
#-----------------------------------------------------------------------------
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-8330
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature request: specify chunksize for read_sql
It would be helpful to iterate through rows returned from an sql query (sqlite specifically) chunk by chunk just as is done in the read_csv and text files function as described here: http://pandas.pydata.org/pandas-docs/stable/io.html#iterating-through-files-chunk-by-chunk
The return value should be an iterable object. This will prevent queries from returning too large an amount of data, (possibly) exceeding the system memory.
</issue>
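For reference, a small illustration of the chunked-iteration pattern that `read_csv` already provides and that this issue asks `read_sql` to mirror; the inline CSV content is purely illustrative:

```python
import pandas as pd
from io import StringIO

csv_data = StringIO("a,b\n" + "\n".join("%d,%d" % (i, 2 * i) for i in range(10)))

# With chunksize, read_csv returns an iterator of DataFrames instead of one
# large frame; the request is for read_sql to expose the same behaviour.
for chunk in pd.read_csv(csv_data, chunksize=4):
    print(chunk.shape)  # (4, 2), (4, 2), (2, 2)
```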
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.7.0 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](https://github.com/pydata/numexpr)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
118 writing .xlsx files
119 - xlrd >= 0.9.0
120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
121 - Alternative Excel writer.
122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
123 - Needed for `pandas.io.gbq`
124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
125 - One of the following combinations of libraries is needed to use the
126 top-level [`pandas.read_html`][read-html-docs] function:
127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
128 recent version of [html5lib][html5lib] is okay.)
129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
132 for reasons as to why you should probably **not** take this approach.
133
134 #### Notes about HTML parsing libraries
135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
136 either [lxml][lxml] or [html5lib][html5lib] or both.
137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
138 installed.
139 - You are strongly encouraged to read [HTML reading
140 gotchas][html-gotchas]. It explains issues surrounding the
141 installation and usage of the above three libraries.
142 - You may need to install an older version of
143 [BeautifulSoup4][BeautifulSoup4]:
144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
145 32-bit Ubuntu/Debian
146 - Additionally, if you're using [Anaconda][Anaconda] you should
147 definitely read [the gotchas about HTML parsing][html-gotchas]
148 libraries
149 - If you're on a system with `apt-get` you can do
150
151 ```sh
152 sudo apt-get build-dep python-lxml
153 ```
154
155 to get the necessary dependencies for installation of [lxml][lxml].
156 This will prevent further headaches down the line.
157
158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
160 [lxml]: http://lxml.de
161 [Anaconda]: https://store.continuum.io/cshop/anaconda
162 [NumPy]: http://numpy.scipy.org/
163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
165
166 ## Installation from sources
167 To install pandas from source you need Cython in addition to the normal
168 dependencies above. Cython can be installed from pypi:
169
170 ```sh
171 pip install cython
172 ```
173
174 In the `pandas` directory (same one where you found this file after
175 cloning the git repo), execute:
176
177 ```sh
178 python setup.py install
179 ```
180
181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
182
183 ```sh
184 python setup.py develop
185 ```
186
187 Alternatively, you can use `pip` if you want all the dependencies pulled
188 in automatically (the `-e` option is for installing it in [development
189 mode](http://www.pip-installer.org/en/latest/usage.html)):
190
191 ```sh
192 pip install -e .
193 ```
194
195 On Windows, you will need to install MinGW and execute:
196
197 ```sh
198 python setup.py build --compiler=mingw32
199 python setup.py install
200 ```
201
202 See http://pandas.pydata.org/ for more information.
203
204 ## License
205 BSD
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
209
210 The Sphinx documentation should provide a good starting point for learning how
211 to use the library. Expect the docs to continue to expand as time goes on.
212
213 ## Background
214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
215 has been under active development since then.
216
217 ## Discussion and Development
218 Since pandas development is related to a number of other scientific
219 Python projects, questions are welcome on the scipy-user mailing
220 list. Specialized discussions or design issues should take place on
221 the PyData mailing list / Google group:
222
223 https://groups.google.com/forum/#!forum/pydata
224
[end of README.md]
[start of pandas/io/html.py]
1 """:mod:`pandas.io.html` is a module containing functionality for dealing with
2 HTML IO.
3
4 """
5
6 import os
7 import re
8 import numbers
9 import collections
10 import warnings
11
12 from distutils.version import LooseVersion
13
14 import numpy as np
15
16 from pandas.io.common import _is_url, urlopen, parse_url
17 from pandas.io.parsers import TextParser
18 from pandas.compat import (lrange, lmap, u, string_types, iteritems,
19 raise_with_traceback, binary_type)
20 from pandas.core import common as com
21 from pandas import Series
22
23
24 try:
25 import bs4
26 except ImportError:
27 _HAS_BS4 = False
28 else:
29 _HAS_BS4 = True
30
31
32 try:
33 import lxml
34 except ImportError:
35 _HAS_LXML = False
36 else:
37 _HAS_LXML = True
38
39
40 try:
41 import html5lib
42 except ImportError:
43 _HAS_HTML5LIB = False
44 else:
45 _HAS_HTML5LIB = True
46
47
48 #############
49 # READ HTML #
50 #############
51 _RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
52
53
54 char_types = string_types + (binary_type,)
55
56
57 def _remove_whitespace(s, regex=_RE_WHITESPACE):
58 """Replace extra whitespace inside of a string with a single space.
59
60 Parameters
61 ----------
62 s : str or unicode
63 The string from which to remove extra whitespace.
64
65 regex : regex
66 The regular expression to use to remove extra whitespace.
67
68 Returns
69 -------
70 subd : str or unicode
71 `s` with all extra whitespace replaced with a single space.
72 """
73 return regex.sub(' ', s.strip())
74
75
76 def _get_skiprows(skiprows):
77 """Get an iterator given an integer, slice or container.
78
79 Parameters
80 ----------
81 skiprows : int, slice, container
82 The iterator to use to skip rows; can also be a slice.
83
84 Raises
85 ------
86 TypeError
87 * If `skiprows` is not a slice, integer, or Container
88
89 Returns
90 -------
91 it : iterable
92 A proper iterator to use to skip rows of a DataFrame.
93 """
94 if isinstance(skiprows, slice):
95 return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
96 elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
97 return skiprows
98 elif skiprows is None:
99 return 0
100 raise TypeError('%r is not a valid type for skipping rows' %
101 type(skiprows).__name__)
102
103
104 def _read(obj):
105 """Try to read from a url, file or string.
106
107 Parameters
108 ----------
109 obj : str, unicode, or file-like
110
111 Returns
112 -------
113 raw_text : str
114 """
115 if _is_url(obj):
116 with urlopen(obj) as url:
117 text = url.read()
118 elif hasattr(obj, 'read'):
119 text = obj.read()
120 elif isinstance(obj, char_types):
121 text = obj
122 try:
123 if os.path.isfile(text):
124 with open(text, 'rb') as f:
125 return f.read()
126 except (TypeError, ValueError):
127 pass
128 else:
129 raise TypeError("Cannot read object of type %r" % type(obj).__name__)
130 return text
131
132
133 class _HtmlFrameParser(object):
134 """Base class for parsers that parse HTML into DataFrames.
135
136 Parameters
137 ----------
138 io : str or file-like
139 This can be either a string of raw HTML, a valid URL using the HTTP,
140 FTP, or FILE protocols or a file-like object.
141
142 match : str or regex
143 The text to match in the document.
144
145 attrs : dict
146 List of HTML <table> element attributes to match.
147
148 Attributes
149 ----------
150 io : str or file-like
151 raw HTML, URL, or file-like object
152
153 match : regex
154 The text to match in the raw HTML
155
156 attrs : dict-like
157 A dictionary of valid table attributes to use to search for table
158 elements.
159
160 Notes
161 -----
162 To subclass this class effectively you must override the following methods:
163 * :func:`_build_doc`
164 * :func:`_text_getter`
165 * :func:`_parse_td`
166 * :func:`_parse_tables`
167 * :func:`_parse_tr`
168 * :func:`_parse_thead`
169 * :func:`_parse_tbody`
170 * :func:`_parse_tfoot`
171 See each method's respective documentation for details on their
172 functionality.
173 """
174 def __init__(self, io, match, attrs, encoding):
175 self.io = io
176 self.match = match
177 self.attrs = attrs
178 self.encoding = encoding
179
180 def parse_tables(self):
181 tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
182 return (self._build_table(table) for table in tables)
183
184 def _parse_raw_data(self, rows):
185 """Parse the raw data into a list of lists.
186
187 Parameters
188 ----------
189 rows : iterable of node-like
190 A list of row elements.
191
192 text_getter : callable
193 A callable that gets the text from an individual node. This must be
194 defined by subclasses.
195
196 column_finder : callable
197 A callable that takes a row node as input and returns a list of the
198 column node in that row. This must be defined by subclasses.
199
200 Returns
201 -------
202 data : list of list of strings
203 """
204 data = [[_remove_whitespace(self._text_getter(col)) for col in
205 self._parse_td(row)] for row in rows]
206 return data
207
208 def _text_getter(self, obj):
209 """Return the text of an individual DOM node.
210
211 Parameters
212 ----------
213 obj : node-like
214 A DOM node.
215
216 Returns
217 -------
218 text : str or unicode
219 The text from an individual DOM node.
220 """
221 raise NotImplementedError
222
223 def _parse_td(self, obj):
224 """Return the td elements from a row element.
225
226 Parameters
227 ----------
228 obj : node-like
229
230 Returns
231 -------
232 columns : list of node-like
233 These are the elements of each row, i.e., the columns.
234 """
235 raise NotImplementedError
236
237 def _parse_tables(self, doc, match, attrs):
238 """Return all tables from the parsed DOM.
239
240 Parameters
241 ----------
242 doc : tree-like
243 The DOM from which to parse the table element.
244
245 match : str or regular expression
246 The text to search for in the DOM tree.
247
248 attrs : dict
249 A dictionary of table attributes that can be used to disambiguate
250             multiple tables on a page.
251
252 Raises
253 ------
254 ValueError
255 * If `match` does not match any text in the document.
256
257 Returns
258 -------
259 tables : list of node-like
260 A list of <table> elements to be parsed into raw data.
261 """
262 raise NotImplementedError
263
264 def _parse_tr(self, table):
265 """Return the list of row elements from the parsed table element.
266
267 Parameters
268 ----------
269 table : node-like
270 A table element that contains row elements.
271
272 Returns
273 -------
274 rows : list of node-like
275 A list row elements of a table, usually <tr> or <th> elements.
276 """
277 raise NotImplementedError
278
279 def _parse_thead(self, table):
280 """Return the header of a table.
281
282 Parameters
283 ----------
284 table : node-like
285 A table element that contains row elements.
286
287 Returns
288 -------
289 thead : node-like
290 A <thead>...</thead> element.
291 """
292 raise NotImplementedError
293
294 def _parse_tbody(self, table):
295 """Return the body of the table.
296
297 Parameters
298 ----------
299 table : node-like
300 A table element that contains row elements.
301
302 Returns
303 -------
304 tbody : node-like
305 A <tbody>...</tbody> element.
306 """
307 raise NotImplementedError
308
309 def _parse_tfoot(self, table):
310 """Return the footer of the table if any.
311
312 Parameters
313 ----------
314 table : node-like
315 A table element that contains row elements.
316
317 Returns
318 -------
319 tfoot : node-like
320 A <tfoot>...</tfoot> element.
321 """
322 raise NotImplementedError
323
324 def _build_doc(self):
325 """Return a tree-like object that can be used to iterate over the DOM.
326
327 Returns
328 -------
329 obj : tree-like
330 """
331 raise NotImplementedError
332
333 def _build_table(self, table):
334 header = self._parse_raw_thead(table)
335 body = self._parse_raw_tbody(table)
336 footer = self._parse_raw_tfoot(table)
337 return header, body, footer
338
339 def _parse_raw_thead(self, table):
340 thead = self._parse_thead(table)
341 res = []
342 if thead:
343 res = lmap(self._text_getter, self._parse_th(thead[0]))
344 return np.array(res).squeeze() if res and len(res) == 1 else res
345
346 def _parse_raw_tfoot(self, table):
347 tfoot = self._parse_tfoot(table)
348 res = []
349 if tfoot:
350 res = lmap(self._text_getter, self._parse_td(tfoot[0]))
351 return np.array(res).squeeze() if res and len(res) == 1 else res
352
353 def _parse_raw_tbody(self, table):
354 tbody = self._parse_tbody(table)
355
356 try:
357 res = self._parse_tr(tbody[0])
358 except IndexError:
359 res = self._parse_tr(table)
360 return self._parse_raw_data(res)
361
362
363 class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
364 """HTML to DataFrame parser that uses BeautifulSoup under the hood.
365
366 See Also
367 --------
368 pandas.io.html._HtmlFrameParser
369 pandas.io.html._LxmlFrameParser
370
371 Notes
372 -----
373 Documentation strings for this class are in the base class
374 :class:`pandas.io.html._HtmlFrameParser`.
375 """
376 def __init__(self, *args, **kwargs):
377 super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args,
378 **kwargs)
379 from bs4 import SoupStrainer
380 self._strainer = SoupStrainer('table')
381
382 def _text_getter(self, obj):
383 return obj.text
384
385 def _parse_td(self, row):
386 return row.find_all(('td', 'th'))
387
388 def _parse_tr(self, element):
389 return element.find_all('tr')
390
391 def _parse_th(self, element):
392 return element.find_all('th')
393
394 def _parse_thead(self, table):
395 return table.find_all('thead')
396
397 def _parse_tbody(self, table):
398 return table.find_all('tbody')
399
400 def _parse_tfoot(self, table):
401 return table.find_all('tfoot')
402
403 def _parse_tables(self, doc, match, attrs):
404 element_name = self._strainer.name
405 tables = doc.find_all(element_name, attrs=attrs)
406
407 if not tables:
408 raise ValueError('No tables found')
409
410 result = []
411 unique_tables = set()
412
413 for table in tables:
414 if (table not in unique_tables and
415 table.find(text=match) is not None):
416 result.append(table)
417 unique_tables.add(table)
418
419 if not result:
420 raise ValueError("No tables found matching pattern %r" %
421 match.pattern)
422 return result
423
424 def _setup_build_doc(self):
425 raw_text = _read(self.io)
426 if not raw_text:
427 raise ValueError('No text parsed from document: %s' % self.io)
428 return raw_text
429
430 def _build_doc(self):
431 from bs4 import BeautifulSoup
432 return BeautifulSoup(self._setup_build_doc(), features='html5lib',
433 from_encoding=self.encoding)
434
435
436 def _build_xpath_expr(attrs):
437 """Build an xpath expression to simulate bs4's ability to pass in kwargs to
438 search for attributes when using the lxml parser.
439
440 Parameters
441 ----------
442 attrs : dict
443 A dict of HTML attributes. These are NOT checked for validity.
444
445 Returns
446 -------
447 expr : unicode
448 An XPath expression that checks for the given HTML attributes.
449 """
450 # give class attribute as class_ because class is a python keyword
451 if 'class_' in attrs:
452 attrs['class'] = attrs.pop('class_')
453
454 s = [u("@%s=%r") % (k, v) for k, v in iteritems(attrs)]
455 return u('[%s]') % ' and '.join(s)
456
457
458 _re_namespace = {'re': 'http://exslt.org/regular-expressions'}
459 _valid_schemes = 'http', 'file', 'ftp'
460
461
462 class _LxmlFrameParser(_HtmlFrameParser):
463 """HTML to DataFrame parser that uses lxml under the hood.
464
465 Warning
466 -------
467 This parser can only handle HTTP, FTP, and FILE urls.
468
469 See Also
470 --------
471 _HtmlFrameParser
472 _BeautifulSoupLxmlFrameParser
473
474 Notes
475 -----
476 Documentation strings for this class are in the base class
477 :class:`_HtmlFrameParser`.
478 """
479 def __init__(self, *args, **kwargs):
480 super(_LxmlFrameParser, self).__init__(*args, **kwargs)
481
482 def _text_getter(self, obj):
483 return obj.text_content()
484
485 def _parse_td(self, row):
486 return row.xpath('.//td|.//th')
487
488 def _parse_tr(self, table):
489 expr = './/tr[normalize-space()]'
490 return table.xpath(expr)
491
492 def _parse_tables(self, doc, match, kwargs):
493 pattern = match.pattern
494
495 # 1. check all descendants for the given pattern and only search tables
496 # 2. go up the tree until we find a table
497 query = '//table//*[re:test(text(), %r)]/ancestor::table'
498 xpath_expr = u(query) % pattern
499
500 # if any table attributes were given build an xpath expression to
501 # search for them
502 if kwargs:
503 xpath_expr += _build_xpath_expr(kwargs)
504
505 tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
506
507 if not tables:
508 raise ValueError("No tables found matching regex %r" % pattern)
509 return tables
510
511 def _build_doc(self):
512 """
513 Raises
514 ------
515 ValueError
516 * If a URL that lxml cannot parse is passed.
517
518 Exception
519 * Any other ``Exception`` thrown. For example, trying to parse a
520 URL that is syntactically correct on a machine with no internet
521 connection will fail.
522
523 See Also
524 --------
525 pandas.io.html._HtmlFrameParser._build_doc
526 """
527 from lxml.html import parse, fromstring, HTMLParser
528 from lxml.etree import XMLSyntaxError
529
530 parser = HTMLParser(recover=False, encoding=self.encoding)
531
532 try:
533 # try to parse the input in the simplest way
534 r = parse(self.io, parser=parser)
535
536 try:
537 r = r.getroot()
538 except AttributeError:
539 pass
540 except (UnicodeDecodeError, IOError):
541 # if the input is a blob of html goop
542 if not _is_url(self.io):
543 r = fromstring(self.io, parser=parser)
544
545 try:
546 r = r.getroot()
547 except AttributeError:
548 pass
549 else:
550 # not a url
551 scheme = parse_url(self.io).scheme
552 if scheme not in _valid_schemes:
553 # lxml can't parse it
554 msg = ('%r is not a valid url scheme, valid schemes are '
555 '%s') % (scheme, _valid_schemes)
556 raise ValueError(msg)
557 else:
558 # something else happened: maybe a faulty connection
559 raise
560 else:
561 if not hasattr(r, 'text_content'):
562 raise XMLSyntaxError("no text parsed from document", 0, 0, 0)
563 return r
564
565 def _parse_tbody(self, table):
566 return table.xpath('.//tbody')
567
568 def _parse_thead(self, table):
569 return table.xpath('.//thead')
570
571 def _parse_tfoot(self, table):
572 return table.xpath('.//tfoot')
573
574 def _parse_raw_thead(self, table):
575 expr = './/thead//th'
576 return [_remove_whitespace(x.text_content()) for x in
577 table.xpath(expr)]
578
579 def _parse_raw_tfoot(self, table):
580 expr = './/tfoot//th'
581 return [_remove_whitespace(x.text_content()) for x in
582 table.xpath(expr)]
583
584
585 def _expand_elements(body):
586 lens = Series(lmap(len, body))
587 lens_max = lens.max()
588 not_max = lens[lens != lens_max]
589
590 empty = ['']
591 for ind, length in iteritems(not_max):
592 body[ind] += empty * (lens_max - length)
593
594
595 def _data_to_frame(data, header, index_col, skiprows, infer_types,
596 parse_dates, tupleize_cols, thousands):
597 head, body, _ = data # _ is footer which is rarely used: ignore for now
598
599 if head:
600 body = [head] + body
601
602 if header is None: # special case when a table has <th> elements
603 header = 0
604
605 # fill out elements of body that are "ragged"
606 _expand_elements(body)
607
608 tp = TextParser(body, header=header, index_col=index_col,
609 skiprows=_get_skiprows(skiprows),
610 parse_dates=parse_dates, tupleize_cols=tupleize_cols,
611 thousands=thousands)
612 df = tp.read()
613 return df
614
615
616 _valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser,
617 'html5lib': _BeautifulSoupHtml5LibFrameParser,
618 'bs4': _BeautifulSoupHtml5LibFrameParser}
619
620
621 def _parser_dispatch(flavor):
622 """Choose the parser based on the input flavor.
623
624 Parameters
625 ----------
626 flavor : str
627 The type of parser to use. This must be a valid backend.
628
629 Returns
630 -------
631 cls : _HtmlFrameParser subclass
632 The parser class based on the requested input flavor.
633
634 Raises
635 ------
636 ValueError
637 * If `flavor` is not a valid backend.
638 ImportError
639 * If you do not have the requested `flavor`
640 """
641 valid_parsers = list(_valid_parsers.keys())
642 if flavor not in valid_parsers:
643 raise ValueError('%r is not a valid flavor, valid flavors are %s' %
644 (flavor, valid_parsers))
645
646 if flavor in ('bs4', 'html5lib'):
647 if not _HAS_HTML5LIB:
648 raise ImportError("html5lib not found, please install it")
649 if not _HAS_BS4:
650 raise ImportError("BeautifulSoup4 (bs4) not found, please install it")
651 if bs4.__version__ == LooseVersion('4.2.0'):
652 raise ValueError("You're using a version"
653 " of BeautifulSoup4 (4.2.0) that has been"
654 " known to cause problems on certain"
655 " operating systems such as Debian. "
656 "Please install a version of"
657 " BeautifulSoup4 != 4.2.0, both earlier"
658 " and later releases will work.")
659 else:
660 if not _HAS_LXML:
661 raise ImportError("lxml not found, please install it")
662 return _valid_parsers[flavor]
663
664
665 def _print_as_set(s):
666 return '{%s}' % ', '.join([com.pprint_thing(el) for el in s])
667
668
669 def _validate_flavor(flavor):
670 if flavor is None:
671 flavor = 'lxml', 'bs4'
672 elif isinstance(flavor, string_types):
673 flavor = flavor,
674 elif isinstance(flavor, collections.Iterable):
675 if not all(isinstance(flav, string_types) for flav in flavor):
676 raise TypeError('Object of type %r is not an iterable of strings' %
677 type(flavor).__name__)
678 else:
679 fmt = '{0!r}' if isinstance(flavor, string_types) else '{0}'
680 fmt += ' is not a valid flavor'
681 raise ValueError(fmt.format(flavor))
682
683 flavor = tuple(flavor)
684 valid_flavors = set(_valid_parsers)
685 flavor_set = set(flavor)
686
687 if not flavor_set & valid_flavors:
688 raise ValueError('%s is not a valid set of flavors, valid flavors are '
689 '%s' % (_print_as_set(flavor_set),
690 _print_as_set(valid_flavors)))
691 return flavor
692
693
694 def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
695 parse_dates, tupleize_cols, thousands, attrs, encoding):
696 flavor = _validate_flavor(flavor)
697 compiled_match = re.compile(match) # you can pass a compiled regex here
698
699 # hack around python 3 deleting the exception variable
700 retained = None
701 for flav in flavor:
702 parser = _parser_dispatch(flav)
703 p = parser(io, compiled_match, attrs, encoding)
704
705 try:
706 tables = p.parse_tables()
707 except Exception as caught:
708 retained = caught
709 else:
710 break
711 else:
712 raise_with_traceback(retained)
713
714 ret = []
715 for table in tables:
716 try:
717 ret.append(_data_to_frame(table, header, index_col, skiprows,
718 infer_types, parse_dates, tupleize_cols, thousands))
719 except StopIteration: # empty table
720 continue
721 return ret
722
723
724 def read_html(io, match='.+', flavor=None, header=None, index_col=None,
725 skiprows=None, infer_types=None, attrs=None, parse_dates=False,
726 tupleize_cols=False, thousands=',', encoding=None):
727 r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
728
729 Parameters
730 ----------
731 io : str or file-like
732 A URL, a file-like object, or a raw string containing HTML. Note that
733 lxml only accepts the http, ftp and file url protocols. If you have a
734 URL that starts with ``'https'`` you might try removing the ``'s'``.
735
736 match : str or compiled regular expression, optional
737 The set of tables containing text matching this regex or string will be
738 returned. Unless the HTML is extremely simple you will probably need to
739 pass a non-empty string here. Defaults to '.+' (match any non-empty
740 string). The default value will return all tables contained on a page.
741 This value is converted to a regular expression so that there is
742 consistent behavior between Beautiful Soup and lxml.
743
744 flavor : str or None, container of strings
745 The parsing engine to use. 'bs4' and 'html5lib' are synonymous with
746 each other, they are both there for backwards compatibility. The
747 default of ``None`` tries to use ``lxml`` to parse and if that fails it
748 falls back on ``bs4`` + ``html5lib``.
749
750 header : int or list-like or None, optional
751 The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
752 make the columns headers.
753
754 index_col : int or list-like or None, optional
755 The column (or list of columns) to use to create the index.
756
757 skiprows : int or list-like or slice or None, optional
758 0-based. Number of rows to skip after parsing the column integer. If a
759 sequence of integers or a slice is given, will skip the rows indexed by
760 that sequence. Note that a single element sequence means 'skip the nth
761 row' whereas an integer means 'skip n rows'.
762
763 infer_types : None, optional
764 This has no effect since 0.15.0. It is here for backwards compatibility.
765
766 attrs : dict or None, optional
767 This is a dictionary of attributes that you can pass to use to identify
768 the table in the HTML. These are not checked for validity before being
769 passed to lxml or Beautiful Soup. However, these attributes must be
770 valid HTML table attributes to work correctly. For example, ::
771
772 attrs = {'id': 'table'}
773
774 is a valid attribute dictionary because the 'id' HTML tag attribute is
775 a valid HTML attribute for *any* HTML tag as per `this document
776 <http://www.w3.org/TR/html-markup/global-attributes.html>`__. ::
777
778 attrs = {'asdf': 'table'}
779
780 is *not* a valid attribute dictionary because 'asdf' is not a valid
781 HTML attribute even if it is a valid XML attribute. Valid HTML 4.01
782 table attributes can be found `here
783 <http://www.w3.org/TR/REC-html40/struct/tables.html#h-11.2>`__. A
784 working draft of the HTML 5 spec can be found `here
785 <http://www.w3.org/TR/html-markup/table.html>`__. It contains the
786 latest information on table attributes for the modern web.
787
788 parse_dates : bool, optional
789 See :func:`~pandas.io.parsers.read_csv` for more details. In 0.13, this
790 parameter can sometimes interact strangely with ``infer_types``. If you
791 get a large number of ``NaT`` values in your results, consider passing
792 ``infer_types=False`` and manually converting types afterwards.
793
794 tupleize_cols : bool, optional
795 If ``False`` try to parse multiple header rows into a
796 :class:`~pandas.MultiIndex`, otherwise return raw tuples. Defaults to
797 ``False``.
798
799 thousands : str, optional
800 Separator to use to parse thousands. Defaults to ``','``.
801
802 encoding : str or None, optional
803 The encoding used to decode the web page. Defaults to ``None``.``None``
804 preserves the previous encoding behavior, which depends on the
805 underlying parser library (e.g., the parser library will try to use
806 the encoding provided by the document).
807
808 Returns
809 -------
810 dfs : list of DataFrames
811
812 Notes
813 -----
814 Before using this function you should read the :ref:`gotchas about the
815 HTML parsing libraries <html-gotchas>`.
816
817 Expect to do some cleanup after you call this function. For example, you
818 might need to manually assign column names if the column names are
819 converted to NaN when you pass the `header=0` argument. We try to assume as
820 little as possible about the structure of the table and push the
821 idiosyncrasies of the HTML contained in the table to the user.
822
823 This function searches for ``<table>`` elements and only for ``<tr>``
824 and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>``
825 element in the table. ``<td>`` stands for "table data".
826
827 Similar to :func:`~pandas.read_csv` the `header` argument is applied
828 **after** `skiprows` is applied.
829
830 This function will *always* return a list of :class:`DataFrame` *or*
831 it will fail, e.g., it will *not* return an empty list.
832
833 Examples
834 --------
835 See the :ref:`read_html documentation in the IO section of the docs
836 <io.read_html>` for some examples of reading in HTML tables.
837
838 See Also
839 --------
840 pandas.io.parsers.read_csv
841 """
842 if infer_types is not None:
843 warnings.warn("infer_types has no effect since 0.15", FutureWarning)
844
845 # Type check here. We don't want to parse only to fail because of an
846 # invalid value of an integer skiprows.
847 if isinstance(skiprows, numbers.Integral) and skiprows < 0:
848 raise ValueError('cannot skip rows starting from the end of the '
849 'data (you passed a negative value)')
850 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
851 parse_dates, tupleize_cols, thousands, attrs, encoding)
852
[end of pandas/io/html.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
d865428931045837f59625cfb62d72823c5e0819
|
Feature request: specify chunksize for read_sql
It would be helpful to iterate through rows returned from an sql query (sqlite specifically) chunk by chunk just as is done in the read_csv and text files function as described here: http://pandas.pydata.org/pandas-docs/stable/io.html#iterating-through-files-chunk-by-chunk
The return value should be an iterable object. This will prevent queries from returning too large an amount of data, (possibly) exceeding the system memory.
|
The exact error I got was this on pandas version 0.10.1:
```
runData = psql.read_frame("SELECT * FROM output", conn)
File "C:\Python27\lib\site-packages\pandas\io\sql.py", line 151, in read_frame
coerce_float=coerce_float)
File "C:\Python27\lib\site-packages\pandas\core\frame.py", line 1014, in from_records
coerce_float=coerce_float)
File "C:\Python27\lib\site-packages\pandas\core\frame.py", line 5468, in _to_arrays
if len(data) == 0:
TypeError: object of type 'NoneType' has no len()
```
The TypeError is a little confusing, as it took me a while to figure out it was happening because I was hitting the memory limit. Maybe just a clearer error message would be enough ("max query size reached" or something), perhaps suggesting that the user use the SQL LIMIT command to prevent this problem (see http://php.about.com/od/mysqlcommands/g/Limit_sql.htm).
I just ran:
```
runData = psql.read_frame("SELECT * FROM output LIMIT 10", conn)
```
with no problem
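A rough sketch of that LIMIT-based workaround, paginating the query so no single `read_frame` call has to hold the whole table; the table contents and chunk size are illustrative, and `read_frame` is the pandas.io.sql API of that era:

```python
import sqlite3
import pandas.io.sql as psql

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE output (id INTEGER, value REAL)")
conn.executemany("INSERT INTO output VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(25)])

chunksize, offset = 10, 0
while True:
    chunk = psql.read_frame(
        "SELECT * FROM output LIMIT %d OFFSET %d" % (chunksize, offset), conn)
    if len(chunk) == 0:   # empty page means everything has been read
        break
    print(len(chunk))     # 10, 10, 5
    offset += chunksize
```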
so this would need (to be consistent) `iterator` and `chunksize` keywords
For the time being, here is a simple implementation of the requested functionality: https://gist.github.com/lebedov/6831387
@jorisvandenbossche @hayd this needs to go on the new sql issues list?
Hmm, I'd prefer to keep the list in https://github.com/pydata/pandas/issues/6292 as the important todos that should ideally be finished before releasing it. This is a nice feature request, but not a blocker for the basic functionality. Just keep it as a separate issue?
ok... how about you create another issue (marked for 0.15) that will include items that are not in #6292
but are marked as SQL; that way it's easy to move an issue out of the current release into the next one (and track all the SQL ones). Make checkboxes and such.
I think #3745, #5008, and #2754 should go on one of these as well (or, if already satisfied by another issue, go ahead and close them).
This came up again here: http://stackoverflow.com/q/25633830/1240268
I take full responsibility for asking how to [pull large amounts of data from a remote server into a DataFrame](http://stackoverflow.com/q/25633830/341929), which @hayd just referenced and answered in such good detail on SO, for which I thank you!
I've updated the SO question with more context, but if I can help / contribute in any way here, I'd be more than happy to.
@mariusbutuc if you want to try to implement it and send a pull request, that would be very welcome!
I think this could be done inside the `read_sql` function (https://github.com/pydata/pandas/blob/master/pandas/io/sql.py#L870) using `fetchmany` instead of `fetchall`? (would that work?)
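A minimal sketch of that `fetchmany`-based idea, assuming a DB-API connection; `read_query_in_chunks` is a hypothetical helper written for illustration, not an existing pandas function:

```python
import sqlite3
import pandas as pd

def read_query_in_chunks(sql, con, chunksize):
    """Yield DataFrames of at most `chunksize` rows from a DB-API connection."""
    cursor = con.cursor()
    cursor.execute(sql)
    columns = [col_desc[0] for col_desc in cursor.description]
    while True:
        rows = cursor.fetchmany(chunksize)
        if not rows:
            cursor.close()
            break
        yield pd.DataFrame.from_records(rows, columns=columns)

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE output (id INTEGER, value REAL)")
conn.executemany("INSERT INTO output VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(25)])

for chunk in read_query_in_chunks("SELECT * FROM output", conn, chunksize=10):
    print(len(chunk))   # 10, 10, 5
```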
|
2014-09-20T15:23:47Z
|
<patch>
diff --git a/doc/source/io.rst b/doc/source/io.rst
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -3411,6 +3411,18 @@ Of course, you can specify a more "complex" query.
pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
+The :func:`~pandas.read_sql_query` function supports a ``chunksize`` argument.
+Specifying this will return an iterator through chunks of the query result:
+
+.. ipython:: python
+
+ df = pd.DataFrame(np.random.randn(20, 3), columns=list('abc'))
+ df.to_sql('data_chunks', engine, index=False)
+
+.. ipython:: python
+
+   for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
+ print(chunk)
You can also run a plain query without creating a dataframe with
:func:`~pandas.io.sql.execute`. This is useful for queries that don't return values,
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -801,7 +801,8 @@ Deprecations
Enhancements
~~~~~~~~~~~~
-- Added support for a ``chunksize`` parameter to ``to_sql`` function. This allows DataFrame to be written in chunks and avoid packet-size overflow errors (:issue:`8062`)
+- Added support for a ``chunksize`` parameter to ``to_sql`` function. This allows DataFrame to be written in chunks and avoid packet-size overflow errors (:issue:`8062`).
+- Added support for a ``chunksize`` parameter to ``read_sql`` function. Specifying this argument will return an iterator through chunks of the query result (:issue:`2908`).
- Added support for writing ``datetime.date`` and ``datetime.time`` object columns with ``to_sql`` (:issue:`6932`).
- Added support for specifying a ``schema`` to read from/write to with ``read_sql_table`` and ``to_sql`` (:issue:`7441`, :issue:`7952`).
For example:
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -32,7 +32,7 @@ class DatabaseError(IOError):
#------------------------------------------------------------------------------
-# Helper functions
+#--- Helper functions
_SQLALCHEMY_INSTALLED = None
@@ -115,6 +115,21 @@ def _parse_date_columns(data_frame, parse_dates):
return data_frame
+def _wrap_result(data, columns, index_col=None, coerce_float=True,
+ parse_dates=None):
+ """Wrap result set of query in a DataFrame """
+
+ frame = DataFrame.from_records(data, columns=columns,
+ coerce_float=coerce_float)
+
+ _parse_date_columns(frame, parse_dates)
+
+ if index_col is not None:
+ frame.set_index(index_col, inplace=True)
+
+ return frame
+
+
def execute(sql, con, cur=None, params=None):
"""
Execute the given SQL query using the provided connection object.
@@ -262,7 +277,8 @@ def uquery(sql, con=None, cur=None, retry=True, params=None):
#--- Read and write to DataFrames
def read_sql_table(table_name, con, schema=None, index_col=None,
- coerce_float=True, parse_dates=None, columns=None):
+ coerce_float=True, parse_dates=None, columns=None,
+ chunksize=None):
"""Read SQL database table into a DataFrame.
Given a table name and an SQLAlchemy engine, returns a DataFrame.
@@ -293,6 +309,9 @@ def read_sql_table(table_name, con, schema=None, index_col=None,
such as SQLite
columns : list
List of column names to select from sql table
+ chunksize : int, default None
+ If specified, return an iterator where `chunksize` is the number of
+ rows to include in each chunk.
Returns
-------
@@ -318,7 +337,7 @@ def read_sql_table(table_name, con, schema=None, index_col=None,
pandas_sql = SQLDatabase(con, meta=meta)
table = pandas_sql.read_table(
table_name, index_col=index_col, coerce_float=coerce_float,
- parse_dates=parse_dates, columns=columns)
+ parse_dates=parse_dates, columns=columns, chunksize=chunksize)
if table is not None:
return table
@@ -327,7 +346,7 @@ def read_sql_table(table_name, con, schema=None, index_col=None,
def read_sql_query(sql, con, index_col=None, coerce_float=True, params=None,
- parse_dates=None):
+ parse_dates=None, chunksize=None):
"""Read SQL query into a DataFrame.
Returns a DataFrame corresponding to the result set of the query
@@ -362,6 +381,9 @@ def read_sql_query(sql, con, index_col=None, coerce_float=True, params=None,
to the keyword arguments of :func:`pandas.to_datetime`
Especially useful with databases without native Datetime support,
such as SQLite
+ chunksize : int, default None
+ If specified, return an iterator where `chunksize` is the number of
+ rows to include in each chunk.
Returns
-------
@@ -376,11 +398,11 @@ def read_sql_query(sql, con, index_col=None, coerce_float=True, params=None,
pandas_sql = pandasSQL_builder(con)
return pandas_sql.read_query(
sql, index_col=index_col, params=params, coerce_float=coerce_float,
- parse_dates=parse_dates)
+ parse_dates=parse_dates, chunksize=chunksize)
def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
- parse_dates=None, columns=None):
+ parse_dates=None, columns=None, chunksize=None):
"""
Read SQL query or database table into a DataFrame.
@@ -415,6 +437,9 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
columns : list
List of column names to select from sql table (only used when reading
a table).
+ chunksize : int, default None
+ If specified, return an iterator where `chunksize` is the
+ number of rows to include in each chunk.
Returns
-------
@@ -438,7 +463,8 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
if isinstance(pandas_sql, SQLiteDatabase):
return pandas_sql.read_query(
sql, index_col=index_col, params=params,
- coerce_float=coerce_float, parse_dates=parse_dates)
+ coerce_float=coerce_float, parse_dates=parse_dates,
+ chunksize=chunksize)
try:
_is_table_name = pandas_sql.has_table(sql)
@@ -449,11 +475,12 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
pandas_sql.meta.reflect(only=[sql])
return pandas_sql.read_table(
sql, index_col=index_col, coerce_float=coerce_float,
- parse_dates=parse_dates, columns=columns)
+ parse_dates=parse_dates, columns=columns, chunksize=chunksize)
else:
return pandas_sql.read_query(
sql, index_col=index_col, params=params,
- coerce_float=coerce_float, parse_dates=parse_dates)
+ coerce_float=coerce_float, parse_dates=parse_dates,
+ chunksize=chunksize)
def to_sql(frame, name, con, flavor='sqlite', schema=None, if_exists='fail',
@@ -684,7 +711,27 @@ def insert(self, chunksize=None):
chunk_iter = zip(*[arr[start_i:end_i] for arr in data_list])
self._execute_insert(conn, keys, chunk_iter)
- def read(self, coerce_float=True, parse_dates=None, columns=None):
+ def _query_iterator(self, result, chunksize, columns, coerce_float=True,
+ parse_dates=None):
+ """Return generator through chunked result set"""
+
+ while True:
+ data = result.fetchmany(chunksize)
+ if not data:
+ break
+ else:
+ self.frame = DataFrame.from_records(
+ data, columns=columns, coerce_float=coerce_float)
+
+ self._harmonize_columns(parse_dates=parse_dates)
+
+ if self.index is not None:
+ self.frame.set_index(self.index, inplace=True)
+
+ yield self.frame
+
+ def read(self, coerce_float=True, parse_dates=None, columns=None,
+ chunksize=None):
if columns is not None and len(columns) > 0:
from sqlalchemy import select
@@ -696,18 +743,23 @@ def read(self, coerce_float=True, parse_dates=None, columns=None):
sql_select = self.table.select()
result = self.pd_sql.execute(sql_select)
- data = result.fetchall()
column_names = result.keys()
- self.frame = DataFrame.from_records(
- data, columns=column_names, coerce_float=coerce_float)
+ if chunksize is not None:
+ return self._query_iterator(result, chunksize, column_names,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates)
+ else:
+ data = result.fetchall()
+ self.frame = DataFrame.from_records(
+ data, columns=column_names, coerce_float=coerce_float)
- self._harmonize_columns(parse_dates=parse_dates)
+ self._harmonize_columns(parse_dates=parse_dates)
- if self.index is not None:
- self.frame.set_index(self.index, inplace=True)
+ if self.index is not None:
+ self.frame.set_index(self.index, inplace=True)
- return self.frame
+ return self.frame
def _index_name(self, index, index_label):
# for writing: index=True to include index in sql table
@@ -898,8 +950,8 @@ class SQLDatabase(PandasSQL):
Parameters
----------
engine : SQLAlchemy engine
- Engine to connect with the database. Using SQLAlchemy makes it possible to use any DB supported by that
- library.
+ Engine to connect with the database. Using SQLAlchemy makes it
+ possible to use any DB supported by that library.
schema : string, default None
Name of SQL schema in database to write to (if database flavor
supports this). If None, use default schema (default).
@@ -926,9 +978,10 @@ def execute(self, *args, **kwargs):
return self.engine.execute(*args, **kwargs)
def read_table(self, table_name, index_col=None, coerce_float=True,
- parse_dates=None, columns=None, schema=None):
+ parse_dates=None, columns=None, schema=None,
+ chunksize=None):
"""Read SQL database table into a DataFrame.
-
+
Parameters
----------
table_name : string
@@ -936,15 +989,16 @@ def read_table(self, table_name, index_col=None, coerce_float=True,
index_col : string, optional
Column to set as index
coerce_float : boolean, default True
- Attempt to convert values to non-string, non-numeric objects (like
- decimal.Decimal) to floating point. Can result in loss of Precision.
+ Attempt to convert values to non-string, non-numeric objects
+ (like decimal.Decimal) to floating point. This can result in
+ loss of precision.
parse_dates : list or dict
- List of column names to parse as dates
- Dict of ``{column_name: format string}`` where format string is
strftime compatible in case of parsing string times or is one of
(D, s, ns, ms, us) in case of parsing integer timestamps
- - Dict of ``{column_name: arg dict}``, where the arg dict corresponds
- to the keyword arguments of :func:`pandas.to_datetime`
+ - Dict of ``{column_name: arg}``, where the arg corresponds
+ to the keyword arguments of :func:`pandas.to_datetime`.
Especially useful with databases without native Datetime support,
such as SQLite
columns : list
@@ -953,6 +1007,9 @@ def read_table(self, table_name, index_col=None, coerce_float=True,
Name of SQL schema in database to query (if database flavor
supports this). If specified, this overwrites the default
schema of the SQLDatabase object.
+ chunksize : int, default None
+ If specified, return an iterator where `chunksize` is the number
+ of rows to include in each chunk.
Returns
-------
@@ -966,10 +1023,25 @@ def read_table(self, table_name, index_col=None, coerce_float=True,
"""
table = SQLTable(table_name, self, index=index_col, schema=schema)
return table.read(coerce_float=coerce_float,
- parse_dates=parse_dates, columns=columns)
-
+ parse_dates=parse_dates, columns=columns,
+ chunksize=chunksize)
+
+ @staticmethod
+ def _query_iterator(result, chunksize, columns, index_col=None,
+ coerce_float=True, parse_dates=None):
+ """Return generator through chunked result set"""
+
+ while True:
+ data = result.fetchmany(chunksize)
+ if not data:
+ break
+ else:
+ yield _wrap_result(data, columns, index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates)
+
def read_query(self, sql, index_col=None, coerce_float=True,
- parse_dates=None, params=None):
+ parse_dates=None, params=None, chunksize=None):
"""Read SQL query into a DataFrame.
Parameters
@@ -1006,30 +1078,31 @@ def read_query(self, sql, index_col=None, coerce_float=True,
read_sql_table : Read SQL database table into a DataFrame
read_sql
- """
+ """
args = _convert_params(sql, params)
result = self.execute(*args)
- data = result.fetchall()
columns = result.keys()
- data_frame = DataFrame.from_records(
- data, columns=columns, coerce_float=coerce_float)
-
- _parse_date_columns(data_frame, parse_dates)
-
- if index_col is not None:
- data_frame.set_index(index_col, inplace=True)
+ if chunksize is not None:
+ return self._query_iterator(result, chunksize, columns,
+ index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates)
+ else:
+ data = result.fetchall()
+ frame = _wrap_result(data, columns, index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates)
+ return frame
- return data_frame
-
read_sql = read_query
def to_sql(self, frame, name, if_exists='fail', index=True,
index_label=None, schema=None, chunksize=None):
"""
Write records stored in a DataFrame to a SQL database.
-
+
Parameters
----------
frame : DataFrame
@@ -1308,23 +1381,42 @@ def execute(self, *args, **kwargs):
ex = DatabaseError("Execution failed on sql '%s': %s" % (args[0], exc))
raise_with_traceback(ex)
+ @staticmethod
+ def _query_iterator(cursor, chunksize, columns, index_col=None,
+ coerce_float=True, parse_dates=None):
+ """Return generator through chunked result set"""
+
+ while True:
+ data = cursor.fetchmany(chunksize)
+ if not data:
+ cursor.close()
+ break
+ else:
+ yield _wrap_result(data, columns, index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates)
+
def read_query(self, sql, index_col=None, coerce_float=True, params=None,
- parse_dates=None):
+ parse_dates=None, chunksize=None):
+
args = _convert_params(sql, params)
cursor = self.execute(*args)
columns = [col_desc[0] for col_desc in cursor.description]
- data = self._fetchall_as_list(cursor)
- cursor.close()
- data_frame = DataFrame.from_records(
- data, columns=columns, coerce_float=coerce_float)
+ if chunksize is not None:
+ return self._query_iterator(cursor, chunksize, columns,
+ index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates)
+ else:
+ data = self._fetchall_as_list(cursor)
+ cursor.close()
- _parse_date_columns(data_frame, parse_dates)
+ frame = _wrap_result(data, columns, index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates)
+ return frame
- if index_col is not None:
- data_frame.set_index(index_col, inplace=True)
- return data_frame
-
def _fetchall_as_list(self, cur):
result = cur.fetchall()
if not isinstance(result, list):
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-26371
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix type annotations for pandas.core.indexes.base
Part of #25882
Current errors are:
pandas\core\indexes\base.py:9: error: Module 'pandas._libs' has no attribute 'join'
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org): 1.13.3 or higher
168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
172 for recommended and optional dependencies.
173
174 ## Installation from sources
175 To install pandas from source you need Cython in addition to the normal
176 dependencies above. Cython can be installed from pypi:
177
178 ```sh
179 pip install cython
180 ```
181
182 In the `pandas` directory (same one where you found this file after
183 cloning the git repo), execute:
184
185 ```sh
186 python setup.py install
187 ```
188
189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
190
191 ```sh
192 python setup.py develop
193 ```
194
195 Alternatively, you can use `pip` if you want all the dependencies pulled
196 in automatically (the `-e` option is for installing it in [development
197 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
198
199 ```sh
200 pip install -e .
201 ```
202
203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
204
205 ## License
206 [BSD 3](LICENSE)
207
208 ## Documentation
209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
224
225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
226
227 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
228
229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
230
231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
232
233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
234
235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
236
[end of README.md]
[start of doc/source/conf.py]
1 #
2 # pandas documentation build configuration file, created by
3 #
4 # This file is execfile()d with the current directory set to its containing
5 # dir.
6 #
7 # Note that not all possible configuration values are present in this
8 # autogenerated file.
9 #
10 # All configuration values have a default; values that are commented out
11 # serve to show the default.
12
13 import sys
14 import os
15 import inspect
16 import importlib
17 import logging
18 import jinja2
19 from sphinx.ext.autosummary import _import_by_name
20 from numpydoc.docscrape import NumpyDocString
21 from numpydoc.docscrape_sphinx import SphinxDocString
22
23 logger = logging.getLogger(__name__)
24
25 # https://github.com/sphinx-doc/sphinx/pull/2325/files
26 # Workaround for sphinx-build recursion limit overflow:
27 # pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
28 # RuntimeError: maximum recursion depth exceeded while pickling an object
29 #
30 # Python's default allowed recursion depth is 1000.
31 sys.setrecursionlimit(5000)
32
33 # If extensions (or modules to document with autodoc) are in another directory,
34 # add these directories to sys.path here. If the directory is relative to the
35 # documentation root, use os.path.abspath to make it absolute, like shown here.
36 # sys.path.append(os.path.abspath('.'))
37 sys.path.insert(0, os.path.abspath('../sphinxext'))
38 sys.path.extend([
39
40 # numpy standard doc extensions
41 os.path.join(os.path.dirname(__file__),
42 '..', '../..',
43 'sphinxext')
44
45 ])
46
47 # -- General configuration -----------------------------------------------
48
49 # Add any Sphinx extension module names here, as strings. They can be
50 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
51 # sphinxext.
52
53 extensions = ['sphinx.ext.autodoc',
54 'sphinx.ext.autosummary',
55 'sphinx.ext.doctest',
56 'sphinx.ext.extlinks',
57 'sphinx.ext.todo',
58 'numpydoc', # handle NumPy documentation formatted docstrings
59 'IPython.sphinxext.ipython_directive',
60 'IPython.sphinxext.ipython_console_highlighting',
61 'matplotlib.sphinxext.plot_directive',
62 'sphinx.ext.intersphinx',
63 'sphinx.ext.coverage',
64 'sphinx.ext.mathjax',
65 'sphinx.ext.ifconfig',
66 'sphinx.ext.linkcode',
67 'nbsphinx',
68 'contributors', # custom pandas extension
69 ]
70
71 exclude_patterns = ['**.ipynb_checkpoints']
72 try:
73 import nbconvert
74 except ImportError:
75 logger.warn('nbconvert not installed. Skipping notebooks.')
76 exclude_patterns.append('**/*.ipynb')
77 else:
78 try:
79 nbconvert.utils.pandoc.get_pandoc_version()
80 except nbconvert.utils.pandoc.PandocMissing:
81 logger.warn('Pandoc not installed. Skipping notebooks.')
82 exclude_patterns.append('**/*.ipynb')
83
84 # sphinx_pattern can be '-api' to exclude the API pages,
85 # the path to a file, or a Python object
86 # (e.g. '10min.rst' or 'pandas.DataFrame.head')
87 source_path = os.path.dirname(os.path.abspath(__file__))
88 pattern = os.environ.get('SPHINX_PATTERN')
89 if pattern:
90 for dirname, dirs, fnames in os.walk(source_path):
91 for fname in fnames:
92 if os.path.splitext(fname)[-1] in ('.rst', '.ipynb'):
93 fname = os.path.relpath(os.path.join(dirname, fname),
94 source_path)
95
96 if (fname == 'index.rst'
97 and os.path.abspath(dirname) == source_path):
98 continue
99 elif pattern == '-api' and dirname == 'reference':
100 exclude_patterns.append(fname)
101 elif pattern != '-api' and fname != pattern:
102 exclude_patterns.append(fname)
103
104 with open(os.path.join(source_path, 'index.rst.template')) as f:
105 t = jinja2.Template(f.read())
106 with open(os.path.join(source_path, 'index.rst'), 'w') as f:
107 f.write(t.render(include_api=pattern is None,
108 single_doc=(pattern
109 if pattern is not None and pattern != '-api'
110 else None)))
111 autosummary_generate = True if pattern is None else ['index']
112
113 # matplotlib plot directive
114 plot_include_source = True
115 plot_formats = [("png", 90)]
116 plot_html_show_formats = False
117 plot_html_show_source_link = False
118 plot_pre_code = """import numpy as np
119 import pandas as pd"""
120
121 # Add any paths that contain templates here, relative to this directory.
122 templates_path = ['../_templates']
123
124 # The suffix of source filenames.
125 source_suffix = [
126 '.rst',
127 ]
128
129 # The encoding of source files.
130 source_encoding = 'utf-8'
131
132 # The master toctree document.
133 master_doc = 'index'
134
135 # General information about the project.
136 project = 'pandas'
137 copyright = '2008-2014, the pandas development team'
138
139 # The version info for the project you're documenting, acts as replacement for
140 # |version| and |release|, also used in various other places throughout the
141 # built documents.
142 #
143 # The short X.Y version.
144 import pandas
145
146 # version = '%s r%s' % (pandas.__version__, svn_version())
147 version = str(pandas.__version__)
148
149 # The full version, including alpha/beta/rc tags.
150 release = version
151
152 # The language for content autogenerated by Sphinx. Refer to documentation
153 # for a list of supported languages.
154 # language = None
155
156 # There are two options for replacing |today|: either, you set today to some
157 # non-false value, then it is used:
158 # today = ''
159 # Else, today_fmt is used as the format for a strftime call.
160 # today_fmt = '%B %d, %Y'
161
162 # List of documents that shouldn't be included in the build.
163 # unused_docs = []
164
165 # List of directories, relative to source directory, that shouldn't be searched
166 # for source files.
167 exclude_trees = []
168
169 # The reST default role (used for this markup: `text`) to use for all
170 # documents. default_role = None
171
172 # If true, '()' will be appended to :func: etc. cross-reference text.
173 # add_function_parentheses = True
174
175 # If true, the current module name will be prepended to all description
176 # unit titles (such as .. function::).
177 # add_module_names = True
178
179 # If true, sectionauthor and moduleauthor directives will be shown in the
180 # output. They are ignored by default.
181 # show_authors = False
182
183 # The name of the Pygments (syntax highlighting) style to use.
184 pygments_style = 'sphinx'
185
186 # A list of ignored prefixes for module index sorting.
187 # modindex_common_prefix = []
188
189
190 # -- Options for HTML output ---------------------------------------------
191
192 # The theme to use for HTML and HTML Help pages. Major themes that come with
193 # Sphinx are currently 'default' and 'sphinxdoc'.
194 html_theme = 'nature_with_gtoc'
195
196 # The style sheet to use for HTML and HTML Help pages. A file of that name
197 # must exist either in Sphinx' static/ path, or in one of the custom paths
198 # given in html_static_path.
199 # html_style = 'statsmodels.css'
200
201 # Theme options are theme-specific and customize the look and feel of a theme
202 # further. For a list of options available for each theme, see the
203 # documentation.
204 # html_theme_options = {}
205
206 # Add any paths that contain custom themes here, relative to this directory.
207 html_theme_path = ['themes']
208
209 # The name for this set of Sphinx documents. If None, it defaults to
210 # "<project> v<release> documentation".
211 # html_title = None
212
213 # A shorter title for the navigation bar. Default is the same as html_title.
214 # html_short_title = None
215
216 # The name of an image file (relative to this directory) to place at the top
217 # of the sidebar.
218 # html_logo = None
219
220 # Add any paths that contain custom static files (such as style sheets) here,
221 # relative to this directory. They are copied after the builtin static files,
222 # so a file named "default.css" will overwrite the builtin "default.css".
223 html_static_path = ['_static']
224
225 # The name of an image file (within the static path) to use as favicon of the
226 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
227 # pixels large.
228 html_favicon = os.path.join(html_static_path[0], 'favicon.ico')
229
230 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
231 # using the given strftime format.
232 # html_last_updated_fmt = '%b %d, %Y'
233
234 # If true, SmartyPants will be used to convert quotes and dashes to
235 # typographically correct entities.
236 # html_use_smartypants = True
237
238 # Custom sidebar templates, maps document names to template names.
239 # html_sidebars = {}
240
241 # Additional templates that should be rendered to pages, maps page names to
242 # template names.
243
244 # Add redirect for previously existing API pages
245 # each item is like `(from_old, to_new)`
246 # To redirect a class and all its methods, see below
247 # https://github.com/pandas-dev/pandas/issues/16186
248
249 moved_api_pages = [
250 ('pandas.core.common.isnull', 'pandas.isna'),
251 ('pandas.core.common.notnull', 'pandas.notna'),
252 ('pandas.core.reshape.get_dummies', 'pandas.get_dummies'),
253 ('pandas.tools.merge.concat', 'pandas.concat'),
254 ('pandas.tools.merge.merge', 'pandas.merge'),
255 ('pandas.tools.pivot.pivot_table', 'pandas.pivot_table'),
256 ('pandas.tseries.tools.to_datetime', 'pandas.to_datetime'),
257 ('pandas.io.clipboard.read_clipboard', 'pandas.read_clipboard'),
258 ('pandas.io.excel.ExcelFile.parse', 'pandas.ExcelFile.parse'),
259 ('pandas.io.excel.read_excel', 'pandas.read_excel'),
260 ('pandas.io.gbq.read_gbq', 'pandas.read_gbq'),
261 ('pandas.io.html.read_html', 'pandas.read_html'),
262 ('pandas.io.json.read_json', 'pandas.read_json'),
263 ('pandas.io.parsers.read_csv', 'pandas.read_csv'),
264 ('pandas.io.parsers.read_fwf', 'pandas.read_fwf'),
265 ('pandas.io.parsers.read_table', 'pandas.read_table'),
266 ('pandas.io.pickle.read_pickle', 'pandas.read_pickle'),
267 ('pandas.io.pytables.HDFStore.append', 'pandas.HDFStore.append'),
268 ('pandas.io.pytables.HDFStore.get', 'pandas.HDFStore.get'),
269 ('pandas.io.pytables.HDFStore.put', 'pandas.HDFStore.put'),
270 ('pandas.io.pytables.HDFStore.select', 'pandas.HDFStore.select'),
271 ('pandas.io.pytables.read_hdf', 'pandas.read_hdf'),
272 ('pandas.io.sql.read_sql', 'pandas.read_sql'),
273 ('pandas.io.sql.read_frame', 'pandas.read_frame'),
274 ('pandas.io.sql.write_frame', 'pandas.write_frame'),
275 ('pandas.io.stata.read_stata', 'pandas.read_stata'),
276 ]
277
278 # Again, tuples of (from_old, to_new)
279 moved_classes = [
280 ('pandas.tseries.resample.Resampler', 'pandas.core.resample.Resampler'),
281 ('pandas.formats.style.Styler', 'pandas.io.formats.style.Styler'),
282 ]
283
284 for old, new in moved_classes:
285 # the class itself...
286 moved_api_pages.append((old, new))
287
288 mod, classname = new.rsplit('.', 1)
289 klass = getattr(importlib.import_module(mod), classname)
290 methods = [x for x in dir(klass)
291 if not x.startswith('_') or x in ('__iter__', '__array__')]
292
293 for method in methods:
294 # ... and each of its public methods
295 moved_api_pages.append(
296 ("{old}.{method}".format(old=old, method=method),
297 "{new}.{method}".format(new=new, method=method))
298 )
299
300 if pattern is None:
301 html_additional_pages = {
302 'generated/' + page[0]: 'api_redirect.html'
303 for page in moved_api_pages
304 }
305
306
307 header = """\
308 .. currentmodule:: pandas
309
310 .. ipython:: python
311 :suppress:
312
313 import numpy as np
314 import pandas as pd
315
316 randn = np.random.randn
317 np.random.seed(123456)
318 np.set_printoptions(precision=4, suppress=True)
319 pd.options.display.max_rows = 15
320
321 import os
322 os.chdir('{}')
323 """.format(os.path.dirname(os.path.dirname(__file__)))
324
325
326 html_context = {
327 'redirects': {old: new for old, new in moved_api_pages},
328 'header': header
329 }
330
331 # If false, no module index is generated.
332 html_use_modindex = True
333
334 # If false, no index is generated.
335 # html_use_index = True
336
337 # If true, the index is split into individual pages for each letter.
338 # html_split_index = False
339
340 # If true, links to the reST sources are added to the pages.
341 # html_show_sourcelink = True
342
343 # If true, an OpenSearch description file will be output, and all pages will
344 # contain a <link> tag referring to it. The value of this option must be the
345 # base URL from which the finished HTML is served.
346 # html_use_opensearch = ''
347
348 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
349 # html_file_suffix = ''
350
351 # Output file base name for HTML help builder.
352 htmlhelp_basename = 'pandas'
353
354 # -- Options for nbsphinx ------------------------------------------------
355
356 nbsphinx_allow_errors = True
357
358 # -- Options for LaTeX output --------------------------------------------
359
360 latex_elements = {}
361
362 # The paper size ('letter' or 'a4').
363 # latex_paper_size = 'letter'
364
365 # The font size ('10pt', '11pt' or '12pt').
366 # latex_font_size = '10pt'
367
368 # Grouping the document tree into LaTeX files. List of tuples (source start
369 # file, target name, title, author, documentclass [howto/manual]).
370 latex_documents = [
371 ('index', 'pandas.tex',
372 'pandas: powerful Python data analysis toolkit',
373 r'Wes McKinney\n\& PyData Development Team', 'manual'),
374 ]
375
376 # The name of an image file (relative to this directory) to place at the top of
377 # the title page.
378 # latex_logo = None
379
380 # For "manual" documents, if this is true, then toplevel headings are parts,
381 # not chapters.
382 # latex_use_parts = False
383
384 # Additional stuff for the LaTeX preamble.
385 # latex_preamble = ''
386
387 # Documents to append as an appendix to all manuals.
388 # latex_appendices = []
389
390 # If false, no module index is generated.
391 # latex_use_modindex = True
392
393
394 if pattern is None:
395 intersphinx_mapping = {
396 'dateutil': ("https://dateutil.readthedocs.io/en/latest/", None),
397 'matplotlib': ('https://matplotlib.org/', None),
398 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
399 'pandas-gbq': ('https://pandas-gbq.readthedocs.io/en/latest/', None),
400 'py': ('https://pylib.readthedocs.io/en/latest/', None),
401 'python': ('https://docs.python.org/3/', None),
402 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
403 'statsmodels': ('http://www.statsmodels.org/devel/', None),
404 }
405
406 # extlinks alias
407 extlinks = {'issue': ('https://github.com/pandas-dev/pandas/issues/%s',
408 'GH'),
409 'wiki': ('https://github.com/pandas-dev/pandas/wiki/%s',
410 'wiki ')}
411
412
413 ipython_warning_is_error = False
414 ipython_exec_lines = [
415 'import numpy as np',
416 'import pandas as pd',
417 # This ensures correct rendering on system with console encoding != utf8
418 # (windows). It forces pandas to encode its output reprs using utf8
419 # wherever the docs are built. The docs' target is the browser, not
420 # the console, so this is fine.
421 'pd.options.display.encoding="utf8"'
422 ]
423
424
425 def sphinxdocstring_str(self, indent=0, func_role="obj"):
426 # Pandas displays Attributes section in style like Methods section
427
428 # Function is copy of `SphinxDocString.__str__`
429 ns = {
430 'signature': self._str_signature(),
431 'index': self._str_index(),
432 'summary': self._str_summary(),
433 'extended_summary': self._str_extended_summary(),
434 'parameters': self._str_param_list('Parameters'),
435 'returns': self._str_returns('Returns'),
436 'yields': self._str_returns('Yields'),
437 'other_parameters': self._str_param_list('Other Parameters'),
438 'raises': self._str_param_list('Raises'),
439 'warns': self._str_param_list('Warns'),
440 'warnings': self._str_warnings(),
441 'see_also': self._str_see_also(func_role),
442 'notes': self._str_section('Notes'),
443 'references': self._str_references(),
444 'examples': self._str_examples(),
445 # Replaced `self._str_param_list('Attributes', fake_autosummary=True)`
446 # with `self._str_member_list('Attributes')`
447 'attributes': self._str_member_list('Attributes'),
448 'methods': self._str_member_list('Methods'),
449 }
450 ns = {k: '\n'.join(v) for k, v in ns.items()}
451
452 rendered = self.template.render(**ns)
453 return '\n'.join(self._str_indent(rendered.split('\n'), indent))
454
455
456 SphinxDocString.__str__ = sphinxdocstring_str
457
458
459 # Fix "WARNING: Inline strong start-string without end-string."
460 # PR #155 "Escape the * in *args and **kwargs" from numpydoc
461 # Can be removed after PR merges in v0.9.0
462 def decorate_process_param(func):
463 def _escape_args_and_kwargs(name):
464 if name[:2] == '**':
465 return r'\*\*' + name[2:]
466 elif name[:1] == '*':
467 return r'\*' + name[1:]
468 else:
469 return name
470
471 def func_wrapper(self, param, desc, fake_autosummary):
472 param = _escape_args_and_kwargs(param.strip())
473 return func(self, param, desc, fake_autosummary)
474
475 return func_wrapper
476
477
478 func = SphinxDocString._process_param
479 SphinxDocString._process_param = decorate_process_param(func)
480
481 # Add custom Documenter to handle attributes/methods of an AccessorProperty
482 # eg pandas.Series.str and pandas.Series.dt (see GH9322)
483
484 import sphinx
485 from sphinx.util import rpartition
486 from sphinx.ext.autodoc import (
487 Documenter, MethodDocumenter, AttributeDocumenter)
488 from sphinx.ext.autosummary import Autosummary
489
490
491 class AccessorDocumenter(MethodDocumenter):
492 """
493 Specialized Documenter subclass for accessors.
494 """
495 objtype = 'accessor'
496 directivetype = 'method'
497
498 # lower than MethodDocumenter so this is not chosen for normal methods
499 priority = 0.6
500
501 def format_signature(self):
502 # this method gives an error/warning for the accessors, therefore
503 # overriding it (accessor has no arguments)
504 return ''
505
506
507 class AccessorLevelDocumenter(Documenter):
508 """
509 Specialized Documenter subclass for objects on accessor level (methods,
510 attributes).
511 """
512 # This is the simple straightforward version
513 # modname is None, base the last elements (eg 'hour')
514 # and path the part before (eg 'Series.dt')
515 # def resolve_name(self, modname, parents, path, base):
516 # modname = 'pandas'
517 # mod_cls = path.rstrip('.')
518 # mod_cls = mod_cls.split('.')
519 #
520 # return modname, mod_cls + [base]
521 def resolve_name(self, modname, parents, path, base):
522 if modname is None:
523 if path:
524 mod_cls = path.rstrip('.')
525 else:
526 mod_cls = None
527 # if documenting a class-level object without path,
528 # there must be a current class, either from a parent
529 # auto directive ...
530 mod_cls = self.env.temp_data.get('autodoc:class')
531 # ... or from a class directive
532 if mod_cls is None:
533 mod_cls = self.env.temp_data.get('py:class')
534 # ... if still None, there's no way to know
535 if mod_cls is None:
536 return None, []
537 # HACK: this is added in comparison to ClassLevelDocumenter
538 # mod_cls still exists of class.accessor, so an extra
539 # rpartition is needed
540 modname, accessor = rpartition(mod_cls, '.')
541 modname, cls = rpartition(modname, '.')
542 parents = [cls, accessor]
543 # if the module name is still missing, get it like above
544 if not modname:
545 modname = self.env.temp_data.get('autodoc:module')
546 if not modname:
547 if sphinx.__version__ > '1.3':
548 modname = self.env.ref_context.get('py:module')
549 else:
550 modname = self.env.temp_data.get('py:module')
551 # ... else, it stays None, which means invalid
552 return modname, parents + [base]
553
554
555 class AccessorAttributeDocumenter(AccessorLevelDocumenter,
556 AttributeDocumenter):
557 objtype = 'accessorattribute'
558 directivetype = 'attribute'
559
560 # lower than AttributeDocumenter so this is not chosen for normal
561 # attributes
562 priority = 0.6
563
564
565 class AccessorMethodDocumenter(AccessorLevelDocumenter, MethodDocumenter):
566 objtype = 'accessormethod'
567 directivetype = 'method'
568
569 # lower than MethodDocumenter so this is not chosen for normal methods
570 priority = 0.6
571
572
573 class AccessorCallableDocumenter(AccessorLevelDocumenter, MethodDocumenter):
574 """
575 This documenter lets us remove .__call__ from the method signature for
576 callable accessors like Series.plot
577 """
578 objtype = 'accessorcallable'
579 directivetype = 'method'
580
581 # lower than MethodDocumenter; otherwise the doc build prints warnings
582 priority = 0.5
583
584 def format_name(self):
585 return MethodDocumenter.format_name(self).rstrip('.__call__')
586
587
588 class PandasAutosummary(Autosummary):
589 """
590 This alternative autosummary class lets us override the table summary for
591 Series.plot and DataFrame.plot in the API docs.
592 """
593 def _replace_pandas_items(self, display_name, sig, summary, real_name):
594 # this a hack: ideally we should extract the signature from the
595 # .__call__ method instead of hard coding this
596 if display_name == 'DataFrame.plot':
597 sig = '([x, y, kind, ax, ....])'
598 summary = 'DataFrame plotting accessor and method'
599 elif display_name == 'Series.plot':
600 sig = '([kind, ax, figsize, ....])'
601 summary = 'Series plotting accessor and method'
602 return (display_name, sig, summary, real_name)
603
604 @staticmethod
605 def _is_deprecated(real_name):
606 try:
607 obj, parent, modname = _import_by_name(real_name)
608 except ImportError:
609 return False
610 doc = NumpyDocString(obj.__doc__ or '')
611 summary = ''.join(doc['Summary'] + doc['Extended Summary'])
612 return '.. deprecated::' in summary
613
614 def _add_deprecation_prefixes(self, items):
615 for item in items:
616 display_name, sig, summary, real_name = item
617 if self._is_deprecated(real_name):
618 summary = '(DEPRECATED) %s' % summary
619 yield display_name, sig, summary, real_name
620
621 def get_items(self, names):
622 items = Autosummary.get_items(self, names)
623 items = [self._replace_pandas_items(*item) for item in items]
624 items = list(self._add_deprecation_prefixes(items))
625 return items
626
627
628 # based on numpy doc/source/conf.py
629 def linkcode_resolve(domain, info):
630 """
631 Determine the URL corresponding to Python object
632 """
633 if domain != 'py':
634 return None
635
636 modname = info['module']
637 fullname = info['fullname']
638
639 submod = sys.modules.get(modname)
640 if submod is None:
641 return None
642
643 obj = submod
644 for part in fullname.split('.'):
645 try:
646 obj = getattr(obj, part)
647 except AttributeError:
648 return None
649
650 try:
651 # inspect.unwrap() was added in Python version 3.4
652 if sys.version_info >= (3, 5):
653 fn = inspect.getsourcefile(inspect.unwrap(obj))
654 else:
655 fn = inspect.getsourcefile(obj)
656 except TypeError:
657 fn = None
658 if not fn:
659 return None
660
661 try:
662 source, lineno = inspect.getsourcelines(obj)
663 except OSError:
664 lineno = None
665
666 if lineno:
667 linespec = "#L{:d}-L{:d}".format(lineno, lineno + len(source) - 1)
668 else:
669 linespec = ""
670
671 fn = os.path.relpath(fn, start=os.path.dirname(pandas.__file__))
672
673 if '+' in pandas.__version__:
674 return ("http://github.com/pandas-dev/pandas/blob/master/pandas/"
675 "{}{}".format(fn, linespec))
676 else:
677 return ("http://github.com/pandas-dev/pandas/blob/"
678 "v{}/pandas/{}{}".format(pandas.__version__, fn, linespec))
679
680
681 # remove the docstring of the flags attribute (inherited from numpy ndarray)
682 # because these give doc build errors (see GH issue 5331)
683 def remove_flags_docstring(app, what, name, obj, options, lines):
684 if what == "attribute" and name.endswith(".flags"):
685 del lines[:]
686
687
688 def process_class_docstrings(app, what, name, obj, options, lines):
689 """
690 For those classes for which we use ::
691
692 :template: autosummary/class_without_autosummary.rst
693
694 the documented attributes/methods have to be listed in the class
695 docstring. However, if one of those lists is empty, we use 'None',
696 which then generates warnings in sphinx / ugly html output.
697 This "autodoc-process-docstring" event connector removes that part
698 from the processed docstring.
699
700 """
701 if what == "class":
702 joined = '\n'.join(lines)
703
704 templates = [
705 """.. rubric:: Attributes
706
707 .. autosummary::
708 :toctree:
709
710 None
711 """,
712 """.. rubric:: Methods
713
714 .. autosummary::
715 :toctree:
716
717 None
718 """
719 ]
720
721 for template in templates:
722 if template in joined:
723 joined = joined.replace(template, '')
724 lines[:] = joined.split('\n')
725
726
727 suppress_warnings = [
728 # We "overwrite" autosummary with our PandasAutosummary, but
729 # still want the regular autosummary setup to run. So we just
730 # suppress this warning.
731 'app.add_directive'
732 ]
733 if pattern:
734 # When building a single document we don't want to warn because references
735 # to other documents are unknown, as it's expected
736 suppress_warnings.append('ref.ref')
737
738
739 def rstjinja(app, docname, source):
740 """
741 Render our pages as a jinja template for fancy templating goodness.
742 """
743 # http://ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/
744 # Make sure we're outputting HTML
745 if app.builder.format != 'html':
746 return
747 src = source[0]
748 rendered = app.builder.templates.render_string(
749 src, app.config.html_context
750 )
751 source[0] = rendered
752
753
754 def setup(app):
755 app.connect("source-read", rstjinja)
756 app.connect("autodoc-process-docstring", remove_flags_docstring)
757 app.connect("autodoc-process-docstring", process_class_docstrings)
758 app.add_autodocumenter(AccessorDocumenter)
759 app.add_autodocumenter(AccessorAttributeDocumenter)
760 app.add_autodocumenter(AccessorMethodDocumenter)
761 app.add_autodocumenter(AccessorCallableDocumenter)
762 app.add_directive('autosummary', PandasAutosummary)
763
[end of doc/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
a6e43a43f2cb1b4b7d46b262be2efb825d033eb8
|
Fix type annotations for pandas.core.indexes.base
Part of #25882
Current errors are:
pandas\core\indexes\base.py:9: error: Module 'pandas._libs' has no attribute 'join'
|
The line with the error is:
from pandas._libs import (
    algos as libalgos, index as libindex, join as libjoin, lib)
A `join.pyx` file does exist in the `_libs` directory, but mypy is still giving an error. The other imports `algos`, `index` and `lib` are also `.pyx` files, but there is no error for them.
@WillAyd some help here please. Do we need a `.pyi` file?
You can just move the `join` alias import to a separate line - Mypy doesn’t seem to like that all being on one line, and we’ve done that elsewhere.
@WillAyd you mean like this? :point_down:
from pandas._libs import (
    algos as libalgos, index as libindex, lib)
from pandas._libs import join as libjoin
No matter how I import it, it makes no difference. Still the same error.
That’s right (replying to the "you mean like this?" snippet above).
Try `import pandas._libs.join as libjoin`.
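For concreteness, here is a minimal sketch of the two import styles discussed in this thread; the claim that the aliased-submodule form satisfies mypy is taken from the suggestions above rather than verified independently:

```python
# Original form flagged in this issue: "join" is a Cython extension module,
# and mypy reports that pandas._libs has no attribute 'join'.
from pandas._libs import algos as libalgos, index as libindex, join as libjoin, lib

# Form suggested above: import the submodule directly and alias it.
# At runtime both variants bind the same module object to the name libjoin.
import pandas._libs.join as libjoin
```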
|
2019-05-13T15:19:17Z
|
<patch>
diff --git a/mypy.ini b/mypy.ini
--- a/mypy.ini
+++ b/mypy.ini
@@ -5,9 +5,6 @@ follow_imports=silent
[mypy-pandas.conftest,pandas.tests.*]
ignore_errors=True
-[mypy-pandas.core.indexes.base]
-ignore_errors=True
-
[mypy-pandas.core.indexes.datetimelike]
ignore_errors=True
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6,8 +6,8 @@
import numpy as np
-from pandas._libs import (
- algos as libalgos, index as libindex, join as libjoin, lib)
+from pandas._libs import algos as libalgos, index as libindex, lib
+import pandas._libs.join as libjoin
from pandas._libs.lib import is_datetime_array
from pandas._libs.tslibs import OutOfBoundsDatetime, Timedelta, Timestamp
from pandas._libs.tslibs.timezones import tz_compare
</patch>
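A note on the mypy.ini hunk above: removing the `[mypy-pandas.core.indexes.base]` override means that module is type-checked again, so the rearranged import has to actually silence the original error for the check to pass (this is my reading of the diff, not something stated elsewhere in the thread).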
|
[]
|
[]
| |||
mesonbuild__meson-4489
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add an `install:` kwarg to `configure_file()`
Just for consistency with other targets; otherwise people get confused by it not existing, even though setting `install_dir:` has the same effect: https://github.com/mesonbuild/meson/issues/860#issuecomment-410086728
</issue>
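Today, installing the result of `configure_file()` is implied by passing `install_dir:`; the request is only to add an explicit boolean for consistency. As a rough, hypothetical sketch in plain Python (not Meson's actual interpreter code; the helper name and error messages below are made up for illustration), the kwarg logic could look like this:

```python
# Hypothetical kwarg handling for configure_file(): an explicit `install:`
# boolean coexisting with the existing `install_dir:` behaviour.
def should_install_configured_file(kwargs):
    install = kwargs.get('install')              # proposed kwarg (None if absent)
    install_dir = kwargs.get('install_dir', '')  # existing kwarg
    if install is not None and not isinstance(install, bool):
        raise TypeError('"install" keyword argument must be a boolean')
    if install and not install_dir:
        raise ValueError('"install_dir" must be specified when "install" is true')
    if install is None:
        # Backwards compatible: a non-empty install_dir alone still installs.
        return bool(install_dir)
    return install

# Old-style call keeps installing; the new kwarg makes the intent explicit.
print(should_install_configured_file({'install_dir': 'share/foo'}))             # True
print(should_install_configured_file({'install': True, 'install_dir': 'etc'}))  # True
print(should_install_configured_file({'install': False, 'install_dir': 'etc'})) # False
```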
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13 [](https://lgtm.com/projects/g/mesonbuild/meson/context:python)
14 [](https://lgtm.com/projects/g/mesonbuild/meson/alerts)
15
16 #### Dependencies
17
18 - [Python](http://python.org) (version 3.5 or newer)
19 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
20
21 #### Installing from source
22
23 You can run Meson directly from a revision control checkout or an
24 extracted tarball. If you wish you can install it locally with the
25 standard Python distutils command `python3 setup.py install <your
26 options here>`.
27
28 Meson is also available from
29 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
30 with `pip3 install meson` (this does not require a source checkout,
31 pip will download the package automatically). The exact command to
32 type to install with pip can vary between systems, be sure to use the
33 Python 3 version of pip.
34
35 #### Running
36
37 Meson requires that you have a source directory and a build directory
38 and that these two are different. In your source root must exist a file
39 called 'meson.build'. To generate the build system run this command:
40
41 `meson <source directory> <build directory>`
42
43 Depending on how you obtained Meson the command might also be called
44 `meson.py` instead of plain `meson`. In the rest of this document we
45 are going to use the latter form.
46
47 You can omit either of the two directories, and Meson will substitute
48 the current directory and autodetect what you mean. This allows you to
49 do things like this:
50
51 `cd source_root; mkdir builddir; cd builddir; meson ..`
52
53 or
54
55 `cd source_root; mkdir builddir; meson builddir`
56
57 To compile, cd into your build directory and type `ninja`. To run unit
58 tests, type `ninja test`.
59
60 Install is the same but it can take an extra argument:
61
62 `DESTDIR=/destdir/path ninja install`
63
64 `DESTDIR` can be omitted. If you are installing to system directories,
65 you may need to run this command with sudo.
66
67
68 #### Contributing
69
70 We love code contributions. See the [contributing.md](contributing.md) file for
71 details.
72
73
74 #### IRC
75
76 The irc channel for Meson is `#mesonbuild` over at Freenode.
77
78 You can use [FreeNode's official webchat][meson_irc]
79 to connect to this channel.
80
81 [meson_irc]: https://webchat.freenode.net/?channels=%23mesonbuild
82
83 #### Further info
84
85 More information about the Meson build system can be found at the
86 [project's home page](http://mesonbuild.com).
87
88 Meson is a registered trademark of Jussi Pakkanen.
89
[end of README.md]
[start of mesonbuild/modules/pkgconfig.py]
1 # Copyright 2015 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os, types
16 from pathlib import PurePath
17
18 from .. import build
19 from .. import dependencies
20 from .. import mesonlib
21 from .. import mlog
22 from . import ModuleReturnValue
23 from . import ExtensionModule
24 from ..interpreterbase import permittedKwargs, FeatureNew, FeatureNewKwargs
25
26 class DependenciesHelper:
27 def __init__(self, name):
28 self.name = name
29 self.pub_libs = []
30 self.pub_reqs = []
31 self.priv_libs = []
32 self.priv_reqs = []
33 self.cflags = []
34 self.version_reqs = {}
35
36 def add_pub_libs(self, libs):
37 libs, reqs, cflags = self._process_libs(libs, True)
38 self.pub_libs = libs + self.pub_libs # prepend to preserve dependencies
39 self.pub_reqs += reqs
40 self.cflags += cflags
41
42 def add_priv_libs(self, libs):
43 libs, reqs, _ = self._process_libs(libs, False)
44 self.priv_libs = libs + self.priv_libs
45 self.priv_reqs += reqs
46
47 def add_pub_reqs(self, reqs):
48 self.pub_reqs += self._process_reqs(reqs)
49
50 def add_priv_reqs(self, reqs):
51 self.priv_reqs += self._process_reqs(reqs)
52
53 def _check_generated_pc_deprecation(self, obj):
54 if hasattr(obj, 'generated_pc_warn'):
55 mlog.deprecation('Library', mlog.bold(obj.name), 'was passed to the '
56 '"libraries" keyword argument of a previous call '
57 'to generate() method instead of first positional '
58 'argument.', 'Adding', mlog.bold(obj.generated_pc),
59 'to "Requires" field, but this is a deprecated '
60 'behaviour that will change in a future version '
61 'of Meson. Please report the issue if this '
62 'warning cannot be avoided in your case.',
63 location=obj.generated_pc_warn)
64
65 def _process_reqs(self, reqs):
66 '''Returns string names of requirements'''
67 processed_reqs = []
68 for obj in mesonlib.listify(reqs, unholder=True):
69 if hasattr(obj, 'generated_pc'):
70 self._check_generated_pc_deprecation(obj)
71 processed_reqs.append(obj.generated_pc)
72 elif hasattr(obj, 'pcdep'):
73 pcdeps = mesonlib.listify(obj.pcdep)
74 for d in pcdeps:
75 processed_reqs.append(d.name)
76 self.add_version_reqs(d.name, obj.version_reqs)
77 elif isinstance(obj, dependencies.PkgConfigDependency):
78 if obj.found():
79 processed_reqs.append(obj.name)
80 self.add_version_reqs(obj.name, obj.version_reqs)
81 elif isinstance(obj, str):
82 name, version_req = self.split_version_req(obj)
83 processed_reqs.append(name)
84 self.add_version_reqs(name, version_req)
85 elif isinstance(obj, dependencies.Dependency) and not obj.found():
86 pass
87 else:
88 raise mesonlib.MesonException('requires argument not a string, '
89 'library with pkgconfig-generated file '
90 'or pkgconfig-dependency object, '
91 'got {!r}'.format(obj))
92 return processed_reqs
93
94 def add_cflags(self, cflags):
95 self.cflags += mesonlib.stringlistify(cflags)
96
97 def _process_libs(self, libs, public):
98 libs = mesonlib.listify(libs, unholder=True)
99 processed_libs = []
100 processed_reqs = []
101 processed_cflags = []
102 for obj in libs:
103 shared_library_only = getattr(obj, 'shared_library_only', False)
104 if hasattr(obj, 'pcdep'):
105 pcdeps = mesonlib.listify(obj.pcdep)
106 for d in pcdeps:
107 processed_reqs.append(d.name)
108 self.add_version_reqs(d.name, obj.version_reqs)
109 elif hasattr(obj, 'generated_pc'):
110 self._check_generated_pc_deprecation(obj)
111 processed_reqs.append(obj.generated_pc)
112 elif isinstance(obj, dependencies.PkgConfigDependency):
113 if obj.found():
114 processed_reqs.append(obj.name)
115 self.add_version_reqs(obj.name, obj.version_reqs)
116 elif isinstance(obj, dependencies.ThreadDependency):
117 processed_libs += obj.get_compiler().thread_link_flags(obj.env)
118 processed_cflags += obj.get_compiler().thread_flags(obj.env)
119 elif isinstance(obj, dependencies.InternalDependency):
120 if obj.found():
121 processed_libs += obj.get_link_args()
122 processed_cflags += obj.get_compile_args()
123 if public:
124 self.add_pub_libs(obj.libraries)
125 else:
126 self.add_priv_libs(obj.libraries)
127 elif isinstance(obj, dependencies.Dependency):
128 if obj.found():
129 processed_libs += obj.get_link_args()
130 processed_cflags += obj.get_compile_args()
131 elif isinstance(obj, build.SharedLibrary) and shared_library_only:
132 # Do not pull dependencies for shared libraries because they are
133 # only required for static linking. Adding private requires has
134 # the side effect of exposing their cflags, which is the
135 # intended behaviour of pkg-config but force Debian to add more
136 # than needed build deps.
137 # See https://bugs.freedesktop.org/show_bug.cgi?id=105572
138 processed_libs.append(obj)
139 elif isinstance(obj, (build.SharedLibrary, build.StaticLibrary)):
140 processed_libs.append(obj)
141 if isinstance(obj, build.StaticLibrary) and public:
142 self.add_pub_libs(obj.get_dependencies(internal=False))
143 self.add_pub_libs(obj.get_external_deps())
144 else:
145 self.add_priv_libs(obj.get_dependencies(internal=False))
146 self.add_priv_libs(obj.get_external_deps())
147 elif isinstance(obj, str):
148 processed_libs.append(obj)
149 else:
150 raise mesonlib.MesonException('library argument not a string, library or dependency object.')
151
152 return processed_libs, processed_reqs, processed_cflags
153
154 def add_version_reqs(self, name, version_reqs):
155 if version_reqs:
156 if name not in self.version_reqs:
157 self.version_reqs[name] = set()
158 # Note that pkg-config is picky about whitespace.
159 # 'foo > 1.2' is ok but 'foo>1.2' is not.
160 # 'foo, bar' is ok, but 'foo,bar' is not.
161 new_vreqs = [s for s in mesonlib.stringlistify(version_reqs)]
162 self.version_reqs[name].update(new_vreqs)
163
164 def split_version_req(self, s):
165 for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
166 pos = s.find(op)
167 if pos > 0:
168 return s[0:pos].strip(), s[pos:].strip()
169 return s, None
170
171 def format_vreq(self, vreq):
172 # vreq are '>=1.0' and pkgconfig wants '>= 1.0'
173 for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
174 if vreq.startswith(op):
175 return op + ' ' + vreq[len(op):]
176 return vreq
177
178 def format_reqs(self, reqs):
179 result = []
180 for name in reqs:
181 vreqs = self.version_reqs.get(name, None)
182 if vreqs:
183 result += [name + ' ' + self.format_vreq(vreq) for vreq in vreqs]
184 else:
185 result += [name]
186 return ', '.join(result)
187
188 def remove_dups(self):
189 def _fn(xs, libs=False):
190 # Remove duplicates whilst preserving original order
191 result = []
192 for x in xs:
193 # Don't de-dup unknown strings to avoid messing up arguments like:
194 # ['-framework', 'CoreAudio', '-framework', 'CoreMedia']
195 known_flags = ['-pthread']
196 cannot_dedup = libs and isinstance(x, str) and \
197 not x.startswith(('-l', '-L')) and \
198 x not in known_flags
199 if x not in result or cannot_dedup:
200 result.append(x)
201 return result
202 self.pub_libs = _fn(self.pub_libs, True)
203 self.pub_reqs = _fn(self.pub_reqs)
204 self.priv_libs = _fn(self.priv_libs, True)
205 self.priv_reqs = _fn(self.priv_reqs)
206 self.cflags = _fn(self.cflags)
207
208 # Remove from private libs/reqs if they are in public already
209 self.priv_libs = [i for i in self.priv_libs if i not in self.pub_libs]
210 self.priv_reqs = [i for i in self.priv_reqs if i not in self.pub_reqs]
211
212 class PkgConfigModule(ExtensionModule):
213
214 def _get_lname(self, l, msg, pcfile):
215 # Nothing special
216 if not l.name_prefix_set:
217 return l.name
218 # Sometimes people want the library to start with 'lib' everywhere,
219 # which is achieved by setting name_prefix to '' and the target name to
220 # 'libfoo'. In that case, try to get the pkg-config '-lfoo' arg correct.
221 if l.prefix == '' and l.name.startswith('lib'):
222 return l.name[3:]
223 # If the library is imported via an import library which is always
224 # named after the target name, '-lfoo' is correct.
225 if l.import_filename:
226 return l.name
227 # In other cases, we can't guarantee that the compiler will be able to
228 # find the library via '-lfoo', so tell the user that.
229 mlog.warning(msg.format(l.name, 'name_prefix', l.name, pcfile))
230 return l.name
231
232 def _escape(self, value):
233 '''
234 We cannot use shlex.quote because it quotes with ' and " which does not
235 work with pkg-config and pkgconf at all.
236 '''
237 # We should always write out paths with / because pkg-config requires
238 # spaces to be quoted with \ and that messes up on Windows:
239 # https://bugs.freedesktop.org/show_bug.cgi?id=103203
240 if isinstance(value, PurePath):
241 value = value.as_posix()
242 return value.replace(' ', '\ ')
243
244 def _make_relative(self, prefix, subdir):
245 if isinstance(prefix, PurePath):
246 prefix = prefix.as_posix()
247 if isinstance(subdir, PurePath):
248 subdir = subdir.as_posix()
249 if subdir.startswith(prefix):
250 subdir = subdir.replace(prefix, '')
251 return subdir
252
253 def generate_pkgconfig_file(self, state, deps, subdirs, name, description,
254 url, version, pcfile, conflicts, variables):
255 deps.remove_dups()
256 coredata = state.environment.get_coredata()
257 outdir = state.environment.scratch_dir
258 fname = os.path.join(outdir, pcfile)
259 prefix = PurePath(coredata.get_builtin_option('prefix'))
260 # These always return paths relative to prefix
261 libdir = PurePath(coredata.get_builtin_option('libdir'))
262 incdir = PurePath(coredata.get_builtin_option('includedir'))
263 with open(fname, 'w') as ofile:
264 ofile.write('prefix={}\n'.format(self._escape(prefix)))
265 ofile.write('libdir={}\n'.format(self._escape('${prefix}' / libdir)))
266 ofile.write('includedir={}\n'.format(self._escape('${prefix}' / incdir)))
267 if variables:
268 ofile.write('\n')
269 for k, v in variables:
270 ofile.write('{}={}\n'.format(k, self._escape(v)))
271 ofile.write('\n')
272 ofile.write('Name: %s\n' % name)
273 if len(description) > 0:
274 ofile.write('Description: %s\n' % description)
275 if len(url) > 0:
276 ofile.write('URL: %s\n' % url)
277 ofile.write('Version: %s\n' % version)
278 reqs_str = deps.format_reqs(deps.pub_reqs)
279 if len(reqs_str) > 0:
280 ofile.write('Requires: {}\n'.format(reqs_str))
281 reqs_str = deps.format_reqs(deps.priv_reqs)
282 if len(reqs_str) > 0:
283 ofile.write('Requires.private: {}\n'.format(reqs_str))
284 if len(conflicts) > 0:
285 ofile.write('Conflicts: {}\n'.format(' '.join(conflicts)))
286
287 def generate_libs_flags(libs):
288 msg = 'Library target {0!r} has {1!r} set. Compilers ' \
289 'may not find it from its \'-l{2}\' linker flag in the ' \
290 '{3!r} pkg-config file.'
291 Lflags = []
292 for l in libs:
293 if isinstance(l, str):
294 yield l
295 else:
296 install_dir = l.get_custom_install_dir()[0]
297 if install_dir is False:
298 continue
299 if 'cs' in l.compilers:
300 if isinstance(install_dir, str):
301 Lflag = '-r${prefix}/%s/%s ' % (self._escape(self._make_relative(prefix, install_dir)), l.filename)
302 else: # install_dir is True
303 Lflag = '-r${libdir}/%s' % l.filename
304 else:
305 if isinstance(install_dir, str):
306 Lflag = '-L${prefix}/%s ' % self._escape(self._make_relative(prefix, install_dir))
307 else: # install_dir is True
308 Lflag = '-L${libdir}'
309 if Lflag not in Lflags:
310 Lflags.append(Lflag)
311 yield Lflag
312 lname = self._get_lname(l, msg, pcfile)
313 # If using a custom suffix, the compiler may not be able to
314 # find the library
315 if l.name_suffix_set:
316 mlog.warning(msg.format(l.name, 'name_suffix', lname, pcfile))
317 if 'cs' not in l.compilers:
318 yield '-l%s' % lname
319
320 if len(deps.pub_libs) > 0:
321 ofile.write('Libs: {}\n'.format(' '.join(generate_libs_flags(deps.pub_libs))))
322 if len(deps.priv_libs) > 0:
323 ofile.write('Libs.private: {}\n'.format(' '.join(generate_libs_flags(deps.priv_libs))))
324 ofile.write('Cflags:')
325 for h in subdirs:
326 ofile.write(' ')
327 if h == '.':
328 ofile.write('-I${includedir}')
329 else:
330 ofile.write(self._escape(PurePath('-I${includedir}') / h))
331 for f in deps.cflags:
332 ofile.write(' ')
333 ofile.write(self._escape(f))
334 ofile.write('\n')
335
336 @FeatureNewKwargs('pkgconfig.generate', '0.42.0', ['extra_cflags'])
337 @FeatureNewKwargs('pkgconfig.generate', '0.41.0', ['variables'])
338 @permittedKwargs({'libraries', 'version', 'name', 'description', 'filebase',
339 'subdirs', 'requires', 'requires_private', 'libraries_private',
340 'install_dir', 'extra_cflags', 'variables', 'url', 'd_module_versions'})
341 def generate(self, state, args, kwargs):
342 if 'variables' in kwargs:
343 FeatureNew('custom pkgconfig variables', '0.41.0').use(state.subproject)
344 default_version = state.project_version['version']
345 default_install_dir = None
346 default_description = None
347 default_name = None
348 mainlib = None
349 if len(args) == 1:
350 FeatureNew('pkgconfig.generate optional positional argument', '0.46.0').use(state.subproject)
351 mainlib = getattr(args[0], 'held_object', args[0])
352 if not isinstance(mainlib, (build.StaticLibrary, build.SharedLibrary)):
353 raise mesonlib.MesonException('Pkgconfig_gen first positional argument must be a library object')
354 default_name = mainlib.name
355 default_description = state.project_name + ': ' + mainlib.name
356 install_dir = mainlib.get_custom_install_dir()[0]
357 if isinstance(install_dir, str):
358 default_install_dir = os.path.join(install_dir, 'pkgconfig')
359 elif len(args) > 1:
360 raise mesonlib.MesonException('Too many positional arguments passed to Pkgconfig_gen.')
361
362 subdirs = mesonlib.stringlistify(kwargs.get('subdirs', ['.']))
363 version = kwargs.get('version', default_version)
364 if not isinstance(version, str):
365 raise mesonlib.MesonException('Version must be specified.')
366 name = kwargs.get('name', default_name)
367 if not isinstance(name, str):
368 raise mesonlib.MesonException('Name not specified.')
369 filebase = kwargs.get('filebase', name)
370 if not isinstance(filebase, str):
371 raise mesonlib.MesonException('Filebase must be a string.')
372 description = kwargs.get('description', default_description)
373 if not isinstance(description, str):
374 raise mesonlib.MesonException('Description is not a string.')
375 url = kwargs.get('url', '')
376 if not isinstance(url, str):
377 raise mesonlib.MesonException('URL is not a string.')
378 conflicts = mesonlib.stringlistify(kwargs.get('conflicts', []))
379
380 deps = DependenciesHelper(filebase)
381 if mainlib:
382 deps.add_pub_libs(mainlib)
383 deps.add_pub_libs(kwargs.get('libraries', []))
384 deps.add_priv_libs(kwargs.get('libraries_private', []))
385 deps.add_pub_reqs(kwargs.get('requires', []))
386 deps.add_priv_reqs(kwargs.get('requires_private', []))
387 deps.add_cflags(kwargs.get('extra_cflags', []))
388
389 dversions = kwargs.get('d_module_versions', None)
390 if dversions:
391 compiler = state.environment.coredata.compilers.get('d')
392 if compiler:
393 deps.add_cflags(compiler.get_feature_args({'versions': dversions}, None))
394
395 def parse_variable_list(stringlist):
396 reserved = ['prefix', 'libdir', 'includedir']
397 variables = []
398 for var in stringlist:
399 # foo=bar=baz is ('foo', 'bar=baz')
400 l = var.split('=', 1)
401 if len(l) < 2:
402 raise mesonlib.MesonException('Invalid variable "{}". Variables must be in \'name=value\' format'.format(var))
403
404 name, value = l[0].strip(), l[1].strip()
405 if not name or not value:
406 raise mesonlib.MesonException('Invalid variable "{}". Variables must be in \'name=value\' format'.format(var))
407
408 # Variable names must not contain whitespaces
409 if any(c.isspace() for c in name):
410 raise mesonlib.MesonException('Invalid whitespace in assignment "{}"'.format(var))
411
412 if name in reserved:
413 raise mesonlib.MesonException('Variable "{}" is reserved'.format(name))
414
415 variables.append((name, value))
416
417 return variables
418
419 variables = parse_variable_list(mesonlib.stringlistify(kwargs.get('variables', [])))
420
421 pcfile = filebase + '.pc'
422 pkgroot = kwargs.get('install_dir', default_install_dir)
423 if pkgroot is None:
424 pkgroot = os.path.join(state.environment.coredata.get_builtin_option('libdir'), 'pkgconfig')
425 if not isinstance(pkgroot, str):
426 raise mesonlib.MesonException('Install_dir must be a string.')
427 self.generate_pkgconfig_file(state, deps, subdirs, name, description, url,
428 version, pcfile, conflicts, variables)
429 res = build.Data(mesonlib.File(True, state.environment.get_scratch_dir(), pcfile), pkgroot)
430 # Associate the main library with this generated pc file. If the library
431 # is used in any subsequent call to generate(), it will generate a
432 # 'Requires:' or 'Requires.private:'.
433 # Backward compatibility: We used to set 'generated_pc' on all public
434 # libraries instead of just the main one. Keep doing that but warn if
435 # anyone is relying on that deprecated behaviour.
436 if mainlib:
437 if not hasattr(mainlib, 'generated_pc'):
438 mainlib.generated_pc = filebase
439 else:
440 mlog.warning('Already generated a pkg-config file for', mlog.bold(mainlib.name))
441 for lib in deps.pub_libs:
442 if not isinstance(lib, str) and not hasattr(lib, 'generated_pc'):
443 lib.generated_pc = filebase
444 lib.generated_pc_warn = types.SimpleNamespace(subdir=state.subdir,
445 lineno=state.current_lineno)
446 return ModuleReturnValue(res, [res])
447
448 def initialize(*args, **kwargs):
449 return PkgConfigModule(*args, **kwargs)
450
[end of mesonbuild/modules/pkgconfig.py]
[start of mesonbuild/modules/python.py]
1 # Copyright 2018 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import json
17 import shutil
18
19 from pathlib import Path
20 from .. import mesonlib
21 from ..mesonlib import MesonException
22 from . import ExtensionModule
23 from mesonbuild.modules import ModuleReturnValue
24 from ..interpreterbase import (
25 noPosargs, noKwargs, permittedKwargs,
26 InterpreterObject, InvalidArguments,
27 FeatureNew, FeatureNewKwargs, disablerIfNotFound
28 )
29 from ..interpreter import ExternalProgramHolder, extract_required_kwarg
30 from ..interpreterbase import flatten
31 from ..build import known_shmod_kwargs
32 from .. import mlog
33 from ..environment import detect_cpu_family
34 from ..dependencies.base import (
35 DependencyMethods, ExternalDependency,
36 ExternalProgram, PkgConfigDependency,
37 NonExistingExternalProgram
38 )
39
40 mod_kwargs = set(['subdir'])
41 mod_kwargs.update(known_shmod_kwargs)
42 mod_kwargs -= set(['name_prefix', 'name_suffix'])
43
44
45 def run_command(python, command):
46 _, stdout, _ = mesonlib.Popen_safe(python.get_command() + [
47 '-c',
48 command])
49
50 return stdout.strip()
51
52
53 class PythonDependency(ExternalDependency):
54
55 def __init__(self, python_holder, environment, kwargs):
56 super().__init__('python', environment, None, kwargs)
57 self.name = 'python'
58 self.static = kwargs.get('static', False)
59 self.version = python_holder.version
60 self.platform = python_holder.platform
61 self.pkgdep = None
62 self.variables = python_holder.variables
63 self.paths = python_holder.paths
64 if mesonlib.version_compare(self.version, '>= 3.0'):
65 self.major_version = 3
66 else:
67 self.major_version = 2
68
69 # We first try to find the necessary python variables using pkgconfig
70 if DependencyMethods.PKGCONFIG in self.methods and not python_holder.is_pypy:
71 pkg_version = self.variables.get('LDVERSION') or self.version
72 pkg_libdir = self.variables.get('LIBPC')
73
74 # If python-X.Y.pc exists in LIBPC, we will try to use it
75 if pkg_libdir is not None and Path(os.path.join(pkg_libdir, 'python-{}.pc'.format(pkg_version))).is_file():
76 old_pkg_libdir = os.environ.get('PKG_CONFIG_LIBDIR')
77 old_pkg_path = os.environ.get('PKG_CONFIG_PATH')
78
79 os.environ.pop('PKG_CONFIG_PATH', None)
80
81 if pkg_libdir:
82 os.environ['PKG_CONFIG_LIBDIR'] = pkg_libdir
83
84 try:
85 self.pkgdep = PkgConfigDependency('python-{}'.format(pkg_version), environment, kwargs)
86 mlog.debug('Found "python-{}" via pkgconfig lookup in LIBPC ({})'.format(pkg_version, pkg_libdir))
87 py_lookup_method = 'pkgconfig'
88 except MesonException as e:
89 mlog.debug('"python-{}" could not be found in LIBPC ({})'.format(pkg_version, pkg_libdir))
90 mlog.debug(e)
91
92 if old_pkg_path is not None:
93 os.environ['PKG_CONFIG_PATH'] = old_pkg_path
94
95 if old_pkg_libdir is not None:
96 os.environ['PKG_CONFIG_LIBDIR'] = old_pkg_libdir
97 else:
98 os.environ.pop('PKG_CONFIG_LIBDIR', None)
99 else:
100 mlog.debug('"python-{}" could not be found in LIBPC ({}), this is likely due to a relocated python installation'.format(pkg_version, pkg_libdir))
101
102 # If lookup via LIBPC failed, try to use fallback PKG_CONFIG_LIBDIR/PKG_CONFIG_PATH mechanisms
103 if self.pkgdep is None or not self.pkgdep.found():
104 try:
105 self.pkgdep = PkgConfigDependency('python-{}'.format(pkg_version), environment, kwargs)
106 mlog.debug('Found "python-{}" via fallback pkgconfig lookup in PKG_CONFIG_LIBDIR/PKG_CONFIG_PATH'.format(pkg_version))
107 py_lookup_method = 'pkgconfig-fallback'
108 except MesonException as e:
109 mlog.debug('"python-{}" could not be found via fallback pkgconfig lookup in PKG_CONFIG_LIBDIR/PKG_CONFIG_PATH'.format(pkg_version))
110 mlog.debug(e)
111
112 if self.pkgdep and self.pkgdep.found():
113 self.compile_args = self.pkgdep.get_compile_args()
114 self.link_args = self.pkgdep.get_link_args()
115 self.is_found = True
116 self.pcdep = self.pkgdep
117 else:
118 self.pkgdep = None
119
120 # Finally, try to find python via SYSCONFIG as a final measure
121 if DependencyMethods.SYSCONFIG in self.methods:
122 if mesonlib.is_windows():
123 self._find_libpy_windows(environment)
124 else:
125 self._find_libpy(python_holder, environment)
126
127 if self.is_found:
128 mlog.debug('Found "python-{}" via SYSCONFIG module'.format(self.version))
129 py_lookup_method = 'sysconfig'
130
131 if self.is_found:
132 mlog.log('Dependency', mlog.bold(self.name), 'found:', mlog.green('YES ({})'.format(py_lookup_method)))
133 else:
134 mlog.log('Dependency', mlog.bold(self.name), 'found:', mlog.red('NO'))
135
136 def _find_libpy(self, python_holder, environment):
137 if python_holder.is_pypy:
138 if self.major_version == 3:
139 libname = 'pypy3-c'
140 else:
141 libname = 'pypy-c'
142 libdir = os.path.join(self.variables.get('base'), 'bin')
143 libdirs = [libdir]
144 else:
145 libname = 'python{}'.format(self.version)
146 if 'DEBUG_EXT' in self.variables:
147 libname += self.variables['DEBUG_EXT']
148 if 'ABIFLAGS' in self.variables:
149 libname += self.variables['ABIFLAGS']
150 libdirs = []
151
152 largs = self.clib_compiler.find_library(libname, environment, libdirs)
153
154 self.is_found = largs is not None
155 if self.is_found:
156 self.link_args = largs
157
158 inc_paths = mesonlib.OrderedSet([
159 self.variables.get('INCLUDEPY'),
160 self.paths.get('include'),
161 self.paths.get('platinclude')])
162
163 self.compile_args += ['-I' + path for path in inc_paths if path]
164
165 def get_windows_python_arch(self):
166 if self.platform == 'mingw':
167 pycc = self.variables.get('CC')
168 if pycc.startswith('x86_64'):
169 return '64'
170 elif pycc.startswith(('i686', 'i386')):
171 return '32'
172 else:
173 mlog.log('MinGW Python built with unknown CC {!r}, please file '
174 'a bug'.format(pycc))
175 return None
176 elif self.platform == 'win32':
177 return '32'
178 elif self.platform in ('win64', 'win-amd64'):
179 return '64'
180 mlog.log('Unknown Windows Python platform {!r}'.format(self.platform))
181 return None
182
183 def get_windows_link_args(self):
184 if self.platform.startswith('win'):
185 vernum = self.variables.get('py_version_nodot')
186 if self.static:
187 libname = 'libpython{}.a'.format(vernum)
188 else:
189 libname = 'python{}.lib'.format(vernum)
190 lib = Path(self.variables.get('base')) / 'libs' / libname
191 elif self.platform == 'mingw':
192 if self.static:
193 libname = self.variables.get('LIBRARY')
194 else:
195 libname = self.variables.get('LDLIBRARY')
196 lib = Path(self.variables.get('LIBDIR')) / libname
197 if not lib.exists():
198 mlog.log('Could not find Python3 library {!r}'.format(str(lib)))
199 return None
200 return [str(lib)]
201
202 def _find_libpy_windows(self, env):
203 '''
204 Find python3 libraries on Windows and also verify that the arch matches
205 what we are building for.
206 '''
207 pyarch = self.get_windows_python_arch()
208 if pyarch is None:
209 self.is_found = False
210 return
211 arch = detect_cpu_family(env.coredata.compilers)
212 if arch == 'x86':
213 arch = '32'
214 elif arch == 'x86_64':
215 arch = '64'
216 else:
217 # We can't cross-compile Python 3 dependencies on Windows yet
218 mlog.log('Unknown architecture {!r} for'.format(arch),
219 mlog.bold(self.name))
220 self.is_found = False
221 return
222 # Pyarch ends in '32' or '64'
223 if arch != pyarch:
224 mlog.log('Need', mlog.bold(self.name), 'for {}-bit, but '
225 'found {}-bit'.format(arch, pyarch))
226 self.is_found = False
227 return
228 # This can fail if the library is not found
229 largs = self.get_windows_link_args()
230 if largs is None:
231 self.is_found = False
232 return
233 self.link_args = largs
234 # Compile args
235 inc_paths = mesonlib.OrderedSet([
236 self.variables.get('INCLUDEPY'),
237 self.paths.get('include'),
238 self.paths.get('platinclude')])
239
240 self.compile_args += ['-I' + path for path in inc_paths if path]
241
242 # https://sourceforge.net/p/mingw-w64/mailman/message/30504611/
243 if pyarch == '64' and self.major_version == 2:
244 self.compile_args += ['-DMS_WIN64']
245
246 self.is_found = True
247
248 @staticmethod
249 def get_methods():
250 if mesonlib.is_windows():
251 return [DependencyMethods.PKGCONFIG, DependencyMethods.SYSCONFIG]
252 elif mesonlib.is_osx():
253 return [DependencyMethods.PKGCONFIG, DependencyMethods.EXTRAFRAMEWORK]
254 else:
255 return [DependencyMethods.PKGCONFIG, DependencyMethods.SYSCONFIG]
256
257 def get_pkgconfig_variable(self, variable_name, kwargs):
258 if self.pkgdep:
259 return self.pkgdep.get_pkgconfig_variable(variable_name, kwargs)
260 else:
261 return super().get_pkgconfig_variable(variable_name, kwargs)
262
263
264 INTROSPECT_COMMAND = '''
265 import sysconfig
266 import json
267 import sys
268
269 install_paths = sysconfig.get_paths(scheme='posix_prefix', vars={'base': '', 'platbase': '', 'installed_base': ''})
270
271 def links_against_libpython():
272 from distutils.core import Distribution, Extension
273 cmd = Distribution().get_command_obj('build_ext')
274 cmd.ensure_finalized()
275 return bool(cmd.get_libraries(Extension('dummy', [])))
276
277 print (json.dumps ({
278 'variables': sysconfig.get_config_vars(),
279 'paths': sysconfig.get_paths(),
280 'install_paths': install_paths,
281 'version': sysconfig.get_python_version(),
282 'platform': sysconfig.get_platform(),
283 'is_pypy': '__pypy__' in sys.builtin_module_names,
284 'link_libpython': links_against_libpython(),
285 }))
286 '''
287
288
289 class PythonInstallation(ExternalProgramHolder):
290 def __init__(self, interpreter, python, info):
291 ExternalProgramHolder.__init__(self, python)
292 self.interpreter = interpreter
293 self.subproject = self.interpreter.subproject
294 prefix = self.interpreter.environment.coredata.get_builtin_option('prefix')
295 self.variables = info['variables']
296 self.paths = info['paths']
297 install_paths = info['install_paths']
298 self.platlib_install_path = os.path.join(prefix, install_paths['platlib'][1:])
299 self.purelib_install_path = os.path.join(prefix, install_paths['purelib'][1:])
300 self.version = info['version']
301 self.platform = info['platform']
302 self.is_pypy = info['is_pypy']
303 self.link_libpython = info['link_libpython']
304 self.methods.update({
305 'extension_module': self.extension_module_method,
306 'dependency': self.dependency_method,
307 'install_sources': self.install_sources_method,
308 'get_install_dir': self.get_install_dir_method,
309 'language_version': self.language_version_method,
310 'found': self.found_method,
311 'has_path': self.has_path_method,
312 'get_path': self.get_path_method,
313 'has_variable': self.has_variable_method,
314 'get_variable': self.get_variable_method,
315 'path': self.path_method,
316 })
317
318 @permittedKwargs(mod_kwargs)
319 def extension_module_method(self, args, kwargs):
320 if 'subdir' in kwargs and 'install_dir' in kwargs:
321 raise InvalidArguments('"subdir" and "install_dir" are mutually exclusive')
322
323 if 'subdir' in kwargs:
324 subdir = kwargs.pop('subdir', '')
325 if not isinstance(subdir, str):
326 raise InvalidArguments('"subdir" argument must be a string.')
327
328 kwargs['install_dir'] = os.path.join(self.platlib_install_path, subdir)
329
330 # On macOS and some Linux distros (Debian) distutils doesn't link
331 # extensions against libpython. We call into distutils and mirror its
332 # behavior. See https://github.com/mesonbuild/meson/issues/4117
333 if not self.link_libpython:
334 new_deps = []
335 for holder in mesonlib.extract_as_list(kwargs, 'dependencies'):
336 dep = holder.held_object
337 if isinstance(dep, PythonDependency):
338 holder = self.interpreter.holderify(dep.get_partial_dependency(compile_args=True))
339 new_deps.append(holder)
340 kwargs['dependencies'] = new_deps
341
342 suffix = self.variables.get('EXT_SUFFIX') or self.variables.get('SO') or self.variables.get('.so')
343
344 # msys2's python3 has "-cpython-36m.dll", we have to be clever
345 split = suffix.rsplit('.', 1)
346 suffix = split.pop(-1)
347 args[0] += ''.join(s for s in split)
348
349 kwargs['name_prefix'] = ''
350 kwargs['name_suffix'] = suffix
351
352 return self.interpreter.func_shared_module(None, args, kwargs)
353
354 def dependency_method(self, args, kwargs):
355 dep = PythonDependency(self, self.interpreter.environment, kwargs)
356 return self.interpreter.holderify(dep)
357
358 @permittedKwargs(['pure', 'subdir'])
359 def install_sources_method(self, args, kwargs):
360 pure = kwargs.pop('pure', False)
361 if not isinstance(pure, bool):
362 raise InvalidArguments('"pure" argument must be a boolean.')
363
364 subdir = kwargs.pop('subdir', '')
365 if not isinstance(subdir, str):
366 raise InvalidArguments('"subdir" argument must be a string.')
367
368 if pure:
369 kwargs['install_dir'] = os.path.join(self.purelib_install_path, subdir)
370 else:
371 kwargs['install_dir'] = os.path.join(self.platlib_install_path, subdir)
372
373 return self.interpreter.holderify(self.interpreter.func_install_data(None, args, kwargs))
374
375 @noPosargs
376 @permittedKwargs(['pure', 'subdir'])
377 def get_install_dir_method(self, args, kwargs):
378 pure = kwargs.pop('pure', True)
379 if not isinstance(pure, bool):
380 raise InvalidArguments('"pure" argument must be a boolean.')
381
382 subdir = kwargs.pop('subdir', '')
383 if not isinstance(subdir, str):
384 raise InvalidArguments('"subdir" argument must be a string.')
385
386 if pure:
387 res = os.path.join(self.purelib_install_path, subdir)
388 else:
389 res = os.path.join(self.platlib_install_path, subdir)
390
391 return self.interpreter.module_method_callback(ModuleReturnValue(res, []))
392
393 @noPosargs
394 @noKwargs
395 def language_version_method(self, args, kwargs):
396 return self.interpreter.module_method_callback(ModuleReturnValue(self.version, []))
397
398 @noKwargs
399 def has_path_method(self, args, kwargs):
400 if len(args) != 1:
401 raise InvalidArguments('has_path takes exactly one positional argument.')
402 path_name = args[0]
403 if not isinstance(path_name, str):
404 raise InvalidArguments('has_path argument must be a string.')
405
406 return self.interpreter.module_method_callback(ModuleReturnValue(path_name in self.paths, []))
407
408 @noKwargs
409 def get_path_method(self, args, kwargs):
410 if len(args) not in (1, 2):
411 raise InvalidArguments('get_path must have one or two arguments.')
412 path_name = args[0]
413 if not isinstance(path_name, str):
414 raise InvalidArguments('get_path argument must be a string.')
415
416 try:
417 path = self.paths[path_name]
418 except KeyError:
419 if len(args) == 2:
420 path = args[1]
421 else:
422 raise InvalidArguments('{} is not a valid path name'.format(path_name))
423
424 return self.interpreter.module_method_callback(ModuleReturnValue(path, []))
425
426 @noKwargs
427 def has_variable_method(self, args, kwargs):
428 if len(args) != 1:
429 raise InvalidArguments('has_variable takes exactly one positional argument.')
430 var_name = args[0]
431 if not isinstance(var_name, str):
432 raise InvalidArguments('has_variable argument must be a string.')
433
434 return self.interpreter.module_method_callback(ModuleReturnValue(var_name in self.variables, []))
435
436 @noKwargs
437 def get_variable_method(self, args, kwargs):
438 if len(args) not in (1, 2):
439 raise InvalidArguments('get_variable must have one or two arguments.')
440 var_name = args[0]
441 if not isinstance(var_name, str):
442 raise InvalidArguments('get_variable argument must be a string.')
443
444 try:
445 var = self.variables[var_name]
446 except KeyError:
447 if len(args) == 2:
448 var = args[1]
449 else:
450 raise InvalidArguments('{} is not a valid variable name'.format(var_name))
451
452 return self.interpreter.module_method_callback(ModuleReturnValue(var, []))
453
454 @noPosargs
455 @noKwargs
456 @FeatureNew('Python module path method', '0.50.0')
457 def path_method(self, args, kwargs):
458 return super().path_method(args, kwargs)
459
460
461 class PythonModule(ExtensionModule):
462
463 @FeatureNew('Python Module', '0.46.0')
464 def __init__(self, *args, **kwargs):
465 super().__init__(*args, **kwargs)
466 self.snippets.add('find_installation')
467
468 # https://www.python.org/dev/peps/pep-0397/
469 def _get_win_pythonpath(self, name_or_path):
470 if name_or_path not in ['python2', 'python3']:
471 return None
472 if not shutil.which('py'):
473 # program not installed, return without an exception
474 return None
475 ver = {'python2': '-2', 'python3': '-3'}[name_or_path]
476 cmd = ['py', ver, '-c', "import sysconfig; print(sysconfig.get_config_var('BINDIR'))"]
477 _, stdout, _ = mesonlib.Popen_safe(cmd)
478 dir = stdout.strip()
479 if os.path.exists(dir):
480 return os.path.join(dir, 'python')
481 else:
482 return None
483
484 def _check_version(self, name_or_path, version):
485 if name_or_path == 'python2':
486 return mesonlib.version_compare(version, '< 3.0')
487 elif name_or_path == 'python3':
488 return mesonlib.version_compare(version, '>= 3.0')
489 return True
490
491 @FeatureNewKwargs('python.find_installation', '0.49.0', ['disabler'])
492 @disablerIfNotFound
493 @permittedKwargs(['required'])
494 def find_installation(self, interpreter, state, args, kwargs):
495 feature_check = FeatureNew('Passing "feature" option to find_installation', '0.48.0')
496 disabled, required, feature = extract_required_kwarg(kwargs, state.subproject, feature_check)
497 if disabled:
498 mlog.log('find_installation skipped: feature', mlog.bold(feature), 'disabled')
499 return ExternalProgramHolder(NonExistingExternalProgram())
500
501 if len(args) > 1:
502 raise InvalidArguments('find_installation takes zero or one positional argument.')
503
504 name_or_path = state.environment.binaries.host.lookup_entry('python')
505 if name_or_path is None and args:
506 name_or_path = args[0]
507 if not isinstance(name_or_path, str):
508 raise InvalidArguments('find_installation argument must be a string.')
509
510 if not name_or_path:
511 mlog.log("Using meson's python {}".format(mesonlib.python_command))
512 python = ExternalProgram('python3', mesonlib.python_command, silent=True)
513 else:
514 python = ExternalProgram.from_entry('python3', name_or_path)
515
516 if not python.found() and mesonlib.is_windows():
517 pythonpath = self._get_win_pythonpath(name_or_path)
518 if pythonpath is not None:
519 name_or_path = pythonpath
520 python = ExternalProgram(name_or_path, silent = True)
521
522 # Last ditch effort, python2 or python3 can be named python
523 # on various platforms, let's not give up just yet, if an executable
524 # named python is available and has a compatible version, let's use
525 # it
526 if not python.found() and name_or_path in ['python2', 'python3']:
527 python = ExternalProgram('python', silent = True)
528
529 if not python.found():
530 if required:
531 raise mesonlib.MesonException('{} not found'.format(name_or_path or 'python'))
532 res = ExternalProgramHolder(NonExistingExternalProgram())
533 else:
534 # Sanity check, we expect to have something that at least quacks in tune
535 try:
536 info = json.loads(run_command(python, INTROSPECT_COMMAND))
537 except json.JSONDecodeError:
538 info = None
539
540 if isinstance(info, dict) and 'version' in info and self._check_version(name_or_path, info['version']):
541 res = PythonInstallation(interpreter, python, info)
542 else:
543 res = ExternalProgramHolder(NonExistingExternalProgram())
544 if required:
545 raise mesonlib.MesonException('{} is not a valid python'.format(python))
546
547 return res
548
549
550 def initialize(*args, **kwargs):
551 return PythonModule(*args, **kwargs)
552
[end of mesonbuild/modules/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
3bf2ca483e9ca80ee81ee0a07d5f5c9f36817bb3
|
Add an `install:` kwarg to `configure_file()`
Just for consistency with other targets; otherwise people get confused by it not existing, even though setting `install_dir:` has the same effect: https://github.com/mesonbuild/meson/issues/860#issuecomment-410086728
|
2018-11-08T02:38:26Z
|
<patch>
diff --git a/mesonbuild/interpreter.py b/mesonbuild/interpreter.py
--- a/mesonbuild/interpreter.py
+++ b/mesonbuild/interpreter.py
@@ -3532,6 +3532,7 @@ def func_install_subdir(self, node, args, kwargs):
@FeatureNewKwargs('configure_file', '0.47.0', ['copy', 'output_format', 'install_mode', 'encoding'])
@FeatureNewKwargs('configure_file', '0.46.0', ['format'])
@FeatureNewKwargs('configure_file', '0.41.0', ['capture'])
+ @FeatureNewKwargs('configure_file', '0.50.0', ['install'])
@permittedKwargs(permitted_kwargs['configure_file'])
def func_configure_file(self, node, args, kwargs):
if len(args) > 0:
@@ -3691,8 +3692,17 @@ def func_configure_file(self, node, args, kwargs):
# Install file if requested, we check for the empty string
# for backwards compatibility. That was the behaviour before
# 0.45.0 so preserve it.
- idir = kwargs.get('install_dir', None)
- if isinstance(idir, str) and idir:
+ idir = kwargs.get('install_dir', '')
+ if not isinstance(idir, str):
+ raise InterpreterException('"install_dir" must be a string')
+ install = kwargs.get('install', idir != '')
+ if not isinstance(install, bool):
+ raise InterpreterException('"install" must be a boolean')
+ if install:
+ if not idir:
+ raise InterpreterException('"install_dir" must be specified '
+ 'when "install" in a configure_file '
+ 'is true')
cfile = mesonlib.File.from_built_file(ofile_path, ofile_fname)
install_mode = self._get_kwarg_install_mode(kwargs)
self.build.data.append(build.Data([cfile], idir, install_mode))
</patch>
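The behaviour the patch introduces can be stated on its own: for backwards compatibility a non-empty `install_dir:` still implies installation, while an explicit `install: true` without a destination is rejected. Below is a small standalone Python sketch of that decision logic, for illustration only (the `resolve_install` helper is hypothetical and is not part of the Meson interpreter):

```python
def resolve_install(kwargs):
    """Mirror the patch's install/install_dir decision logic (illustration only)."""
    idir = kwargs.get('install_dir', '')
    if not isinstance(idir, str):
        raise ValueError('"install_dir" must be a string')
    # Pre-0.50.0 behaviour is preserved: a non-empty install_dir implies installation.
    install = kwargs.get('install', idir != '')
    if not isinstance(install, bool):
        raise ValueError('"install" must be a boolean')
    if install and not idir:
        raise ValueError('"install_dir" must be specified when "install" is true')
    return install, idir
```

With this rule, passing only `install_dir:` keeps installing as before, `install: true` alone is an error, and `install: false` explicitly suppresses installation.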
|
[]
|
[]
| ||||
conan-io__conan-4663
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[revisions] Handle only_v2 capability from ping endpoint
Artifactory now returns the ``only_v2`` server capability in some cases. This, together with the ``revisions`` capability, should be enough to raise an error when a user tries to perform a ``conan install`` without revisions while V1 is not supported.
This will improve on the ERROR 500 that we currently receive:
```
{
"errors": [
{
"status": 500,
"message": "Unsupported Conan v2 repository request for conan-remote"
}
]
}
```
</issue>
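For context, the capability list mentioned above is what ``server_info()`` in ``conans/client/rest/rest_client_common.py`` (included below) already extracts from the ``X-Conan-Server-Capabilities`` response header. The following is a minimal sketch of how a client could act on it; the helper name and error wording are illustrative assumptions, not the project's actual fix:

```python
from conans.errors import ConanException

ONLY_V2 = "only_v2"  # capability Artifactory now advertises, per the issue


def check_server_capabilities(server_capabilities, revisions_enabled):
    """Hypothetical helper: fail with a readable client-side error instead of a 500.

    `server_capabilities` is the list parsed by server_info(), and
    `revisions_enabled` reflects whether the client has the revisions (v2)
    protocol enabled.
    """
    if ONLY_V2 in server_capabilities and not revisions_enabled:
        raise ConanException("The remote only supports the v2 (revisions) protocol. "
                             "Enable revisions in the Conan client to use this remote.")
```

Calling something like this right after the ping request would make a ``conan install`` against a v2-only remote fail with a clear client-side message instead of the generic 500 shown above.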
<code>
[start of README.rst]
1 Conan
2 =====
3
4 A distributed, open-source, C/C++ package manager.
5
6 +------------------------+-------------------------+
7 | **master** | **develop** |
8 +========================+=========================+
9 | |Build Status Master| | |Build Status Develop| |
10 +------------------------+-------------------------+
11
12
13 +------------------------+---------------------------+---------------------------------------------+
14 | **Coverage master** | **Coverage develop** | **Coverage graph** |
15 +========================+===========================+=============================================+
16 | |Master coverage| | |Develop coverage| | |Coverage graph| |
17 +------------------------+---------------------------+---------------------------------------------+
18
19
20 Setup
21 =====
22
23 From binaries
24 -------------
25
26 We have installers for `most platforms here <http://conan.io>`__ but you
27 can run **conan** from sources if you want.
28
29 From pip
30 --------
31
32 Conan is compatible with Python 2 and Python 3.
33
34 - Install pip following `pip docs`_.
35 - Install conan:
36
37 .. code-block:: bash
38
39 $ pip install conan
40
41 You can also use the `test.pypi.org <https://test.pypi.org/project/conan/#history>`_ repository to install development (non-stable) Conan versions:
42
43
44 .. code-block:: bash
45
46 $ pip install --index-url https://test.pypi.org/simple/ conan
47
48
49 From Homebrew (OSX)
50 -------------------
51
52 - Install Homebrew following `brew homepage`_.
53
54 .. code-block:: bash
55
56 $ brew update
57 $ brew install conan
58
59 From source
60 -----------
61
62 You can run the **conan** client and server on Windows, macOS, and Linux.
63
64 - **Install pip following** `pip docs`_.
65
66 - **Clone conan repository:**
67
68 .. code-block:: bash
69
70 $ git clone https://github.com/conan-io/conan.git
71
72 - **Install in editable mode**
73
74 .. code-block:: bash
75
76 $ cd conan && sudo pip install -e .
77
78 If you are on Windows, using ``sudo`` is not required.
79
80 - **You are ready, try to run conan:**
81
82 .. code-block::
83
84 $ conan --help
85
86 Consumer commands
87 install Installs the requirements specified in a conanfile (.py or .txt).
88 config Manages configuration. Edits the conan.conf or installs config files.
89 get Gets a file or list a directory of a given reference or package.
90 info Gets information about the dependency graph of a recipe.
91 search Searches package recipes and binaries in the local cache or in a remote.
92 Creator commands
93 new Creates a new package recipe template with a 'conanfile.py'.
94 create Builds a binary package for recipe (conanfile.py) located in current dir.
95 upload Uploads a recipe and binary packages to a remote.
96 export Copies the recipe (conanfile.py & associated files) to your local cache.
97 export-pkg Exports a recipe & creates a package with given files calling 'package'.
98 test Test a package, consuming it with a conanfile recipe with a test() method.
99 Package development commands
100 source Calls your local conanfile.py 'source()' method.
101 build Calls your local conanfile.py 'build()' method.
102 package Calls your local conanfile.py 'package()' method.
103 Misc commands
104 profile Lists profiles in the '.conan/profiles' folder, or shows profile details.
105 remote Manages the remote list and the package recipes associated to a remote.
106 user Authenticates against a remote with user/pass, caching the auth token.
107 imports Calls your local conanfile.py or conanfile.txt 'imports' method.
108 copy Copies conan recipes and packages to another user/channel.
109 remove Removes packages or binaries matching pattern from local cache or remote.
110 alias Creates and exports an 'alias recipe'.
111 download Downloads recipe and binaries to the local cache, without using settings.
112
113 Conan commands. Type "conan <command> -h" for help
114
115 Contributing to the project
116 ===========================
117
118 Feedback and contributions are always welcome in this project.
119 Please read our `contributing guide <https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md>`_.
120
121 Running the tests
122 =================
123
124 Using tox
125 ---------
126
127 .. code-block:: bash
128
129 $ tox
130
131 It will install the needed requirements and launch ``nose``, skipping some heavy and slow tests.
132 If you want to run the full test suite:
133
134 .. code-block:: bash
135
136 $ tox -e full
137
138 Without tox
139 -----------
140
141 **Install python requirements**
142
143 .. code-block:: bash
144
145 $ pip install -r conans/requirements.txt
146 $ pip install -r conans/requirements_server.txt
147 $ pip install -r conans/requirements_dev.txt
148
149
150 Only on OSX:
151
152 .. code-block:: bash
153
154 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX
155
156
157 If you are not on Windows and you are not using a Python virtual environment, you will need to run these
158 commands using ``sudo``.
159
160 Before you can run the tests, you need to set a few environment variables.
161
162 .. code-block:: bash
163
164 $ export PYTHONPATH=$PYTHONPATH:$(pwd)
165
166 On Windows it would be (while being in the conan root directory):
167
168 .. code-block:: bash
169
170 $ set PYTHONPATH=.
171
172 Ensure that your ``cmake`` has version 2.8 or later. You can see the
173 version with the following command:
174
175 .. code-block:: bash
176
177 $ cmake --version
178
179 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your
180 operating system and your requirements.
181
182 These should work for the GCC from ``build-essential`` on Ubuntu 14.04:
183
184 .. code-block:: bash
185
186 $ export CONAN_COMPILER=gcc
187 $ export CONAN_COMPILER_VERSION=4.8
188
189 These should work for OS X:
190
191 .. code-block:: bash
192
193 $ export CONAN_COMPILER=clang
194 $ export CONAN_COMPILER_VERSION=3.5
195
196 Finally, there are some tests that use conan to package Go-lang
197 libraries, so you might **need to install go-lang** on your computer and
198 add it to the path.
199
200 You can run the actual tests like this:
201
202 .. code-block:: bash
203
204 $ nosetests .
205
206
207 There are a couple of test attributes defined, such as ``slow`` or ``golang``, that you can use
208 to filter the tests and skip executing them:
209
210 .. code-block:: bash
211
212 $ nosetests . -a !golang
213
214 A few minutes later it should print ``OK``:
215
216 .. code-block:: bash
217
218 ............................................................................................
219 ----------------------------------------------------------------------
220 Ran 146 tests in 50.993s
221
222 OK
223
224 To run specific tests, you can specify the test name too, something like:
225
226 .. code-block:: bash
227
228 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture
229
230 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests.
231
232 License
233 -------
234
235 `MIT LICENSE <./LICENSE.md>`__
236
237 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master
238 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master
239
240 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop
241 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop
242
243 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg
244 :target: https://codecov.io/gh/conan-io/conan/branch/master
245
246 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg
247 :target: https://codecov.io/gh/conan-io/conan/branch/develop
248
249 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg
250 :height: 50px
251 :width: 50 px
252 :alt: Conan develop coverage
253
254 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/
255
256 .. _`brew homepage`: http://brew.sh/
257
[end of README.rst]
[start of conans/client/cmd/uploader.py]
1 import os
2 import stat
3 import tarfile
4 import time
5 from collections import defaultdict
6
7 from conans.client.source import complete_recipe_sources
8 from conans.errors import ConanException, NotFoundException
9 from conans.model.manifest import gather_files, FileTreeManifest
10 from conans.model.ref import ConanFileReference, PackageReference, check_valid_ref
11 from conans.paths import (CONAN_MANIFEST, CONANFILE, EXPORT_SOURCES_TGZ_NAME,
12 EXPORT_TGZ_NAME, PACKAGE_TGZ_NAME, CONANINFO)
13 from conans.search.search import search_packages, search_recipes
14 from conans.util.files import (load, clean_dirty, is_dirty,
15 gzopen_without_timestamps, set_dirty)
16 from conans.util.log import logger
17 from conans.util.tracer import (log_recipe_upload, log_compressed_files,
18 log_package_upload)
19
20
21 UPLOAD_POLICY_FORCE = "force-upload"
22 UPLOAD_POLICY_NO_OVERWRITE = "no-overwrite"
23 UPLOAD_POLICY_NO_OVERWRITE_RECIPE = "no-overwrite-recipe"
24 UPLOAD_POLICY_SKIP = "skip-upload"
25
26
27 class CmdUpload(object):
28 """ This class is responsible for uploading packages to remotes. The flow is:
29 - Collect all the data from the local cache:
30 - Collect the refs that match the given pattern _collect_refs_to_upload
31 - Collect for every ref all the binary IDs that have to be uploaded
32 "_collect_packages_to_upload". This may discard binaries that do not
33 belong to the current RREV
34 This collection step handles the interactivity (asking the user yes/no),
35 the errors (don't upload packages with policy=build_always), and computes
36 the full REVISIONS for everything that has to be uploaded.
37 No remote API calls are done in this step, everything is local
38 - Execute the upload. For every ref:
39 - Upload the recipe of the ref: "_upload_recipe"
40 - If not FORCE, check the date "_check_recipe_date", i.e. if there are
41 changes, do not allow uploading if the remote date is newer than the
42 local cache one
43 - Retrieve the sources (exports_sources), if they are not cached, and
44 uploading to a different remote. "complete_recipe_sources"
45 - Gather files and create 2 .tgz (exports, exports_sources) with
46 "_compress_recipe_files"
47 - Decide which files have to be uploaded and deleted from the server
48 based on the difference with the remote snapshot "_recipe_files_to_upload".
49 This can raise if upload policy is not overwrite
50 - Execute the real transfer "remote_manager.upload_recipe()"
51 - For every package_id of every ref: "_upload_package"
52 - Gather files and create package.tgz. "_compress_package_files"
53 - (Optional) Do the integrity check of the package
54 - Decide which files to upload and delete from server:
55 "_package_files_to_upload". Can raise if policy is NOT overwrite
56 - Do the actual upload
57
58 All the REVISIONS are locally defined, not retrieved from servers
59
60 This requires calling the remote API methods:
61 - get_recipe_sources() to get the export_sources if they are missing
62 - get_recipe_snapshot() to do the diff and know what files to upload
63 - get_package_snapshot() to do the diff and know what files to upload
64 - get_recipe_manifest() to check the date and raise if policy requires
65 - get_package_manifest() to raise if policy!=force and manifests change
66 """
67 def __init__(self, cache, user_io, remote_manager, loader, hook_manager):
68 self._cache = cache
69 self._user_io = user_io
70 self._remote_manager = remote_manager
71 self._registry = cache.registry
72 self._loader = loader
73 self._hook_manager = hook_manager
74
75 def upload(self, upload_recorder, reference_or_pattern, package_id=None, all_packages=None,
76 confirm=False, retry=0, retry_wait=0, integrity_check=False, policy=None,
77 remote_name=None, query=None):
78 t1 = time.time()
79 refs, confirm = self._collects_refs_to_upload(package_id, reference_or_pattern, confirm)
80 refs_by_remote = self._collect_packages_to_upload(refs, confirm, remote_name, all_packages,
81 query, package_id)
82 # Do the job
83 for remote, refs in refs_by_remote.items():
84 self._user_io.out.info("Uploading to remote '{}':".format(remote.name))
85 for (ref, conanfile, prefs) in refs:
86 self._upload_ref(conanfile, ref, prefs, retry, retry_wait,
87 integrity_check, policy, remote, upload_recorder)
88
89 logger.debug("UPLOAD: Time manager upload: %f" % (time.time() - t1))
90
91 def _collects_refs_to_upload(self, package_id, reference_or_pattern, confirm):
92 """ validate inputs and compute the refs (without revisions) to be uploaded
93 """
94 if package_id and not check_valid_ref(reference_or_pattern, allow_pattern=False):
95 raise ConanException("-p parameter only allowed with a valid recipe reference, "
96 "not with a pattern")
97
98 if package_id or check_valid_ref(reference_or_pattern, allow_pattern=False):
99 # Upload package
100 ref = ConanFileReference.loads(reference_or_pattern)
101 refs = [ref, ]
102 confirm = True
103 else:
104 refs = search_recipes(self._cache, reference_or_pattern)
105 if not refs:
106 raise NotFoundException(("No packages found matching pattern '%s'" %
107 reference_or_pattern))
108 return refs, confirm
109
110 def _collect_packages_to_upload(self, refs, confirm, remote_name, all_packages, query,
111 package_id):
112 """ compute the references with revisions and the package_ids to be uploaded
113 """
114 # Group recipes by remote
115 refs_by_remote = defaultdict(list)
116 default_remote = (self._registry.remotes.get(remote_name) if remote_name else
117 self._registry.remotes.default)
118
119 for ref in refs:
120 metadata = self._cache.package_layout(ref).load_metadata()
121 ref = ref.copy_with_rev(metadata.recipe.revision)
122 if not remote_name:
123 remote = self._registry.refs.get(ref) or default_remote
124 else:
125 remote = default_remote
126
127 upload = True
128 if not confirm:
129 msg = "Are you sure you want to upload '%s' to '%s'?" % (str(ref), remote.name)
130 upload = self._user_io.request_boolean(msg)
131 if upload:
132 try:
133 conanfile_path = self._cache.conanfile(ref)
134 conanfile = self._loader.load_class(conanfile_path)
135 except NotFoundException:
136 raise NotFoundException(("There is no local conanfile exported as %s" %
137 str(ref)))
138
139 # TODO: This search of binary packages has to be improved, more robust
140 # So only real packages are retrieved
141 if all_packages or query:
142 if all_packages:
143 query = None
144 # better to do a search, that will retrieve real packages with ConanInfo
145 # Not only "package_id" folders that could be empty
146 package_layout = self._cache.package_layout(ref.copy_clear_rev())
147 packages = search_packages(package_layout, query)
148 packages_ids = list(packages.keys())
149 elif package_id:
150 packages_ids = [package_id, ]
151 else:
152 packages_ids = []
153 if packages_ids:
154 if conanfile.build_policy == "always":
155 raise ConanException("Conanfile '%s' has build_policy='always', "
156 "no packages can be uploaded" % str(ref))
157 prefs = []
158 # Gather all the complete PREFS with PREV
159 for package_id in packages_ids:
160 if package_id not in metadata.packages:
161 raise ConanException("Binary package %s:%s not found"
162 % (str(ref), package_id))
163 # Filter packages that don't match the recipe revision
164 if self._cache.config.revisions_enabled and ref.revision:
165 rec_rev = metadata.packages[package_id].recipe_revision
166 if ref.revision != rec_rev:
167 self._user_io.out.warn("Skipping package '%s', it doesn't belong to the "
168 "current recipe revision" % package_id)
169 continue
170 package_revision = metadata.packages[package_id].revision
171 assert package_revision is not None, "PREV cannot be None to upload"
172 prefs.append(PackageReference(ref, package_id, package_revision))
173 refs_by_remote[remote].append((ref, conanfile, prefs))
174
175 return refs_by_remote
176
177 def _upload_ref(self, conanfile, ref, prefs, retry, retry_wait, integrity_check, policy,
178 recipe_remote, upload_recorder):
179 """ Uploads the recipes and binaries identified by ref
180 """
181 assert (ref.revision is not None), "Cannot upload a recipe without RREV"
182 conanfile_path = self._cache.conanfile(ref)
183 # FIXME: I think it makes no sense to specify a remote to "pre_upload"
184 # FIXME: because the recipe can have one and the package a different one
185 self._hook_manager.execute("pre_upload", conanfile_path=conanfile_path,
186 reference=ref, remote=recipe_remote)
187
188 self._user_io.out.info("Uploading %s to remote '%s'" % (str(ref), recipe_remote.name))
189 self._upload_recipe(ref, conanfile, retry, retry_wait, policy, recipe_remote)
190 upload_recorder.add_recipe(ref, recipe_remote.name, recipe_remote.url)
191
192 # Now the binaries
193 if prefs:
194 total = len(prefs)
195 for index, pref in enumerate(prefs):
196 p_remote = recipe_remote
197 msg = ("Uploading package %d/%d: %s to '%s'" % (index+1, total, str(pref.id),
198 p_remote.name))
199 self._user_io.out.info(msg)
200 self._upload_package(pref, retry, retry_wait,
201 integrity_check, policy, p_remote)
202 upload_recorder.add_package(pref, p_remote.name, p_remote.url)
203
204 # FIXME: I think it makes no sense to specify a remote to "post_upload"
205 # FIXME: because the recipe can have one and the package a different one
206 self._hook_manager.execute("post_upload", conanfile_path=conanfile_path, reference=ref,
207 remote=recipe_remote)
208
209 def _upload_recipe(self, ref, conanfile, retry, retry_wait, policy, remote):
210 if policy != UPLOAD_POLICY_FORCE:
211 remote_manifest = self._check_recipe_date(ref, remote)
212 else:
213 remote_manifest = None
214
215 current_remote = self._registry.refs.get(ref)
216
217 if remote != current_remote:
218 complete_recipe_sources(self._remote_manager, self._cache, conanfile, ref)
219
220 conanfile_path = self._cache.conanfile(ref)
221 self._hook_manager.execute("pre_upload_recipe", conanfile_path=conanfile_path,
222 reference=ref, remote=remote)
223
224 t1 = time.time()
225 the_files = self._compress_recipe_files(ref)
226 if policy == UPLOAD_POLICY_SKIP:
227 return ref
228 files_to_upload, deleted = self._recipe_files_to_upload(ref, policy, the_files,
229 remote, remote_manifest)
230 if files_to_upload or deleted:
231 self._remote_manager.upload_recipe(ref, files_to_upload, deleted,
232 remote, retry, retry_wait)
233 self._upload_recipe_end_msg(ref, remote)
234 else:
235 self._user_io.out.info("Recipe is up to date, upload skipped")
236 duration = time.time() - t1
237 log_recipe_upload(ref, duration, the_files, remote.name)
238 self._hook_manager.execute("post_upload_recipe", conanfile_path=conanfile_path,
239 reference=ref, remote=remote)
240
241 # The recipe wasn't in the registry or it has changed the revision field only
242 if not current_remote:
243 self._registry.refs.set(ref, remote.name)
244
245 return ref
246
247 def _upload_package(self, pref, retry=None, retry_wait=None, integrity_check=False,
248 policy=None, p_remote=None):
249
250 assert (pref.revision is not None), "Cannot upload a package without PREV"
251 assert (pref.ref.revision is not None), "Cannot upload a package without RREV"
252
253 conanfile_path = self._cache.conanfile(pref.ref)
254 self._hook_manager.execute("pre_upload_package", conanfile_path=conanfile_path,
255 reference=pref.ref,
256 package_id=pref.id,
257 remote=p_remote)
258
259 t1 = time.time()
260 the_files = self._compress_package_files(pref, integrity_check)
261 if policy == UPLOAD_POLICY_SKIP:
262 return None
263 files_to_upload, deleted = self._package_files_to_upload(pref, policy, the_files, p_remote)
264
265 if files_to_upload or deleted:
266 self._remote_manager.upload_package(pref, files_to_upload, deleted, p_remote, retry,
267 retry_wait)
268 logger.debug("UPLOAD: Time upload package: %f" % (time.time() - t1))
269 else:
270 self._user_io.out.info("Package is up to date, upload skipped")
271
272 duration = time.time() - t1
273 log_package_upload(pref, duration, the_files, p_remote)
274 self._hook_manager.execute("post_upload_package", conanfile_path=conanfile_path,
275 reference=pref.ref, package_id=pref.id, remote=p_remote)
276
277 logger.debug("UPLOAD: Time uploader upload_package: %f" % (time.time() - t1))
278 cur_package_remote = self._registry.prefs.get(pref.copy_clear_rev())
279 if not cur_package_remote and policy != UPLOAD_POLICY_SKIP:
280 self._registry.prefs.set(pref, p_remote.name)
281
282 return pref
283
284 def _compress_recipe_files(self, ref):
285 export_folder = self._cache.export(ref)
286
287 for f in (EXPORT_TGZ_NAME, EXPORT_SOURCES_TGZ_NAME):
288 tgz_path = os.path.join(export_folder, f)
289 if is_dirty(tgz_path):
290 self._user_io.out.warn("%s: Removing %s, marked as dirty" % (str(ref), f))
291 os.remove(tgz_path)
292 clean_dirty(tgz_path)
293
294 files, symlinks = gather_files(export_folder)
295 if CONANFILE not in files or CONAN_MANIFEST not in files:
296 raise ConanException("Cannot upload corrupted recipe '%s'" % str(ref))
297 export_src_folder = self._cache.export_sources(ref, short_paths=None)
298 src_files, src_symlinks = gather_files(export_src_folder)
299 the_files = _compress_recipe_files(files, symlinks, src_files, src_symlinks, export_folder,
300 self._user_io.out)
301 return the_files
302
303 def _compress_package_files(self, pref, integrity_check):
304
305 t1 = time.time()
306 # existing package, will use short paths if defined
307 package_folder = self._cache.package(pref, short_paths=None)
308
309 if is_dirty(package_folder):
310 raise ConanException("Package %s is corrupted, aborting upload.\n"
311 "Remove it with 'conan remove %s -p=%s'"
312 % (pref, pref.ref, pref.id))
313 tgz_path = os.path.join(package_folder, PACKAGE_TGZ_NAME)
314 if is_dirty(tgz_path):
315 self._user_io.out.warn("%s: Removing %s, marked as dirty"
316 % (str(pref), PACKAGE_TGZ_NAME))
317 os.remove(tgz_path)
318 clean_dirty(tgz_path)
319 # Get all the files in that directory
320 files, symlinks = gather_files(package_folder)
321
322 if CONANINFO not in files or CONAN_MANIFEST not in files:
323 logger.error("Missing info or manifest in uploading files: %s" % (str(files)))
324 raise ConanException("Cannot upload corrupted package '%s'" % str(pref))
325
326 logger.debug("UPLOAD: Time remote_manager build_files_set : %f" % (time.time() - t1))
327 if integrity_check:
328 self._package_integrity_check(pref, files, package_folder)
329 logger.debug("UPLOAD: Time remote_manager check package integrity : %f"
330 % (time.time() - t1))
331
332 the_files = _compress_package_files(files, symlinks, package_folder, self._user_io.out)
333 return the_files
334
335 def _recipe_files_to_upload(self, ref, policy, the_files, remote, remote_manifest):
336 # Get the remote snapshot
337 remote_snapshot = self._remote_manager.get_recipe_snapshot(ref, remote)
338
339 if remote_snapshot and policy != UPLOAD_POLICY_FORCE:
340 local_manifest = FileTreeManifest.loads(load(the_files["conanmanifest.txt"]))
341
342 if remote_manifest == local_manifest:
343 return None, None
344
345 if policy in (UPLOAD_POLICY_NO_OVERWRITE, UPLOAD_POLICY_NO_OVERWRITE_RECIPE):
346 raise ConanException("Local recipe is different from the remote recipe. "
347 "Forbidden overwrite.")
348
349 files_to_upload = {filename.replace("\\", "/"): path
350 for filename, path in the_files.items()}
351 deleted = set(remote_snapshot).difference(the_files)
352 return files_to_upload, deleted
353
354 def _package_files_to_upload(self, pref, policy, the_files, remote):
355 # Get the remote snapshot
356 remote_snapshot = self._remote_manager.get_package_snapshot(pref, remote)
357
358 if remote_snapshot:
359 remote_manifest, _ = self._remote_manager.get_package_manifest(pref, remote)
360 local_manifest = FileTreeManifest.loads(load(the_files["conanmanifest.txt"]))
361
362 if remote_manifest == local_manifest:
363 return None, None
364
365 if policy == UPLOAD_POLICY_NO_OVERWRITE:
366 raise ConanException("Local package is different from the remote package. "
367 "Forbidden overwrite.")
368 files_to_upload = the_files
369 deleted = set(remote_snapshot).difference(the_files)
370
371 return files_to_upload, deleted
372
373 def _upload_recipe_end_msg(self, ref, remote):
374 msg = "Uploaded conan recipe '%s' to '%s'" % (str(ref), remote.name)
375 url = remote.url.replace("https://api.bintray.com/conan", "https://bintray.com")
376 msg += ": %s" % url
377 self._user_io.out.info(msg)
378
379 def _package_integrity_check(self, pref, files, package_folder):
380 # If package has been modified remove tgz to regenerate it
381 self._user_io.out.rewrite_line("Checking package integrity...")
382
383 # short_paths = None is enough if there exist short_paths
384 layout = self._cache.package_layout(pref.ref, short_paths=None)
385 read_manifest, expected_manifest = layout.package_manifests(pref)
386
387 if read_manifest != expected_manifest:
388 self._user_io.out.writeln("")
389 diff = read_manifest.difference(expected_manifest)
390 for fname, (h1, h2) in diff.items():
391 self._user_io.out.warn("Mismatched checksum '%s' (manifest: %s, file: %s)"
392 % (fname, h1, h2))
393
394 if PACKAGE_TGZ_NAME in files:
395 try:
396 tgz_path = os.path.join(package_folder, PACKAGE_TGZ_NAME)
397 os.unlink(tgz_path)
398 except Exception:
399 pass
400 error_msg = os.linesep.join("Mismatched checksum '%s' (manifest: %s, file: %s)"
401 % (fname, h1, h2) for fname, (h1, h2) in diff.items())
402 logger.error("Manifests don't match!\n%s" % error_msg)
403 raise ConanException("Cannot upload corrupted package '%s'" % str(pref))
404 else:
405 self._user_io.out.rewrite_line("Package integrity OK!")
406 self._user_io.out.writeln("")
407
408 def _check_recipe_date(self, ref, remote):
409 try:
410 remote_recipe_manifest, ref = self._remote_manager.get_recipe_manifest(ref, remote)
411 except NotFoundException:
412 return # First time uploading this package
413
414 local_manifest = self._cache.package_layout(ref).recipe_manifest()
415 if (remote_recipe_manifest != local_manifest and
416 remote_recipe_manifest.time > local_manifest.time):
417 self._print_manifest_information(remote_recipe_manifest, local_manifest, ref, remote)
418 raise ConanException("Remote recipe is newer than local recipe: "
419 "\n Remote date: %s\n Local date: %s" %
420 (remote_recipe_manifest.time, local_manifest.time))
421
422 return remote_recipe_manifest
423
424 def _print_manifest_information(self, remote_recipe_manifest, local_manifest, ref, remote):
425 try:
426 self._user_io.out.info("\n%s" % ("-"*40))
427 self._user_io.out.info("Remote manifest:")
428 self._user_io.out.info(remote_recipe_manifest)
429 self._user_io.out.info("Local manifest:")
430 self._user_io.out.info(local_manifest)
431 difference = remote_recipe_manifest.difference(local_manifest)
432 if "conanfile.py" in difference:
433 contents = load(os.path.join(self._cache.export(ref), "conanfile.py"))
434 endlines = "\\r\\n" if "\r\n" in contents else "\\n"
435 self._user_io.out.info("Local 'conanfile.py' using '%s' line-ends" % endlines)
436 remote_contents = self._remote_manager.get_recipe_path(ref, path="conanfile.py",
437 remote=remote)
438 endlines = "\\r\\n" if "\r\n" in remote_contents else "\\n"
439 self._user_io.out.info("Remote 'conanfile.py' using '%s' line-ends" % endlines)
440 self._user_io.out.info("\n%s" % ("-"*40))
441 except Exception as e:
442 self._user_io.out.info("Error printing information about the diff: %s" % str(e))
443
444
445 def _compress_recipe_files(files, symlinks, src_files, src_symlinks, dest_folder, output):
446 # This is the minimum recipe
447 result = {CONANFILE: files.pop(CONANFILE),
448 CONAN_MANIFEST: files.pop(CONAN_MANIFEST)}
449
450 export_tgz_path = files.pop(EXPORT_TGZ_NAME, None)
451 sources_tgz_path = files.pop(EXPORT_SOURCES_TGZ_NAME, None)
452
453 def add_tgz(tgz_name, tgz_path, tgz_files, tgz_symlinks, msg):
454 if tgz_path:
455 result[tgz_name] = tgz_path
456 elif tgz_files:
457 output.rewrite_line(msg)
458 tgz_path = compress_files(tgz_files, tgz_symlinks, tgz_name, dest_folder, output)
459 result[tgz_name] = tgz_path
460
461 add_tgz(EXPORT_TGZ_NAME, export_tgz_path, files, symlinks, "Compressing recipe...")
462 add_tgz(EXPORT_SOURCES_TGZ_NAME, sources_tgz_path, src_files, src_symlinks,
463 "Compressing recipe sources...")
464
465 return result
466
467
468 def _compress_package_files(files, symlinks, dest_folder, output):
469 tgz_path = files.get(PACKAGE_TGZ_NAME)
470 if not tgz_path:
471 output.writeln("Compressing package...")
472 tgz_files = {f: path for f, path in files.items() if f not in [CONANINFO, CONAN_MANIFEST]}
473 tgz_path = compress_files(tgz_files, symlinks, PACKAGE_TGZ_NAME, dest_folder, output)
474
475 return {PACKAGE_TGZ_NAME: tgz_path,
476 CONANINFO: files[CONANINFO],
477 CONAN_MANIFEST: files[CONAN_MANIFEST]}
478
479
480 def compress_files(files, symlinks, name, dest_dir, output=None):
481 t1 = time.time()
482 # FIXME, better write to disk sequentially and not keep tgz contents in memory
483 tgz_path = os.path.join(dest_dir, name)
484 set_dirty(tgz_path)
485 with open(tgz_path, "wb") as tgz_handle:
486 # tgz_contents = BytesIO()
487 tgz = gzopen_without_timestamps(name, mode="w", fileobj=tgz_handle)
488
489 for filename, dest in sorted(symlinks.items()):
490 info = tarfile.TarInfo(name=filename)
491 info.type = tarfile.SYMTYPE
492 info.linkname = dest
493 tgz.addfile(tarinfo=info)
494
495 mask = ~(stat.S_IWOTH | stat.S_IWGRP)
496 i_file = 0
497 n_files = len(files)
498 last_progress = None
499 if output and n_files > 1 and not output.is_terminal:
500 output.write("[")
501 for filename, abs_path in sorted(files.items()):
502 info = tarfile.TarInfo(name=filename)
503 info.size = os.stat(abs_path).st_size
504 info.mode = os.stat(abs_path).st_mode & mask
505 if os.path.islink(abs_path):
506 info.type = tarfile.SYMTYPE
507 info.linkname = os.readlink(abs_path) # @UndefinedVariable
508 tgz.addfile(tarinfo=info)
509 else:
510 with open(abs_path, 'rb') as file_handler:
511 tgz.addfile(tarinfo=info, fileobj=file_handler)
512 if output and n_files > 1:
513 i_file = i_file + 1
514 units = min(50, int(50 * i_file / n_files))
515 if last_progress != units: # Avoid screen refresh if nothing has changed
516 if output.is_terminal:
517 text = "%s/%s files" % (i_file, n_files)
518 output.rewrite_line("[%s%s] %s" % ('=' * units, ' ' * (50 - units), text))
519 else:
520 output.write('=' * (units - (last_progress or 0)))
521 last_progress = units
522
523 if output and n_files > 1:
524 if output.is_terminal:
525 output.writeln("")
526 else:
527 output.writeln("]")
528 tgz.close()
529
530 clean_dirty(tgz_path)
531 duration = time.time() - t1
532 log_compressed_files(files, duration, tgz_path)
533
534 return tgz_path
535
[end of conans/client/cmd/uploader.py]
[start of conans/client/rest/rest_client_common.py]
1 import json
2
3 from requests.auth import AuthBase, HTTPBasicAuth
4
5 from conans import COMPLEX_SEARCH_CAPABILITY
6 from conans.errors import (EXCEPTION_CODE_MAPPING, NotFoundException, ConanException,
7 AuthenticationException, RecipeNotFoundException,
8 PackageNotFoundException)
9 from conans.model.ref import ConanFileReference
10 from conans.search.search import filter_packages
11 from conans.util.files import decode_text
12 from conans.util.log import logger
13
14
15 class JWTAuth(AuthBase):
16 """Attaches JWT Authentication to the given Request object."""
17 def __init__(self, token):
18 self.token = token
19
20 def __call__(self, request):
21 if self.token:
22 request.headers['Authorization'] = "Bearer %s" % str(self.token)
23 return request
24
25
26 def _base_error(error_code):
27 return int(str(error_code)[0] + "00")
28
29
30 def get_exception_from_error(error_code):
31 try:
32 tmp = {}
33 for key, value in EXCEPTION_CODE_MAPPING.items():
34 if key not in (RecipeNotFoundException, PackageNotFoundException):
35 tmp[value] = key
36 if error_code in tmp:
37 logger.debug("REST ERROR: %s" % str(tmp[error_code]))
38 return tmp[error_code]
39 else:
40 logger.debug("REST ERROR: %s" % str(_base_error(error_code)))
41 return tmp[_base_error(error_code)]
42 except KeyError:
43 return None
44
45
46 def handle_return_deserializer(deserializer=None):
47 """Decorator for rest api methods.
48 Map exceptions and http return codes and deserialize if needed.
49
50 deserializer: Function used to deserialize values"""
51 def handle_return(method):
52 def inner(*argc, **argv):
53 ret = method(*argc, **argv)
54 if ret.status_code != 200:
55 ret.charset = "utf-8" # To be able to access ret.text (ret.content are bytes)
56 text = ret.text if ret.status_code != 404 else "404 Not found"
57 raise get_exception_from_error(ret.status_code)(text)
58 return deserializer(ret.content) if deserializer else decode_text(ret.content)
59 return inner
60 return handle_return
61
62
63 class RestCommonMethods(object):
64
65 def __init__(self, remote_url, token, custom_headers, output, requester, verify_ssl,
66 put_headers=None):
67
68 self.token = token
69 self.remote_url = remote_url
70 self.custom_headers = custom_headers
71 self._output = output
72 self.requester = requester
73 self.verify_ssl = verify_ssl
74 self._put_headers = put_headers
75
76 @property
77 def auth(self):
78 return JWTAuth(self.token)
79
80 @handle_return_deserializer()
81 def authenticate(self, user, password):
82 """Sends user + password to get a token"""
83 auth = HTTPBasicAuth(user, password)
84 url = self.router.common_authenticate()
85 logger.debug("REST: Authenticate: %s" % url)
86 ret = self.requester.get(url, auth=auth, headers=self.custom_headers,
87 verify=self.verify_ssl)
88 if ret.status_code == 401:
89 raise AuthenticationException("Wrong user or password")
90 # Cannot check content-type=text/html, conan server is doing it wrong
91 if not ret.ok or "html>" in str(ret.content):
92 raise ConanException("%s\n\nInvalid server response, check remote URL and "
93 "try again" % str(ret.content))
94 return ret
95
96 @handle_return_deserializer()
97 def check_credentials(self):
98 """If token is not valid will raise AuthenticationException.
99 User will be asked for new user/pass"""
100 url = self.router.common_check_credentials()
101 logger.debug("REST: Check credentials: %s" % url)
102 ret = self.requester.get(url, auth=self.auth, headers=self.custom_headers,
103 verify=self.verify_ssl)
104 return ret
105
106 def server_info(self):
107 """Get information about the server: status, version, type and capabilities"""
108 url = self.router.ping()
109 logger.debug("REST: ping: %s" % url)
110
111 ret = self.requester.get(url, auth=self.auth, headers=self.custom_headers,
112 verify=self.verify_ssl)
113 if ret.status_code == 404:
114 raise NotFoundException("Not implemented endpoint")
115
116 version_check = ret.headers.get('X-Conan-Client-Version-Check', None)
117 server_version = ret.headers.get('X-Conan-Server-Version', None)
118 server_capabilities = ret.headers.get('X-Conan-Server-Capabilities', "")
119 server_capabilities = [cap.strip() for cap in server_capabilities.split(",") if cap]
120
121 return version_check, server_version, server_capabilities
122
123 def get_json(self, url, data=None):
124 headers = self.custom_headers
125 if data: # POST request
126 headers.update({'Content-type': 'application/json',
127 'Accept': 'text/plain',
128 'Accept': 'application/json'})
129 logger.debug("REST: post: %s" % url)
130 response = self.requester.post(url, auth=self.auth, headers=headers,
131 verify=self.verify_ssl,
132 stream=True,
133 data=json.dumps(data))
134 else:
135 logger.debug("REST: get: %s" % url)
136 response = self.requester.get(url, auth=self.auth, headers=headers,
137 verify=self.verify_ssl,
138 stream=True)
139
140 if response.status_code != 200: # Error message is text
141 response.charset = "utf-8" # To be able to access ret.text (ret.content are bytes)
142 raise get_exception_from_error(response.status_code)(response.text)
143
144 content = decode_text(response.content)
145 content_type = response.headers.get("Content-Type")
146 if content_type != 'application/json':
147 raise ConanException("%s\n\nResponse from remote is not json, but '%s'"
148 % (content, content_type))
149
150 try: # This can fail, if some proxy returns 200 and an html message
151 result = json.loads(content)
152 except Exception:
153 raise ConanException("Remote responded with broken json: %s" % content)
154 if not isinstance(result, dict):
155 raise ConanException("Unexpected server response %s" % result)
156 return result
157
158 def upload_recipe(self, ref, files_to_upload, deleted, retry, retry_wait):
159 if files_to_upload:
160 self._upload_recipe(ref, files_to_upload, retry, retry_wait)
161 if deleted:
162 self._remove_conanfile_files(ref, deleted)
163
164 def get_recipe_snapshot(self, ref):
165 self.check_credentials()
166
167 url = self.router.recipe_snapshot(ref)
168 snap = self._get_snapshot(url)
169 return snap
170
171 def get_package_snapshot(self, pref):
172 self.check_credentials()
173
174 url = self.router.package_snapshot(pref)
175 snap = self._get_snapshot(url)
176 return snap
177
178 def upload_package(self, pref, files_to_upload, deleted, retry, retry_wait):
179 if files_to_upload:
180 self._upload_package(pref, files_to_upload, retry, retry_wait)
181 if deleted:
182 raise Exception("This shouldn't be happening, deleted files "
183 "in local package present in remote: %s.\n Please, report it at "
184 "https://github.com/conan-io/conan/issues " % str(deleted))
185
186 def search(self, pattern=None, ignorecase=True):
187 """
188 the_files: dict with relative_path: content
189 """
190 url = self.router.search(pattern, ignorecase)
191 response = self.get_json(url)["results"]
192 return [ConanFileReference.loads(reference) for reference in response]
193
194 def search_packages(self, ref, query):
195
196 if not query:
197 url = self.router.search_packages(ref)
198 package_infos = self.get_json(url)
199 return package_infos
200
201 # Read capabilities
202 try:
203 _, _, capabilities = self.server_info()
204 except NotFoundException:
205 capabilities = []
206
207 if COMPLEX_SEARCH_CAPABILITY in capabilities:
208 url = self.router.search_packages(ref, query)
209 package_infos = self.get_json(url)
210 return package_infos
211 else:
212 url = self.router.search_packages(ref)
213 package_infos = self.get_json(url)
214 return filter_packages(query, package_infos)
215
216 def _post_json(self, url, payload):
217 logger.debug("REST: post: %s" % url)
218 response = self.requester.post(url,
219 auth=self.auth,
220 headers=self.custom_headers,
221 verify=self.verify_ssl,
222 json=payload)
223 return response
224
[end of conans/client/rest/rest_client_common.py]
[start of conans/errors.py]
1 """
2 Exceptions raised and handled in Conan server.
3 These exceptions are mapped between server (as an HTTP response) and client
4 through the REST API. When an error happens in the server it is translated to an
5 HTTP error code that is sent to the client. The client reads the status code and
6 raises the matching exception.
7
8 see return_plugin.py
9
10 """
11 from contextlib import contextmanager
12
13 from conans.util.env_reader import get_env
14
15
16 @contextmanager
17 def conanfile_exception_formatter(conanfile_name, func_name):
18 """
19 Decorator to throw an exception formatted with the line of the conanfile where the error occurs.
20 :param conanfile_name: Name of the conanfile where the error occurred
21 :return:
22 """
23 try:
24 yield
25 except ConanInvalidConfiguration as exc:
26 msg = "{}: Invalid configuration: {}".format(conanfile_name, exc) # TODO: Move from here?
27 raise ConanInvalidConfiguration(msg)
28 except Exception as exc:
29 msg = _format_conanfile_exception(conanfile_name, func_name, exc)
30 raise ConanExceptionInUserConanfileMethod(msg)
31
32
33 def _format_conanfile_exception(scope, method, exception):
34 """
35 It will iterate the traceback lines, when it finds that the source code is inside the users
36 conanfile it "starts recording" the messages; when the trace exits the conanfile we return
37 the traces.
38 """
39 import sys
40 import traceback
41 if get_env("CONAN_VERBOSE_TRACEBACK", False):
42 return traceback.format_exc()
43 try:
44 conanfile_reached = False
45 tb = sys.exc_info()[2]
46 index = 0
47 content_lines = []
48
49 while True: # If out of index will raise and will be captured later
50 # 40 levels of nested functions max, get the latest
51 filepath, line, name, contents = traceback.extract_tb(tb, 40)[index]
52 if "conanfile.py" not in filepath: # Avoid show trace from internal conan source code
53 if conanfile_reached: # The error goes to internal code, exit print
54 break
55 else:
56 if not conanfile_reached: # First line
57 msg = "%s: Error in %s() method" % (scope, method)
58 msg += ", line %d\n\t%s" % (line, contents)
59 else:
60 msg = ("while calling '%s', line %d\n\t%s" % (name, line, contents)
61 if line else "\n\t%s" % contents)
62 content_lines.append(msg)
63 conanfile_reached = True
64 index += 1
65 except Exception:
66 pass
67 ret = "\n".join(content_lines)
68 ret += "\n\t%s: %s" % (exception.__class__.__name__, str(exception))
69 return ret
70
71
72 class ConanException(Exception):
73 """
74 Generic conans exception
75 """
76 def __init__(self, *args, **kwargs):
77 self.info = None
78 self.remote = kwargs.pop("remote", None)
79 super(ConanException, self).__init__(*args, **kwargs)
80
81 def remote_message(self):
82 if self.remote:
83 return " [Remote: {}]".format(self.remote.name)
84 return ""
85
86 def __str__(self):
87 from conans.util.files import exception_message_safe
88 msg = super(ConanException, self).__str__()
89 if self.remote:
90 return "{}.{}".format(exception_message_safe(msg), self.remote_message())
91
92 return exception_message_safe(msg)
93
94
95 class NoRestV2Available(ConanException):
96 pass
97
98
99 class NoRemoteAvailable(ConanException):
100 """ No default remote configured or the specified remote do not exists
101 """
102 pass
103
104
105 class InvalidNameException(ConanException):
106 pass
107
108
109 class ConanConnectionError(ConanException):
110 pass
111
112
113 class ConanOutdatedClient(ConanException):
114 pass
115
116
117 class ConanExceptionInUserConanfileMethod(ConanException):
118 pass
119
120
121 class ConanInvalidConfiguration(ConanExceptionInUserConanfileMethod):
122 pass
123
124
125 # Remote exceptions #
126 class InternalErrorException(ConanException):
127 """
128 Generic 500 error
129 """
130 pass
131
132
133 class RequestErrorException(ConanException):
134 """
135 Generic 400 error
136 """
137 pass
138
139
140 class AuthenticationException(ConanException): # 401
141 """
142 401 error
143 """
144 pass
145
146
147 class ForbiddenException(ConanException): # 403
148 """
149 403 error
150 """
151 pass
152
153
154 class NotFoundException(ConanException): # 404
155 """
156 404 error
157 """
158
159 def __init__(self, *args, **kwargs):
160 self.remote = kwargs.pop("remote", None)
161 super(NotFoundException, self).__init__(*args, **kwargs)
162
163
164 class RecipeNotFoundException(NotFoundException):
165
166 def __init__(self, ref, remote=None, print_rev=False):
167 from conans.model.ref import ConanFileReference
168 assert isinstance(ref, ConanFileReference), "RecipeNotFoundException requires a " \
169 "ConanFileReference"
170 self.ref = ref
171 self.print_rev = print_rev
172 super(RecipeNotFoundException, self).__init__(remote=remote)
173
174 def __str__(self):
175 tmp = self.ref.full_repr() if self.print_rev else str(self.ref)
176 return "Recipe not found: '{}'".format(tmp, self.remote_message())
177
178
179 class PackageNotFoundException(NotFoundException):
180
181 def __init__(self, pref, remote=None, print_rev=False):
182 from conans.model.ref import PackageReference
183 assert isinstance(pref, PackageReference), "PackageNotFoundException requires a " \
184 "PackageReference"
185 self.pref = pref
186 self.print_rev = print_rev
187
188 super(PackageNotFoundException, self).__init__(remote=remote)
189
190 def __str__(self):
191 tmp = self.pref.full_repr() if self.print_rev else str(self.pref)
192 return "Binary package not found: '{}'{}".format(tmp, self.remote_message())
193
194
195 class UserInterfaceErrorException(RequestErrorException):
196 """
197 420 error
198 """
199 pass
200
201
202 EXCEPTION_CODE_MAPPING = {InternalErrorException: 500,
203 RequestErrorException: 400,
204 AuthenticationException: 401,
205 ForbiddenException: 403,
206 NotFoundException: 404,
207 RecipeNotFoundException: 404,
208 PackageNotFoundException: 404,
209 UserInterfaceErrorException: 420}
210
[end of conans/errors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
conan-io/conan
|
01e611766f61d85183e33e20dc17a85fcdaea049
|
[revisions] Handle only_v2 capability from ping endpoint
Artifactory now returns ``only_v2`` server capability in some cases. This together with the ``revisions`` one should be enough to raise an error in case a user wants to perform a ``conan install`` without revisons when V1 is not supported.
This will improve the ERROR 500 that we are receiving currently:
```
{
"errors": [
{
"status": 500,
"message": "Unsupported Conan v2 repository request for conan-remote"
}
]
}
```
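To make the intended behaviour concrete, here is a minimal hypothetical sketch of the client-side guard being described (the names `ONLY_V2` and `check_remote_compatibility` are assumptions for illustration; the actual change lives in the REST client, as the patch below shows):

```python
from conans.errors import ConanException

ONLY_V2 = "only_v2"  # capability string reported by Artifactory remotes/virtuals

def check_remote_compatibility(server_capabilities, revisions_enabled, remote_url):
    """Refuse to talk the V1 protocol to a remote that only supports V2 (revisions)."""
    if not revisions_enabled and ONLY_V2 in server_capabilities:
        raise ConanException("The remote at '%s' only works with revisions enabled; "
                             "enable them in the client configuration" % remote_url)
```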
|
2019-03-05T16:51:28Z
|
<patch>
diff --git a/conans/__init__.py b/conans/__init__.py
--- a/conans/__init__.py
+++ b/conans/__init__.py
@@ -15,6 +15,7 @@
COMPLEX_SEARCH_CAPABILITY = "complex_search"
CHECKSUM_DEPLOY = "checksum_deploy" # Only when v2
REVISIONS = "revisions" # Only when enabled in config, not by default look at server_launcher.py
+ONLY_V2 = "only_v2" # Remotes and virtuals from Artifactory returns this capability
SERVER_CAPABILITIES = [COMPLEX_SEARCH_CAPABILITY, REVISIONS] # Server is always with revisions
DEFAULT_REVISION_V1 = "0"
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -5,7 +5,6 @@
import requests
import conans
-
from conans import __version__ as client_version
from conans.client import packager, tools
from conans.client.cache.cache import ClientCache
@@ -22,6 +21,7 @@
from conans.client.cmd.uploader import CmdUpload
from conans.client.cmd.user import user_set, users_clean, users_list
from conans.client.conf import ConanClientConfigParser
+from conans.client.graph.graph import RECIPE_EDITABLE
from conans.client.graph.graph_manager import GraphManager
from conans.client.graph.printer import print_graph
from conans.client.graph.proxy import ConanProxy
@@ -62,8 +62,6 @@
from conans.util.files import exception_message_safe, mkdir, save_files
from conans.util.log import configure_logger
from conans.util.tracer import log_command, log_exception
-from conans.client.graph.graph import RECIPE_EDITABLE
-
default_manifest_folder = '.conan_manifests'
diff --git a/conans/client/rest/rest_client.py b/conans/client/rest/rest_client.py
--- a/conans/client/rest/rest_client.py
+++ b/conans/client/rest/rest_client.py
@@ -1,8 +1,9 @@
from collections import defaultdict
-from conans import CHECKSUM_DEPLOY, REVISIONS
+from conans import CHECKSUM_DEPLOY, REVISIONS, ONLY_V2
from conans.client.rest.rest_client_v1 import RestV1Methods
from conans.client.rest.rest_client_v2 import RestV2Methods
+from conans.errors import OnlyV2Available
class RestApiClient(object):
@@ -32,6 +33,8 @@ def _get_api(self):
self.requester, self.verify_ssl, self._put_headers)
_, _, cap = tmp.server_info()
self._cached_capabilities[self.remote_url] = cap
+ if not self._revisions_enabled and ONLY_V2 in cap:
+ raise OnlyV2Available(self.remote_url)
if self._revisions_enabled and REVISIONS in self._cached_capabilities[self.remote_url]:
checksum_deploy = CHECKSUM_DEPLOY in self._cached_capabilities[self.remote_url]
diff --git a/conans/errors.py b/conans/errors.py
--- a/conans/errors.py
+++ b/conans/errors.py
@@ -92,6 +92,15 @@ def __str__(self):
return exception_message_safe(msg)
+class OnlyV2Available(ConanException):
+
+ def __init__(self, remote_url):
+ msg = "The remote at '%s' only works with revisions enabled. " \
+ "Set CONAN_REVISIONS_ENABLED=1 " \
+ "or set 'general.revisions_enabled = 1' at the 'conan.conf'" % remote_url
+ super(OnlyV2Available, self).__init__(msg)
+
+
class NoRestV2Available(ConanException):
pass
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-26916
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: timedelta64 + Timestamp raises
```
>>> np.timedelta64(3600*10**9, 'ns') + pd.Timestamp.now()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc add cannot use operands with types dtype('<m8[ns]') and dtype('O')
```
I think we can fix this by defining `Timestamp.__array_priority__`
</issue>
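To make the proposed mechanism concrete: NumPy's binary operators return `NotImplemented` when the other operand defines a higher `__array_priority__` than the array's (default 0.0) together with the matching reflected method, so Python falls back to that reflected method. NumPy scalars such as `np.timedelta64` follow a similar deferral rule, which is what the proposal relies on. Below is a minimal, self-contained sketch of the mechanism; the `Prioritized` class is purely illustrative and not pandas code.

```python
import numpy as np

class Prioritized:
    # Anything above ndarray's default __array_priority__ (0.0) makes numpy's
    # forward binary op return NotImplemented, so Python then calls __radd__.
    __array_priority__ = 100

    def __radd__(self, other):
        return ("__radd__ called with", other)

print(np.array([1, 2, 3]) + Prioritized())
# ('__radd__ called with', array([1, 2, 3]))
```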
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org): 1.13.3 or higher
168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
172 for recommended and optional dependencies.
173
174 ## Installation from sources
175 To install pandas from source you need Cython in addition to the normal
176 dependencies above. Cython can be installed from pypi:
177
178 ```sh
179 pip install cython
180 ```
181
182 In the `pandas` directory (same one where you found this file after
183 cloning the git repo), execute:
184
185 ```sh
186 python setup.py install
187 ```
188
189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
190
191 ```sh
192 python setup.py develop
193 ```
194
195 Alternatively, you can use `pip` if you want all the dependencies pulled
196 in automatically (the `-e` option is for installing it in [development
197 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
198
199 ```sh
200 pip install -e .
201 ```
202
203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
204
205 ## License
206 [BSD 3](LICENSE)
207
208 ## Documentation
209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
224
225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
226
227 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
228
229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
230
231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
232
233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
234
235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
236
[end of README.md]
[start of pandas/core/tools/datetimes.py]
1 from collections import abc
2 from datetime import datetime, time
3 from functools import partial
4
5 import numpy as np
6
7 from pandas._libs import tslib, tslibs
8 from pandas._libs.tslibs import Timestamp, conversion, parsing
9 from pandas._libs.tslibs.parsing import ( # noqa
10 DateParseError, _format_is_iso, _guess_datetime_format, parse_time_string)
11 from pandas._libs.tslibs.strptime import array_strptime
12 from pandas.util._decorators import deprecate_kwarg
13
14 from pandas.core.dtypes.common import (
15 ensure_object, is_datetime64_dtype, is_datetime64_ns_dtype,
16 is_datetime64tz_dtype, is_float, is_integer, is_integer_dtype,
17 is_list_like, is_numeric_dtype, is_object_dtype, is_scalar)
18 from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
19 from pandas.core.dtypes.missing import notna
20
21 from pandas.core import algorithms
22
23
24 def _guess_datetime_format_for_array(arr, **kwargs):
25 # Try to guess the format based on the first non-NaN element
26 non_nan_elements = notna(arr).nonzero()[0]
27 if len(non_nan_elements):
28 return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs)
29
30
31 def _maybe_cache(arg, format, cache, convert_listlike):
32 """
33 Create a cache of unique dates from an array of dates
34
35 Parameters
36 ----------
37 arg : integer, float, string, datetime, list, tuple, 1-d array, Series
38 format : string
39 Strftime format to parse time
40 cache : boolean
41 True attempts to create a cache of converted values
42 convert_listlike : function
43 Conversion function to apply on dates
44
45 Returns
46 -------
47 cache_array : Series
48 Cache of converted, unique dates. Can be empty
49 """
50 from pandas import Series
51 cache_array = Series()
52 if cache:
53 # Perform a quicker unique check
54 from pandas import Index
55 unique_dates = Index(arg).unique()
56 if len(unique_dates) < len(arg):
57 cache_dates = convert_listlike(unique_dates.to_numpy(),
58 True, format)
59 cache_array = Series(cache_dates, index=unique_dates)
60 return cache_array
61
62
63 def _convert_and_box_cache(arg, cache_array, box, errors, name=None):
64 """
65 Convert array of dates with a cache and box the result
66
67 Parameters
68 ----------
69 arg : integer, float, string, datetime, list, tuple, 1-d array, Series
70 cache_array : Series
71 Cache of converted, unique dates
72 box : boolean
73 True boxes result as an Index-like, False returns an ndarray
74 errors : string
75 'ignore' plus box=True will convert result to Index
76 name : string, default None
77 Name for a DatetimeIndex
78
79 Returns
80 -------
81 result : datetime of converted dates
82 Returns:
83
84 - Index-like if box=True
85 - ndarray if box=False
86 """
87 from pandas import Series, DatetimeIndex, Index
88 result = Series(arg).map(cache_array)
89 if box:
90 if errors == 'ignore':
91 return Index(result, name=name)
92 else:
93 return DatetimeIndex(result, name=name)
94 return result.values
95
96
97 def _return_parsed_timezone_results(result, timezones, box, tz, name):
98 """
99 Return results from array_strptime if a %z or %Z directive was passed.
100
101 Parameters
102 ----------
103 result : ndarray
104 int64 date representations of the dates
105 timezones : ndarray
106 pytz timezone objects
107 box : boolean
108 True boxes result as an Index-like, False returns an ndarray
109 tz : object
110 None or pytz timezone object
111 name : string, default None
112 Name for a DatetimeIndex
113
114 Returns
115 -------
116 tz_result : ndarray of parsed dates with timezone
117 Returns:
118
119 - Index-like if box=True
120 - ndarray of Timestamps if box=False
121
122 """
123 if tz is not None:
124 raise ValueError("Cannot pass a tz argument when "
125 "parsing strings with timezone "
126 "information.")
127 tz_results = np.array([Timestamp(res).tz_localize(zone) for res, zone
128 in zip(result, timezones)])
129 if box:
130 from pandas import Index
131 return Index(tz_results, name=name)
132 return tz_results
133
134
135 def _convert_listlike_datetimes(arg, box, format, name=None, tz=None,
136 unit=None, errors=None,
137 infer_datetime_format=None, dayfirst=None,
138 yearfirst=None, exact=None):
139 """
140 Helper function for to_datetime. Performs the conversions of 1D listlike
141 of dates
142
143 Parameters
144 ----------
145 arg : list, tuple, ndarray, Series, Index
146 date to be parsed
147 box : boolean
148 True boxes result as an Index-like, False returns an ndarray
149 name : object
150 None or string for the Index name
151 tz : object
152 None or 'utc'
153 unit : string
154 None or string of the frequency of the passed data
155 errors : string
156 error handling behaviors from to_datetime, 'raise', 'coerce', 'ignore'
157 infer_datetime_format : boolean
158 inferring format behavior from to_datetime
159 dayfirst : boolean
160 dayfirst parsing behavior from to_datetime
161 yearfirst : boolean
162 yearfirst parsing behavior from to_datetime
163 exact : boolean
164 exact format matching behavior from to_datetime
165
166 Returns
167 -------
168 ndarray of parsed dates
169 Returns:
170
171 - Index-like if box=True
172 - ndarray of Timestamps if box=False
173 """
174 from pandas import DatetimeIndex
175 from pandas.core.arrays import DatetimeArray
176 from pandas.core.arrays.datetimes import (
177 maybe_convert_dtype, objects_to_datetime64ns)
178
179 if isinstance(arg, (list, tuple)):
180 arg = np.array(arg, dtype='O')
181
182 # these are shortcutable
183 if is_datetime64tz_dtype(arg):
184 if not isinstance(arg, (DatetimeArray, DatetimeIndex)):
185 return DatetimeIndex(arg, tz=tz, name=name)
186 if tz == 'utc':
187 arg = arg.tz_convert(None).tz_localize(tz)
188 return arg
189
190 elif is_datetime64_ns_dtype(arg):
191 if box and not isinstance(arg, (DatetimeArray, DatetimeIndex)):
192 try:
193 return DatetimeIndex(arg, tz=tz, name=name)
194 except ValueError:
195 pass
196
197 return arg
198
199 elif unit is not None:
200 if format is not None:
201 raise ValueError("cannot specify both format and unit")
202 arg = getattr(arg, 'values', arg)
203 result, tz_parsed = tslib.array_with_unit_to_datetime(arg, unit,
204 errors=errors)
205 if box:
206 if errors == 'ignore':
207 from pandas import Index
208 result = Index(result, name=name)
209 else:
210 result = DatetimeIndex(result, name=name)
211 # GH 23758: We may still need to localize the result with tz
212 # GH 25546: Apply tz_parsed first (from arg), then tz (from caller)
213 # result will be naive but in UTC
214 try:
215 result = result.tz_localize('UTC').tz_convert(tz_parsed)
216 except AttributeError:
217 # Regular Index from 'ignore' path
218 return result
219 if tz is not None:
220 if result.tz is None:
221 result = result.tz_localize(tz)
222 else:
223 result = result.tz_convert(tz)
224 return result
225 elif getattr(arg, 'ndim', 1) > 1:
226 raise TypeError('arg must be a string, datetime, list, tuple, '
227 '1-d array, or Series')
228
229 # warn if passing timedelta64, raise for PeriodDtype
230 # NB: this must come after unit transformation
231 orig_arg = arg
232 arg, _ = maybe_convert_dtype(arg, copy=False)
233
234 arg = ensure_object(arg)
235 require_iso8601 = False
236
237 if infer_datetime_format and format is None:
238 format = _guess_datetime_format_for_array(arg, dayfirst=dayfirst)
239
240 if format is not None:
241 # There is a special fast-path for iso8601 formatted
242 # datetime strings, so in those cases don't use the inferred
243 # format because this path makes process slower in this
244 # special case
245 format_is_iso8601 = _format_is_iso(format)
246 if format_is_iso8601:
247 require_iso8601 = not infer_datetime_format
248 format = None
249
250 tz_parsed = None
251 result = None
252
253 if format is not None:
254 try:
255 # shortcut formatting here
256 if format == '%Y%m%d':
257 try:
258 # pass orig_arg as float-dtype may have been converted to
259 # datetime64[ns]
260 orig_arg = ensure_object(orig_arg)
261 result = _attempt_YYYYMMDD(orig_arg, errors=errors)
262 except (ValueError, TypeError, tslibs.OutOfBoundsDatetime):
263 raise ValueError("cannot convert the input to "
264 "'%Y%m%d' date format")
265
266 # fallback
267 if result is None:
268 try:
269 result, timezones = array_strptime(
270 arg, format, exact=exact, errors=errors)
271 if '%Z' in format or '%z' in format:
272 return _return_parsed_timezone_results(
273 result, timezones, box, tz, name)
274 except tslibs.OutOfBoundsDatetime:
275 if errors == 'raise':
276 raise
277 elif errors == 'coerce':
278 result = np.empty(arg.shape, dtype='M8[ns]')
279 iresult = result.view('i8')
280 iresult.fill(tslibs.iNaT)
281 else:
282 result = arg
283 except ValueError:
284 # if format was inferred, try falling back
285 # to array_to_datetime - terminate here
286 # for specified formats
287 if not infer_datetime_format:
288 if errors == 'raise':
289 raise
290 elif errors == 'coerce':
291 result = np.empty(arg.shape, dtype='M8[ns]')
292 iresult = result.view('i8')
293 iresult.fill(tslibs.iNaT)
294 else:
295 result = arg
296 except ValueError as e:
297 # Fallback to try to convert datetime objects if timezone-aware
298 # datetime objects are found without passing `utc=True`
299 try:
300 values, tz = conversion.datetime_to_datetime64(arg)
301 return DatetimeIndex._simple_new(values, name=name, tz=tz)
302 except (ValueError, TypeError):
303 raise e
304
305 if result is None:
306 assert format is None or infer_datetime_format
307 utc = tz == 'utc'
308 result, tz_parsed = objects_to_datetime64ns(
309 arg, dayfirst=dayfirst, yearfirst=yearfirst,
310 utc=utc, errors=errors, require_iso8601=require_iso8601,
311 allow_object=True)
312
313 if tz_parsed is not None:
314 if box:
315 # We can take a shortcut since the datetime64 numpy array
316 # is in UTC
317 return DatetimeIndex._simple_new(result, name=name,
318 tz=tz_parsed)
319 else:
320 # Convert the datetime64 numpy array to an numpy array
321 # of datetime objects
322 result = [Timestamp(ts, tz=tz_parsed).to_pydatetime()
323 for ts in result]
324 return np.array(result, dtype=object)
325
326 if box:
327 # Ensure we return an Index in all cases where box=True
328 if is_datetime64_dtype(result):
329 return DatetimeIndex(result, tz=tz, name=name)
330 elif is_object_dtype(result):
331 # e.g. an Index of datetime objects
332 from pandas import Index
333 return Index(result, name=name)
334 return result
335
336
337 def _adjust_to_origin(arg, origin, unit):
338 """
339 Helper function for to_datetime.
340 Adjust input argument to the specified origin
341
342 Parameters
343 ----------
344 arg : list, tuple, ndarray, Series, Index
345 date to be adjusted
346 origin : 'julian' or Timestamp
347 origin offset for the arg
348 unit : string
349 passed unit from to_datetime, must be 'D'
350
351 Returns
352 -------
353 ndarray or scalar of adjusted date(s)
354 """
355 if origin == 'julian':
356 original = arg
357 j0 = Timestamp(0).to_julian_date()
358 if unit != 'D':
359 raise ValueError("unit must be 'D' for origin='julian'")
360 try:
361 arg = arg - j0
362 except TypeError:
363 raise ValueError("incompatible 'arg' type for given "
364 "'origin'='julian'")
365
366 # preemptively check this for a nice range
367 j_max = Timestamp.max.to_julian_date() - j0
368 j_min = Timestamp.min.to_julian_date() - j0
369 if np.any(arg > j_max) or np.any(arg < j_min):
370 raise tslibs.OutOfBoundsDatetime(
371 "{original} is Out of Bounds for "
372 "origin='julian'".format(original=original))
373 else:
374 # arg must be numeric
375 if not ((is_scalar(arg) and (is_integer(arg) or is_float(arg))) or
376 is_numeric_dtype(np.asarray(arg))):
377 raise ValueError(
378 "'{arg}' is not compatible with origin='{origin}'; "
379 "it must be numeric with a unit specified ".format(
380 arg=arg,
381 origin=origin))
382
383 # we are going to offset back to unix / epoch time
384 try:
385 offset = Timestamp(origin)
386 except tslibs.OutOfBoundsDatetime:
387 raise tslibs.OutOfBoundsDatetime(
388 "origin {origin} is Out of Bounds".format(origin=origin))
389 except ValueError:
390 raise ValueError("origin {origin} cannot be converted "
391 "to a Timestamp".format(origin=origin))
392
393 if offset.tz is not None:
394 raise ValueError(
395 "origin offset {} must be tz-naive".format(offset))
396 offset -= Timestamp(0)
397
398 # convert the offset to the unit of the arg
399 # this should be lossless in terms of precision
400 offset = offset // tslibs.Timedelta(1, unit=unit)
401
402 # scalars & ndarray-like can handle the addition
403 if is_list_like(arg) and not isinstance(
404 arg, (ABCSeries, ABCIndexClass, np.ndarray)):
405 arg = np.asarray(arg)
406 arg = arg + offset
407 return arg
408
409
410 @deprecate_kwarg(old_arg_name='box', new_arg_name=None)
411 def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
412 utc=None, box=True, format=None, exact=True,
413 unit=None, infer_datetime_format=False, origin='unix',
414 cache=False):
415 """
416 Convert argument to datetime.
417
418 Parameters
419 ----------
420 arg : integer, float, string, datetime, list, tuple, 1-d array, Series
421
422 .. versionadded:: 0.18.1
423
424 or DataFrame/dict-like
425
426 errors : {'ignore', 'raise', 'coerce'}, default 'raise'
427
428 - If 'raise', then invalid parsing will raise an exception
429 - If 'coerce', then invalid parsing will be set as NaT
430 - If 'ignore', then invalid parsing will return the input
431 dayfirst : boolean, default False
432 Specify a date parse order if `arg` is str or its list-likes.
433 If True, parses dates with the day first, eg 10/11/12 is parsed as
434 2012-11-10.
435 Warning: dayfirst=True is not strict, but will prefer to parse
436 with day first (this is a known bug, based on dateutil behavior).
437 yearfirst : boolean, default False
438 Specify a date parse order if `arg` is str or its list-likes.
439
440 - If True parses dates with the year first, eg 10/11/12 is parsed as
441 2010-11-12.
442 - If both dayfirst and yearfirst are True, yearfirst takes precedence (same
443 as dateutil).
444
445 Warning: yearfirst=True is not strict, but will prefer to parse
446 with year first (this is a known bug, based on dateutil behavior).
447
448 .. versionadded:: 0.16.1
449
450 utc : boolean, default None
451 Return UTC DatetimeIndex if True (converting any tz-aware
452 datetime.datetime objects as well).
453 box : boolean, default True
454
455 - If True returns a DatetimeIndex or Index-like object
456 - If False returns ndarray of values.
457
458 .. deprecated:: 0.25.0
459 Use :meth:`Series.to_numpy` or :meth:`Timestamp.to_datetime64`
460 instead to get an ndarray of values or numpy.datetime64,
461 respectively.
462
463 format : string, default None
464 strftime to parse time, eg "%d/%m/%Y", note that "%f" will parse
465 all the way up to nanoseconds.
466 See strftime documentation for more information on choices:
467 https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
468 exact : boolean, True by default
469
470 - If True, require an exact format match.
471 - If False, allow the format to match anywhere in the target string.
472
473 unit : string, default 'ns'
474 unit of the arg (D,s,ms,us,ns) denote the unit, which is an
475 integer or float number. This will be based off the origin.
476 Example, with unit='ms' and origin='unix' (the default), this
477 would calculate the number of milliseconds to the unix epoch start.
478 infer_datetime_format : boolean, default False
479 If True and no `format` is given, attempt to infer the format of the
480 datetime strings, and if it can be inferred, switch to a faster
481 method of parsing them. In some cases this can increase the parsing
482 speed by ~5-10x.
483 origin : scalar, default is 'unix'
484 Define the reference date. The numeric values would be parsed as number
485 of units (defined by `unit`) since this reference date.
486
487 - If 'unix' (or POSIX) time; origin is set to 1970-01-01.
488 - If 'julian', unit must be 'D', and origin is set to beginning of
489 Julian Calendar. Julian day number 0 is assigned to the day starting
490 at noon on January 1, 4713 BC.
491 - If Timestamp convertible, origin is set to Timestamp identified by
492 origin.
493
494 .. versionadded:: 0.20.0
495 cache : boolean, default False
496 If True, use a cache of unique, converted dates to apply the datetime
497 conversion. May produce significant speed-up when parsing duplicate
498 date strings, especially ones with timezone offsets.
499
500 .. versionadded:: 0.23.0
501
502 Returns
503 -------
504 ret : datetime if parsing succeeded.
505 Return type depends on input:
506
507 - list-like: DatetimeIndex
508 - Series: Series of datetime64 dtype
509 - scalar: Timestamp
510
511 In case when it is not possible to return designated types (e.g. when
512 any element of input is before Timestamp.min or after Timestamp.max)
513 return will have datetime.datetime type (or corresponding
514 array/Series).
515
516 See Also
517 --------
518 DataFrame.astype : Cast argument to a specified dtype.
519 to_timedelta : Convert argument to timedelta.
520
521 Examples
522 --------
523 Assembling a datetime from multiple columns of a DataFrame. The keys can be
524 common abbreviations like ['year', 'month', 'day', 'minute', 'second',
525 'ms', 'us', 'ns']) or plurals of the same
526
527 >>> df = pd.DataFrame({'year': [2015, 2016],
528 ... 'month': [2, 3],
529 ... 'day': [4, 5]})
530 >>> pd.to_datetime(df)
531 0 2015-02-04
532 1 2016-03-05
533 dtype: datetime64[ns]
534
535 If a date does not meet the `timestamp limitations
536 <http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
537 #timeseries-timestamp-limits>`_, passing errors='ignore'
538 will return the original input instead of raising any exception.
539
540 Passing errors='coerce' will force an out-of-bounds date to NaT,
541 in addition to forcing non-dates (or non-parseable dates) to NaT.
542
543 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')
544 datetime.datetime(1300, 1, 1, 0, 0)
545 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')
546 NaT
547
548 Passing infer_datetime_format=True can often speed up parsing
549 if it is not an exact ISO8601 format, but a regular one.
550
551 >>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000'] * 1000)
552 >>> s.head()
553 0 3/11/2000
554 1 3/12/2000
555 2 3/13/2000
556 3 3/11/2000
557 4 3/12/2000
558 dtype: object
559
560 >>> %timeit pd.to_datetime(s,infer_datetime_format=True) # doctest: +SKIP
561 100 loops, best of 3: 10.4 ms per loop
562
563 >>> %timeit pd.to_datetime(s,infer_datetime_format=False) # doctest: +SKIP
564 1 loop, best of 3: 471 ms per loop
565
566 Using a unix epoch time
567
568 >>> pd.to_datetime(1490195805, unit='s')
569 Timestamp('2017-03-22 15:16:45')
570 >>> pd.to_datetime(1490195805433502912, unit='ns')
571 Timestamp('2017-03-22 15:16:45.433502912')
572
573 .. warning:: For float arg, precision rounding might happen. To prevent
574 unexpected behavior use a fixed-width exact type.
575
576 Using a non-unix epoch origin
577
578 >>> pd.to_datetime([1, 2, 3], unit='D',
579 ... origin=pd.Timestamp('1960-01-01'))
580 DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], \
581 dtype='datetime64[ns]', freq=None)
582 """
583 if arg is None:
584 return None
585
586 if origin != 'unix':
587 arg = _adjust_to_origin(arg, origin, unit)
588
589 tz = 'utc' if utc else None
590 convert_listlike = partial(_convert_listlike_datetimes, tz=tz, unit=unit,
591 dayfirst=dayfirst, yearfirst=yearfirst,
592 errors=errors, exact=exact,
593 infer_datetime_format=infer_datetime_format)
594
595 if isinstance(arg, Timestamp):
596 result = arg
597 if tz is not None:
598 if arg.tz is not None:
599 result = result.tz_convert(tz)
600 else:
601 result = result.tz_localize(tz)
602 elif isinstance(arg, ABCSeries):
603 cache_array = _maybe_cache(arg, format, cache, convert_listlike)
604 if not cache_array.empty:
605 result = arg.map(cache_array)
606 else:
607 values = convert_listlike(arg._values, True, format)
608 result = arg._constructor(values, index=arg.index, name=arg.name)
609 elif isinstance(arg, (ABCDataFrame, abc.MutableMapping)):
610 result = _assemble_from_unit_mappings(arg, errors, box, tz)
611 elif isinstance(arg, ABCIndexClass):
612 cache_array = _maybe_cache(arg, format, cache, convert_listlike)
613 if not cache_array.empty:
614 result = _convert_and_box_cache(arg, cache_array, box, errors,
615 name=arg.name)
616 else:
617 convert_listlike = partial(convert_listlike, name=arg.name)
618 result = convert_listlike(arg, box, format)
619 elif is_list_like(arg):
620 cache_array = _maybe_cache(arg, format, cache, convert_listlike)
621 if not cache_array.empty:
622 result = _convert_and_box_cache(arg, cache_array, box, errors)
623 else:
624 result = convert_listlike(arg, box, format)
625 else:
626 result = convert_listlike(np.array([arg]), box, format)[0]
627
628 return result
629
630
631 # mappings for assembling units
632 _unit_map = {'year': 'year',
633 'years': 'year',
634 'month': 'month',
635 'months': 'month',
636 'day': 'day',
637 'days': 'day',
638 'hour': 'h',
639 'hours': 'h',
640 'minute': 'm',
641 'minutes': 'm',
642 'second': 's',
643 'seconds': 's',
644 'ms': 'ms',
645 'millisecond': 'ms',
646 'milliseconds': 'ms',
647 'us': 'us',
648 'microsecond': 'us',
649 'microseconds': 'us',
650 'ns': 'ns',
651 'nanosecond': 'ns',
652 'nanoseconds': 'ns'
653 }
654
655
656 def _assemble_from_unit_mappings(arg, errors, box, tz):
657 """
658 assemble the unit specified fields from the arg (DataFrame)
659 Return a Series for actual parsing
660
661 Parameters
662 ----------
663 arg : DataFrame
664 errors : {'ignore', 'raise', 'coerce'}, default 'raise'
665
666 - If 'raise', then invalid parsing will raise an exception
667 - If 'coerce', then invalid parsing will be set as NaT
668 - If 'ignore', then invalid parsing will return the input
669 box : boolean
670
671 - If True, return a DatetimeIndex
672 - If False, return an array
673 tz : None or 'utc'
674
675 Returns
676 -------
677 Series
678 """
679 from pandas import to_timedelta, to_numeric, DataFrame
680 arg = DataFrame(arg)
681 if not arg.columns.is_unique:
682 raise ValueError("cannot assemble with duplicate keys")
683
684 # replace passed unit with _unit_map
685 def f(value):
686 if value in _unit_map:
687 return _unit_map[value]
688
689 # m is case significant
690 if value.lower() in _unit_map:
691 return _unit_map[value.lower()]
692
693 return value
694
695 unit = {k: f(k) for k in arg.keys()}
696 unit_rev = {v: k for k, v in unit.items()}
697
698 # we require at least Ymd
699 required = ['year', 'month', 'day']
700 req = sorted(list(set(required) - set(unit_rev.keys())))
701 if len(req):
702 raise ValueError("to assemble mappings requires at least that "
703 "[year, month, day] be specified: [{required}] "
704 "is missing".format(required=','.join(req)))
705
706 # keys we don't recognize
707 excess = sorted(list(set(unit_rev.keys()) - set(_unit_map.values())))
708 if len(excess):
709 raise ValueError("extra keys have been passed "
710 "to the datetime assemblage: "
711 "[{excess}]".format(excess=','.join(excess)))
712
713 def coerce(values):
714 # we allow coercion to if errors allows
715 values = to_numeric(values, errors=errors)
716
717 # prevent overflow in case of int8 or int16
718 if is_integer_dtype(values):
719 values = values.astype('int64', copy=False)
720 return values
721
722 values = (coerce(arg[unit_rev['year']]) * 10000 +
723 coerce(arg[unit_rev['month']]) * 100 +
724 coerce(arg[unit_rev['day']]))
725 try:
726 values = to_datetime(values, format='%Y%m%d', errors=errors, utc=tz)
727 except (TypeError, ValueError) as e:
728 raise ValueError("cannot assemble the "
729 "datetimes: {error}".format(error=e))
730
731 for u in ['h', 'm', 's', 'ms', 'us', 'ns']:
732 value = unit_rev.get(u)
733 if value is not None and value in arg:
734 try:
735 values += to_timedelta(coerce(arg[value]),
736 unit=u,
737 errors=errors)
738 except (TypeError, ValueError) as e:
739 raise ValueError("cannot assemble the datetimes [{value}]: "
740 "{error}".format(value=value, error=e))
741 if not box:
742 return values.values
743 return values
744
745
746 def _attempt_YYYYMMDD(arg, errors):
747 """
748 try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like,
749 arg is passed in as an object dtype, but could really be ints/strings
750 with nan-like/or floats (e.g. with nan)
751
752 Parameters
753 ----------
754 arg : passed value
755 errors : 'raise','ignore','coerce'
756 """
757
758 def calc(carg):
759 # calculate the actual result
760 carg = carg.astype(object)
761 parsed = parsing.try_parse_year_month_day(carg / 10000,
762 carg / 100 % 100,
763 carg % 100)
764 return tslib.array_to_datetime(parsed, errors=errors)[0]
765
766 def calc_with_mask(carg, mask):
767 result = np.empty(carg.shape, dtype='M8[ns]')
768 iresult = result.view('i8')
769 iresult[~mask] = tslibs.iNaT
770
771 masked_result = calc(carg[mask].astype(np.float64).astype(np.int64))
772 result[mask] = masked_result.astype('M8[ns]')
773 return result
774
775 # try intlike / strings that are ints
776 try:
777 return calc(arg.astype(np.int64))
778 except (ValueError, OverflowError):
779 pass
780
781 # a float with actual np.nan
782 try:
783 carg = arg.astype(np.float64)
784 return calc_with_mask(carg, notna(carg))
785 except (ValueError, OverflowError):
786 pass
787
788 # string with NaN-like
789 try:
790 mask = ~algorithms.isin(arg, list(tslib.nat_strings))
791 return calc_with_mask(arg, mask)
792 except (ValueError, OverflowError):
793 pass
794
795 return None
796
797
798 # Fixed time formats for time parsing
799 _time_formats = ["%H:%M", "%H%M", "%I:%M%p", "%I%M%p",
800 "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"]
801
802
803 def _guess_time_format_for_array(arr):
804 # Try to guess the format based on the first non-NaN element
805 non_nan_elements = notna(arr).nonzero()[0]
806 if len(non_nan_elements):
807 element = arr[non_nan_elements[0]]
808 for time_format in _time_formats:
809 try:
810 datetime.strptime(element, time_format)
811 return time_format
812 except ValueError:
813 pass
814
815 return None
816
817
818 def to_time(arg, format=None, infer_time_format=False, errors='raise'):
819 """
820 Parse time strings to time objects using fixed strptime formats ("%H:%M",
821 "%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p",
822 "%I%M%S%p")
823
824 Use infer_time_format if all the strings are in the same format to speed
825 up conversion.
826
827 Parameters
828 ----------
829 arg : string in time format, datetime.time, list, tuple, 1-d array, Series
830 format : str, default None
831 Format used to convert arg into a time object. If None, fixed formats
832 are used.
833 infer_time_format: bool, default False
834 Infer the time format based on the first non-NaN element. If all
835 strings are in the same format, this will speed up conversion.
836 errors : {'ignore', 'raise', 'coerce'}, default 'raise'
837 - If 'raise', then invalid parsing will raise an exception
838 - If 'coerce', then invalid parsing will be set as None
839 - If 'ignore', then invalid parsing will return the input
840
841 Returns
842 -------
843 datetime.time
844 """
845
846 def _convert_listlike(arg, format):
847
848 if isinstance(arg, (list, tuple)):
849 arg = np.array(arg, dtype='O')
850
851 elif getattr(arg, 'ndim', 1) > 1:
852 raise TypeError('arg must be a string, datetime, list, tuple, '
853 '1-d array, or Series')
854
855 arg = ensure_object(arg)
856
857 if infer_time_format and format is None:
858 format = _guess_time_format_for_array(arg)
859
860 times = []
861 if format is not None:
862 for element in arg:
863 try:
864 times.append(datetime.strptime(element, format).time())
865 except (ValueError, TypeError):
866 if errors == 'raise':
867 msg = ("Cannot convert {element} to a time with given "
868 "format {format}").format(element=element,
869 format=format)
870 raise ValueError(msg)
871 elif errors == 'ignore':
872 return arg
873 else:
874 times.append(None)
875 else:
876 formats = _time_formats[:]
877 format_found = False
878 for element in arg:
879 time_object = None
880 for time_format in formats:
881 try:
882 time_object = datetime.strptime(element,
883 time_format).time()
884 if not format_found:
885 # Put the found format in front
886 fmt = formats.pop(formats.index(time_format))
887 formats.insert(0, fmt)
888 format_found = True
889 break
890 except (ValueError, TypeError):
891 continue
892
893 if time_object is not None:
894 times.append(time_object)
895 elif errors == 'raise':
896 raise ValueError("Cannot convert arg {arg} to "
897 "a time".format(arg=arg))
898 elif errors == 'ignore':
899 return arg
900 else:
901 times.append(None)
902
903 return times
904
905 if arg is None:
906 return arg
907 elif isinstance(arg, time):
908 return arg
909 elif isinstance(arg, ABCSeries):
910 values = _convert_listlike(arg._values, format)
911 return arg._constructor(values, index=arg.index, name=arg.name)
912 elif isinstance(arg, ABCIndexClass):
913 return _convert_listlike(arg, format)
914 elif is_list_like(arg):
915 return _convert_listlike(arg, format)
916
917 return _convert_listlike(np.array([arg]), format)[0]
918
[end of pandas/core/tools/datetimes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
83fe8d78b6b086f3ceabe81cd420a3c7affe9aba
|
BUG: timedelta64 + Timestamp raises
```
>>> np.timedelta64(3600*10**9, 'ns') + pd.Timestamp.now()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc add cannot use operands with types dtype('<m8[ns]') and dtype('O')
```
I think we can fix this by defining `Timestamp.__array_priority__`
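
For illustration, a rough sketch of the mechanism that suggestion relies on (a hypothetical toy class, not pandas code; whether numpy defers this way can depend on the numpy version):

```python
import numpy as np

class Prioritized:
    # Hypothetical class for illustration only: a priority higher than
    # ndarray's default asks numpy to defer mixed binary operations to
    # this class's reflected methods instead of attempting the ufunc.
    __array_priority__ = 100

    def __radd__(self, other):
        return ("handled by __radd__", other)

# With the priority set, the timedelta64 scalar is expected to defer
# rather than raising the object-dtype ufunc error shown above.
print(np.timedelta64(3600 * 10**9, "ns") + Prioritized())
```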
|
2019-06-18T03:08:55Z
|
<patch>
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -603,6 +603,8 @@ Datetimelike
- Bug when comparing a :class:`PeriodIndex` against a zero-dimensional numpy array (:issue:`26689`)
- Bug in constructing a ``Series`` or ``DataFrame`` from a numpy ``datetime64`` array with a non-ns unit and out-of-bound timestamps generating rubbish data, which will now correctly raise an ``OutOfBoundsDatetime`` error (:issue:`26206`).
- Bug in :func:`date_range` with unnecessary ``OverflowError`` being raised for very large or very small dates (:issue:`26651`)
+- Bug where adding :class:`Timestamp` to a ``np.timedelta64`` object would raise instead of returning a :class:`Timestamp` (:issue:`24775`)
+- Bug where comparing a zero-dimensional numpy array containing a ``np.datetime64`` object to a :class:`Timestamp` would incorrect raise ``TypeError`` (:issue:`26916`)
Timedelta
^^^^^^^^^
diff --git a/pandas/_libs/tslibs/c_timestamp.pyx b/pandas/_libs/tslibs/c_timestamp.pyx
--- a/pandas/_libs/tslibs/c_timestamp.pyx
+++ b/pandas/_libs/tslibs/c_timestamp.pyx
@@ -55,6 +55,9 @@ def maybe_integer_op_deprecated(obj):
cdef class _Timestamp(datetime):
+ # higher than np.ndarray and np.matrix
+ __array_priority__ = 100
+
def __hash__(_Timestamp self):
if self.nanosecond:
return hash(self.value)
@@ -85,6 +88,15 @@ cdef class _Timestamp(datetime):
if ndim == 0:
if is_datetime64_object(other):
other = self.__class__(other)
+ elif is_array(other):
+ # zero-dim array, occurs if try comparison with
+ # datetime64 scalar on the left hand side
+ # Unfortunately, for datetime64 values, other.item()
+ # incorrectly returns an integer, so we need to use
+ # the numpy C api to extract it.
+ other = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other),
+ other)
+ other = self.__class__(other)
else:
return NotImplemented
elif is_array(other):
</patch>
|
[]
|
[]
| ||||
numpy__numpy-7016
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
inconsistency in np.isclose
The documentation for `np.isclose` says:
> Returns a boolean array of where `a` and `b` are equal within the given tolerance. If both `a` and `b` are scalars, returns a single boolean value.
This is true for comparing two finite scalars, a single boolean value is returned:
```
>>> np.isclose(0, 1)
False
```
However, a boolean array is returned when one or more of the scalars is non-finite:
```
>>> np.isclose(0, np.inf)
array([False], dtype=bool)
```
I would expect (from the documentation) the above to return a single boolean value, not an array. I note that both values in the last example are scalars:
```
>>> np.isscalar(0)
True
>>> np.isscalar(np.inf)
True
```
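
A minimal sketch of the behaviour I would expect (a hypothetical wrapper written only for illustration, not part of numpy):

```python
import numpy as np

def isclose_scalar_consistent(a, b, **kwargs):
    # Hypothetical wrapper: collapse the result to a plain bool when both
    # inputs are scalars, matching what the documentation describes.
    result = np.isclose(a, b, **kwargs)
    if np.isscalar(a) and np.isscalar(b):
        return bool(result)
    return result

print(isclose_scalar_consistent(0, 1))       # False, a plain bool
print(isclose_scalar_consistent(0, np.inf))  # False, a plain bool
```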
</issue>
<code>
[start of README.md]
1 [](https://travis-ci.org/numpy/numpy)
2
3 NumPy is the fundamental package needed for scientific computing with Python.
4 This package contains:
5
6 * a powerful N-dimensional array object
7 * sophisticated (broadcasting) functions
8 * tools for integrating C/C++ and Fortran code
9 * useful linear algebra, Fourier transform, and random number capabilities.
10
11 It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
12
13 More information can be found at the website:
14
15 * http://www.numpy.org
16
17 After installation, tests can be run (if ``nose`` is installed) with:
18
19 python -c 'import numpy; numpy.test()'
20
21 The most current development version is always available from our
22 git repository:
23
24 * http://github.com/numpy/numpy
25
[end of README.md]
[start of numpy/doc/indexing.py]
1 """==============
2 Array indexing
3 ==============
4
5 Array indexing refers to any use of the square brackets ([]) to index
6 array values. There are many options to indexing, which give numpy
7 indexing great power, but with power comes some complexity and the
8 potential for confusion. This section is just an overview of the
9 various options and issues related to indexing. Aside from single
10 element indexing, the details on most of these options are to be
11 found in related sections.
12
13 Assignment vs referencing
14 =========================
15
16 Most of the following examples show the use of indexing when
17 referencing data in an array. The examples work just as well
18 when assigning to an array. See the section at the end for
19 specific examples and explanations on how assignments work.
20
21 Single element indexing
22 =======================
23
24 Single element indexing for a 1-D array is what one expects. It works
25 exactly like that for other standard Python sequences. It is 0-based,
26 and accepts negative indices for indexing from the end of the array. ::
27
28 >>> x = np.arange(10)
29 >>> x[2]
30 2
31 >>> x[-2]
32 8
33
34 Unlike lists and tuples, numpy arrays support multidimensional indexing
35 for multidimensional arrays. That means that it is not necessary to
36 separate each dimension's index into its own set of square brackets. ::
37
38 >>> x.shape = (2,5) # now x is 2-dimensional
39 >>> x[1,3]
40 8
41 >>> x[1,-1]
42 9
43
44 Note that if one indexes a multidimensional array with fewer indices
45 than dimensions, one gets a subdimensional array. For example: ::
46
47 >>> x[0]
48 array([0, 1, 2, 3, 4])
49
50 That is, each index specified selects the array corresponding to the
51 rest of the dimensions selected. In the above example, choosing 0
52 means that the remaining dimension of length 5 is being left unspecified,
53 and that what is returned is an array of that dimensionality and size.
54 It must be noted that the returned array is not a copy of the original,
55 but points to the same values in memory as does the original array.
56 In this case, the 1-D array at the first position (0) is returned.
57 So using a single index on the returned array, results in a single
58 element being returned. That is: ::
59
60 >>> x[0][2]
61 2
62
63 So note that ``x[0,2] = x[0][2]`` though the second case is more
64 inefficient as a new temporary array is created after the first index
65 that is subsequently indexed by 2.
66
67 Note to those used to IDL or Fortran memory order as it relates to
68 indexing. Numpy uses C-order indexing. That means that the last
69 index usually represents the most rapidly changing memory location,
70 unlike Fortran or IDL, where the first index represents the most
71 rapidly changing location in memory. This difference represents a
72 great potential for confusion.
73
74 Other indexing options
75 ======================
76
77 It is possible to slice and stride arrays to extract arrays of the
78 same number of dimensions, but of different sizes than the original.
79 The slicing and striding works exactly the same way it does for lists
80 and tuples except that they can be applied to multiple dimensions as
81 well. A few examples illustrate this best: ::
82
83 >>> x = np.arange(10)
84 >>> x[2:5]
85 array([2, 3, 4])
86 >>> x[:-7]
87 array([0, 1, 2])
88 >>> x[1:7:2]
89 array([1, 3, 5])
90 >>> y = np.arange(35).reshape(5,7)
91 >>> y[1:5:2,::3]
92 array([[ 7, 10, 13],
93 [21, 24, 27]])
94
95 Note that slices of arrays do not copy the internal array data but
96 only produce new views of the original data.
97
98 It is possible to index arrays with other arrays for the purposes of
99 selecting lists of values out of arrays into new arrays. There are
100 two different ways of accomplishing this. One uses one or more arrays
101 of index values. The other involves giving a boolean array of the proper
102 shape to indicate the values to be selected. Index arrays are a very
103 powerful tool that allow one to avoid looping over individual elements in
104 arrays and thus greatly improve performance.
105
106 It is possible to use special features to effectively increase the
107 number of dimensions in an array through indexing so the resulting
108 array acquires the shape needed for use in an expression or with a
109 specific function.
110
111 Index arrays
112 ============
113
114 Numpy arrays may be indexed with other arrays (or any other sequence-
115 like object that can be converted to an array, such as lists, with the
116 exception of tuples; see the end of this document for why this is). The
117 use of index arrays ranges from simple, straightforward cases to
118 complex, hard-to-understand cases. For all cases of index arrays, what
119 is returned is a copy of the original data, not a view as one gets for
120 slices.
121
122 Index arrays must be of integer type. Each value in the array indicates
123 which value in the array to use in place of the index. To illustrate: ::
124
125 >>> x = np.arange(10,1,-1)
126 >>> x
127 array([10, 9, 8, 7, 6, 5, 4, 3, 2])
128 >>> x[np.array([3, 3, 1, 8])]
129 array([7, 7, 9, 2])
130
131
132 The index array consisting of the values 3, 3, 1 and 8 correspondingly
133 creates an array of length 4 (same as the index array) where each index
134 is replaced by the value the index array has in the array being indexed.
135
136 Negative values are permitted and work as they do with single indices
137 or slices: ::
138
139 >>> x[np.array([3,3,-3,8])]
140 array([7, 7, 4, 2])
141
142 It is an error to have index values out of bounds: ::
143
144 >>> x[np.array([3, 3, 20, 8])]
145 <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
146
147 Generally speaking, what is returned when index arrays are used is
148 an array with the same shape as the index array, but with the type
149 and values of the array being indexed. As an example, we can use a
150 multidimensional index array instead: ::
151
152 >>> x[np.array([[1,1],[2,3]])]
153 array([[9, 9],
154 [8, 7]])
155
156 Indexing Multi-dimensional arrays
157 =================================
158
159 Things become more complex when multidimensional arrays are indexed,
160 particularly with multidimensional index arrays. These tend to be
161 more unusual uses, but they are permitted, and they are useful for some
162 problems. We'll start with the simplest multidimensional case (using
163 the array y from the previous examples): ::
164
165 >>> y[np.array([0,2,4]), np.array([0,1,2])]
166 array([ 0, 15, 30])
167
168 In this case, if the index arrays have a matching shape, and there is
169 an index array for each dimension of the array being indexed, the
170 resultant array has the same shape as the index arrays, and the values
171 correspond to the index set for each position in the index arrays. In
172 this example, the first index value is 0 for both index arrays, and
173 thus the first value of the resultant array is y[0,0]. The next value
174 is y[2,1], and the last is y[4,2].
175
176 If the index arrays do not have the same shape, there is an attempt to
177 broadcast them to the same shape. If they cannot be broadcast to the
178 same shape, an exception is raised: ::
179
180 >>> y[np.array([0,2,4]), np.array([0,1])]
181 <type 'exceptions.ValueError'>: shape mismatch: objects cannot be
182 broadcast to a single shape
183
184 The broadcasting mechanism permits index arrays to be combined with
185 scalars for other indices. The effect is that the scalar value is used
186 for all the corresponding values of the index arrays: ::
187
188 >>> y[np.array([0,2,4]), 1]
189 array([ 1, 15, 29])
190
191 Jumping to the next level of complexity, it is possible to only
192 partially index an array with index arrays. It takes a bit of thought
193 to understand what happens in such cases. For example if we just use
194 one index array with y: ::
195
196 >>> y[np.array([0,2,4])]
197 array([[ 0, 1, 2, 3, 4, 5, 6],
198 [14, 15, 16, 17, 18, 19, 20],
199 [28, 29, 30, 31, 32, 33, 34]])
200
201 What results is the construction of a new array where each value of
202 the index array selects one row from the array being indexed and the
203 resultant array has the resulting shape (number of index elements,
204 size of row).
205
206 An example of where this may be useful is for a color lookup table
207 where we want to map the values of an image into RGB triples for
208 display. The lookup table could have a shape (nlookup, 3). Indexing
209 such an array with an image with shape (ny, nx) with dtype=np.uint8
210 (or any integer type so long as values are with the bounds of the
211 lookup table) will result in an array of shape (ny, nx, 3) where a
212 triple of RGB values is associated with each pixel location.
213
214 In general, the shape of the resultant array will be the concatenation
215 of the shape of the index array (or the shape that all the index arrays
216 were broadcast to) with the shape of any unused dimensions (those not
217 indexed) in the array being indexed.
218
219 Boolean or "mask" index arrays
220 ==============================
221
222 Boolean arrays used as indices are treated in a different manner
223 entirely than index arrays. Boolean arrays must be of the same shape
224 as the initial dimensions of the array being indexed. In the
225 most straightforward case, the boolean array has the same shape: ::
226
227 >>> b = y>20
228 >>> y[b]
229 array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
230
231 Unlike in the case of integer index arrays, in the boolean case, the
232 result is a 1-D array containing all the elements in the indexed array
233 corresponding to all the true elements in the boolean array. The
234 elements in the indexed array are always iterated and returned in
235 :term:`row-major` (C-style) order. The result is also identical to
236 ``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy
237 of the data, not a view as one gets with slices.
238
239 The result will be multidimensional if y has more dimensions than b.
240 For example: ::
241
242 >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
243 array([False, False, False, True, True], dtype=bool)
244 >>> y[b[:,5]]
245 array([[21, 22, 23, 24, 25, 26, 27],
246 [28, 29, 30, 31, 32, 33, 34]])
247
248 Here the 4th and 5th rows are selected from the indexed array and
249 combined to make a 2-D array.
250
251 In general, when the boolean array has fewer dimensions than the array
252 being indexed, this is equivalent to y[b, ...], which means
253 y is indexed by b followed by as many : as are needed to fill
254 out the rank of y.
255 Thus the shape of the result is one dimension containing the number
256 of True elements of the boolean array, followed by the remaining
257 dimensions of the array being indexed.
258
259 For example, using a 2-D boolean array of shape (2,3)
260 with four True elements to select rows from a 3-D array of shape
261 (2,3,5) results in a 2-D result of shape (4,5): ::
262
263 >>> x = np.arange(30).reshape(2,3,5)
264 >>> x
265 array([[[ 0, 1, 2, 3, 4],
266 [ 5, 6, 7, 8, 9],
267 [10, 11, 12, 13, 14]],
268 [[15, 16, 17, 18, 19],
269 [20, 21, 22, 23, 24],
270 [25, 26, 27, 28, 29]]])
271 >>> b = np.array([[True, True, False], [False, True, True]])
272 >>> x[b]
273 array([[ 0, 1, 2, 3, 4],
274 [ 5, 6, 7, 8, 9],
275 [20, 21, 22, 23, 24],
276 [25, 26, 27, 28, 29]])
277
278 For further details, consult the numpy reference documentation on array indexing.
279
280 Combining index arrays with slices
281 ==================================
282
283 Index arrays may be combined with slices. For example: ::
284
285 >>> y[np.array([0,2,4]),1:3]
286 array([[ 1, 2],
287 [15, 16],
288 [29, 30]])
289
290 In effect, the slice is converted to an index array
291 np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
292 to produce a resultant array of shape (3,2).
293
294 Likewise, slicing can be combined with broadcasted boolean indices: ::
295
296 >>> y[b[:,5],1:3]
297 array([[22, 23],
298 [29, 30]])
299
300 Structural indexing tools
301 =========================
302
303 To facilitate easy matching of array shapes with expressions and in
304 assignments, the np.newaxis object can be used within array indices
305 to add new dimensions with a size of 1. For example: ::
306
307 >>> y.shape
308 (5, 7)
309 >>> y[:,np.newaxis,:].shape
310 (5, 1, 7)
311
312 Note that there are no new elements in the array, just that the
313 dimensionality is increased. This can be handy to combine two
314 arrays in a way that otherwise would require explicitly reshaping
315 operations. For example: ::
316
317 >>> x = np.arange(5)
318 >>> x[:,np.newaxis] + x[np.newaxis,:]
319 array([[0, 1, 2, 3, 4],
320 [1, 2, 3, 4, 5],
321 [2, 3, 4, 5, 6],
322 [3, 4, 5, 6, 7],
323 [4, 5, 6, 7, 8]])
324
325 The ellipsis syntax may be used to indicate selecting in full any
326 remaining unspecified dimensions. For example: ::
327
328 >>> z = np.arange(81).reshape(3,3,3,3)
329 >>> z[1,...,2]
330 array([[29, 32, 35],
331 [38, 41, 44],
332 [47, 50, 53]])
333
334 This is equivalent to: ::
335
336 >>> z[1,:,:,2]
337 array([[29, 32, 35],
338 [38, 41, 44],
339 [47, 50, 53]])
340
341 Assigning values to indexed arrays
342 ==================================
343
344 As mentioned, one can select a subset of an array to assign to using
345 a single index, slices, and index and mask arrays. The value being
346 assigned to the indexed array must be shape consistent (the same shape
347 or broadcastable to the shape the index produces). For example, it is
348 permitted to assign a constant to a slice: ::
349
350 >>> x = np.arange(10)
351 >>> x[2:7] = 1
352
353 or an array of the right size: ::
354
355 >>> x[2:7] = np.arange(5)
356
357 Note that assignments may result in changes if assigning
358 higher types to lower types (like floats to ints) or even
359 exceptions (assigning complex to floats or ints): ::
360
361 >>> x[1] = 1.2
362 >>> x[1]
363 1
364 >>> x[1] = 1.2j
365 <type 'exceptions.TypeError'>: can't convert complex to long; use
366 long(abs(z))
367
368
369 Unlike some of the references (such as array and mask indices)
370 assignments are always made to the original data in the array
371 (indeed, nothing else would make sense!). Note though, that some
372 actions may not work as one may naively expect. This particular
373 example is often surprising to people: ::
374
375 >>> x = np.arange(0, 50, 10)
376 >>> x
377 array([ 0, 10, 20, 30, 40])
378 >>> x[np.array([1, 1, 3, 1])] += 1
379 >>> x
380 array([ 0, 11, 20, 31, 40])
381
382 Where people expect that the 1st location will be incremented by 3.
383 In fact, it will only be incremented by 1. The reason is because
384 a new array is extracted from the original (as a temporary) containing
385 the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
386 and then the temporary is assigned back to the original array. Thus
387 the value of the array at x[1]+1 is assigned to x[1] three times,
388 rather than being incremented 3 times.
389
390 Dealing with variable numbers of indices within programs
391 ========================================================
392
393 The index syntax is very powerful but limiting when dealing with
394 a variable number of indices. For example, if you want to write
395 a function that can handle arguments with various numbers of
396 dimensions without having to write special case code for each
397 number of possible dimensions, how can that be done? If one
398 supplies to the index a tuple, the tuple will be interpreted
399 as a list of indices. For example (using the previous definition
400 for the array z): ::
401
402 >>> indices = (1,1,1,1)
403 >>> z[indices]
404 40
405
406 So one can use code to construct tuples of any number of indices
407 and then use these within an index.
408
409 Slices can be specified within programs by using the slice() function
410 in Python. For example: ::
411
412 >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
413 >>> z[indices]
414 array([39, 40])
415
416 Likewise, ellipsis can be specified by code by using the Ellipsis
417 object: ::
418
419 >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
420 >>> z[indices]
421 array([[28, 31, 34],
422 [37, 40, 43],
423 [46, 49, 52]])
424
425 For this reason it is possible to use the output from the np.where()
426 function directly as an index since it always returns a tuple of index
427 arrays.
428
429 Because of the special treatment of tuples, they are not automatically
430 converted to an array as a list would be. As an example: ::
431
432 >>> z[[1,1,1,1]] # produces a large array
433 array([[[[27, 28, 29],
434 [30, 31, 32], ...
435 >>> z[(1,1,1,1)] # returns a single value
436 40
437
438 """
439 from __future__ import division, absolute_import, print_function
440
[end of numpy/doc/indexing.py]
[start of numpy/lib/type_check.py]
1 """Automatically adapted for numpy Sep 19, 2005 by convertcode.py
2
3 """
4 from __future__ import division, absolute_import, print_function
5
6 __all__ = ['iscomplexobj', 'isrealobj', 'imag', 'iscomplex',
7 'isreal', 'nan_to_num', 'real', 'real_if_close',
8 'typename', 'asfarray', 'mintypecode', 'asscalar',
9 'common_type']
10
11 import numpy.core.numeric as _nx
12 from numpy.core.numeric import asarray, asanyarray, array, isnan, \
13 obj2sctype, zeros
14 from .ufunclike import isneginf, isposinf
15
16 _typecodes_by_elsize = 'GDFgdfQqLlIiHhBb?'
17
18 def mintypecode(typechars,typeset='GDFgdf',default='d'):
19 """
20 Return the character for the minimum-size type to which given types can
21 be safely cast.
22
23 The returned type character must represent the smallest size dtype such
24 that an array of the returned type can handle the data from an array of
25 all types in `typechars` (or if `typechars` is an array, then its
26 dtype.char).
27
28 Parameters
29 ----------
30 typechars : list of str or array_like
31 If a list of strings, each string should represent a dtype.
32 If array_like, the character representation of the array dtype is used.
33 typeset : str or list of str, optional
34 The set of characters that the returned character is chosen from.
35 The default set is 'GDFgdf'.
36 default : str, optional
37 The default character, this is returned if none of the characters in
38 `typechars` matches a character in `typeset`.
39
40 Returns
41 -------
42 typechar : str
43 The character representing the minimum-size type that was found.
44
45 See Also
46 --------
47 dtype, sctype2char, maximum_sctype
48
49 Examples
50 --------
51 >>> np.mintypecode(['d', 'f', 'S'])
52 'd'
53 >>> x = np.array([1.1, 2-3.j])
54 >>> np.mintypecode(x)
55 'D'
56
57 >>> np.mintypecode('abceh', default='G')
58 'G'
59
60 """
61 typecodes = [(isinstance(t, str) and t) or asarray(t).dtype.char
62 for t in typechars]
63 intersection = [t for t in typecodes if t in typeset]
64 if not intersection:
65 return default
66 if 'F' in intersection and 'd' in intersection:
67 return 'D'
68 l = []
69 for t in intersection:
70 i = _typecodes_by_elsize.index(t)
71 l.append((i, t))
72 l.sort()
73 return l[0][1]
74
75 def asfarray(a, dtype=_nx.float_):
76 """
77 Return an array converted to a float type.
78
79 Parameters
80 ----------
81 a : array_like
82 The input array.
83 dtype : str or dtype object, optional
84 Float type code to coerce input array `a`. If `dtype` is one of the
85 'int' dtypes, it is replaced with float64.
86
87 Returns
88 -------
89 out : ndarray
90 The input `a` as a float ndarray.
91
92 Examples
93 --------
94 >>> np.asfarray([2, 3])
95 array([ 2., 3.])
96 >>> np.asfarray([2, 3], dtype='float')
97 array([ 2., 3.])
98 >>> np.asfarray([2, 3], dtype='int8')
99 array([ 2., 3.])
100
101 """
102 dtype = _nx.obj2sctype(dtype)
103 if not issubclass(dtype, _nx.inexact):
104 dtype = _nx.float_
105 return asarray(a, dtype=dtype)
106
107 def real(val):
108 """
109 Return the real part of the elements of the array.
110
111 Parameters
112 ----------
113 val : array_like
114 Input array.
115
116 Returns
117 -------
118 out : ndarray
119 Output array. If `val` is real, the type of `val` is used for the
120 output. If `val` has complex elements, the returned type is float.
121
122 See Also
123 --------
124 real_if_close, imag, angle
125
126 Examples
127 --------
128 >>> a = np.array([1+2j, 3+4j, 5+6j])
129 >>> a.real
130 array([ 1., 3., 5.])
131 >>> a.real = 9
132 >>> a
133 array([ 9.+2.j, 9.+4.j, 9.+6.j])
134 >>> a.real = np.array([9, 8, 7])
135 >>> a
136 array([ 9.+2.j, 8.+4.j, 7.+6.j])
137
138 """
139 return asanyarray(val).real
140
141 def imag(val):
142 """
143 Return the imaginary part of the elements of the array.
144
145 Parameters
146 ----------
147 val : array_like
148 Input array.
149
150 Returns
151 -------
152 out : ndarray
153 Output array. If `val` is real, the type of `val` is used for the
154 output. If `val` has complex elements, the returned type is float.
155
156 See Also
157 --------
158 real, angle, real_if_close
159
160 Examples
161 --------
162 >>> a = np.array([1+2j, 3+4j, 5+6j])
163 >>> a.imag
164 array([ 2., 4., 6.])
165 >>> a.imag = np.array([8, 10, 12])
166 >>> a
167 array([ 1. +8.j, 3.+10.j, 5.+12.j])
168
169 """
170 return asanyarray(val).imag
171
172 def iscomplex(x):
173 """
174 Returns a bool array, which is True where the input element is complex.
175
176 What is tested is whether the input has a non-zero imaginary part, not if
177 the input type is complex.
178
179 Parameters
180 ----------
181 x : array_like
182 Input array.
183
184 Returns
185 -------
186 out : ndarray of bools
187 Output array.
188
189 See Also
190 --------
191 isreal
192 iscomplexobj : Return True if x is a complex type or an array of complex
193 numbers.
194
195 Examples
196 --------
197 >>> np.iscomplex([1+1j, 1+0j, 4.5, 3, 2, 2j])
198 array([ True, False, False, False, False, True], dtype=bool)
199
200 """
201 ax = asanyarray(x)
202 if issubclass(ax.dtype.type, _nx.complexfloating):
203 return ax.imag != 0
204 res = zeros(ax.shape, bool)
205 return +res # convert to array-scalar if needed
206
207 def isreal(x):
208 """
209 Returns a bool array, which is True where the input element is real.
210
211 If element has complex type with zero complex part, the return value
212 for that element is True.
213
214 Parameters
215 ----------
216 x : array_like
217 Input array.
218
219 Returns
220 -------
221 out : ndarray, bool
222 Boolean array of same shape as `x`.
223
224 See Also
225 --------
226 iscomplex
227 isrealobj : Return True if x is not a complex type.
228
229 Examples
230 --------
231 >>> np.isreal([1+1j, 1+0j, 4.5, 3, 2, 2j])
232 array([False, True, True, True, True, False], dtype=bool)
233
234 """
235 return imag(x) == 0
236
237 def iscomplexobj(x):
238 """
239 Check for a complex type or an array of complex numbers.
240
241 The type of the input is checked, not the value. Even if the input
242 has an imaginary part equal to zero, `iscomplexobj` evaluates to True.
243
244 Parameters
245 ----------
246 x : any
247 The input can be of any type and shape.
248
249 Returns
250 -------
251 iscomplexobj : bool
252 The return value, True if `x` is of a complex type or has at least
253 one complex element.
254
255 See Also
256 --------
257 isrealobj, iscomplex
258
259 Examples
260 --------
261 >>> np.iscomplexobj(1)
262 False
263 >>> np.iscomplexobj(1+0j)
264 True
265 >>> np.iscomplexobj([3, 1+0j, True])
266 True
267
268 """
269 return issubclass(asarray(x).dtype.type, _nx.complexfloating)
270
271 def isrealobj(x):
272 """
272 Return True if x is not a complex type or an array of complex numbers.
274
275 The type of the input is checked, not the value. So even if the input
276 has an imaginary part equal to zero, `isrealobj` evaluates to False
277 if the data type is complex.
278
279 Parameters
280 ----------
281 x : any
282 The input can be of any type and shape.
283
284 Returns
285 -------
286 y : bool
287 The return value, False if `x` is of a complex type.
288
289 See Also
290 --------
291 iscomplexobj, isreal
292
293 Examples
294 --------
295 >>> np.isrealobj(1)
296 True
297 >>> np.isrealobj(1+0j)
298 False
299 >>> np.isrealobj([3, 1+0j, True])
300 False
301
302 """
303 return not issubclass(asarray(x).dtype.type, _nx.complexfloating)
304
305 #-----------------------------------------------------------------------------
306
307 def _getmaxmin(t):
308 from numpy.core import getlimits
309 f = getlimits.finfo(t)
310 return f.max, f.min
311
312 def nan_to_num(x):
313 """
314 Replace nan with zero and inf with finite numbers.
315
316 Returns an array or scalar replacing Not a Number (NaN) with zero,
317 (positive) infinity with a very large number and negative infinity
318 with a very small (or negative) number.
319
320 Parameters
321 ----------
322 x : array_like
323 Input data.
324
325 Returns
326 -------
327 out : ndarray
328 New Array with the same shape as `x` and dtype of the element in
329 `x` with the greatest precision. If `x` is inexact, then NaN is
330 replaced by zero, and infinity (-infinity) is replaced by the
331 largest (smallest or most negative) floating point value that fits
332 in the output dtype. If `x` is not inexact, then a copy of `x` is
333 returned.
334
335 See Also
336 --------
337 isinf : Shows which elements are positive or negative infinity.
338 isneginf : Shows which elements are negative infinity.
339 isposinf : Shows which elements are positive infinity.
340 isnan : Shows which elements are Not a Number (NaN).
341 isfinite : Shows which elements are finite (not NaN, not infinity)
342
343 Notes
344 -----
345 Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic
346 (IEEE 754). This means that Not a Number is not equivalent to infinity.
347
348
349 Examples
350 --------
351 >>> np.set_printoptions(precision=8)
352 >>> x = np.array([np.inf, -np.inf, np.nan, -128, 128])
353 >>> np.nan_to_num(x)
354 array([ 1.79769313e+308, -1.79769313e+308, 0.00000000e+000,
355 -1.28000000e+002, 1.28000000e+002])
356
357 """
358 x = _nx.array(x, subok=True)
359 xtype = x.dtype.type
360 if not issubclass(xtype, _nx.inexact):
361 return x
362
363 iscomplex = issubclass(xtype, _nx.complexfloating)
364 isscalar = (x.ndim == 0)
365
366 x = x[None] if isscalar else x
367 dest = (x.real, x.imag) if iscomplex else (x,)
368 maxf, minf = _getmaxmin(x.real.dtype)
369 for d in dest:
370 _nx.copyto(d, 0.0, where=isnan(d))
371 _nx.copyto(d, maxf, where=isposinf(d))
372 _nx.copyto(d, minf, where=isneginf(d))
373 return x[0] if isscalar else x
374
375 #-----------------------------------------------------------------------------
376
377 def real_if_close(a,tol=100):
378 """
379 If the input is complex, return a real array if the complex parts are close to zero.
380
381 "Close to zero" is defined as `tol` * (machine epsilon of the type for
382 `a`).
383
384 Parameters
385 ----------
386 a : array_like
387 Input array.
388 tol : float
389 Tolerance in machine epsilons for the complex part of the elements
390 in the array.
391
392 Returns
393 -------
394 out : ndarray
395 If `a` is real, the type of `a` is used for the output. If `a`
396 has complex elements, the returned type is float.
397
398 See Also
399 --------
400 real, imag, angle
401
402 Notes
403 -----
404 Machine epsilon varies from machine to machine and between data types
405 but Python floats on most platforms have a machine epsilon equal to
406 2.2204460492503131e-16. You can use 'np.finfo(np.float).eps' to print
407 out the machine epsilon for floats.
408
409 Examples
410 --------
411 >>> np.finfo(np.float).eps
412 2.2204460492503131e-16
413
414 >>> np.real_if_close([2.1 + 4e-14j], tol=1000)
415 array([ 2.1])
416 >>> np.real_if_close([2.1 + 4e-13j], tol=1000)
417 array([ 2.1 +4.00000000e-13j])
418
419 """
420 a = asanyarray(a)
421 if not issubclass(a.dtype.type, _nx.complexfloating):
422 return a
423 if tol > 1:
424 from numpy.core import getlimits
425 f = getlimits.finfo(a.dtype.type)
426 tol = f.eps * tol
427 if _nx.allclose(a.imag, 0, atol=tol):
428 a = a.real
429 return a
430
431
432 def asscalar(a):
433 """
434 Convert an array of size 1 to its scalar equivalent.
435
436 Parameters
437 ----------
438 a : ndarray
439 Input array of size 1.
440
441 Returns
442 -------
443 out : scalar
444 Scalar representation of `a`. The output data type is the same type
445 returned by the input's `item` method.
446
447 Examples
448 --------
449 >>> np.asscalar(np.array([24]))
450 24
451
452 """
453 return a.item()
454
455 #-----------------------------------------------------------------------------
456
457 _namefromtype = {'S1': 'character',
458 '?': 'bool',
459 'b': 'signed char',
460 'B': 'unsigned char',
461 'h': 'short',
462 'H': 'unsigned short',
463 'i': 'integer',
464 'I': 'unsigned integer',
465 'l': 'long integer',
466 'L': 'unsigned long integer',
467 'q': 'long long integer',
468 'Q': 'unsigned long long integer',
469 'f': 'single precision',
470 'd': 'double precision',
471 'g': 'long precision',
472 'F': 'complex single precision',
473 'D': 'complex double precision',
474 'G': 'complex long double precision',
475 'S': 'string',
476 'U': 'unicode',
477 'V': 'void',
478 'O': 'object'
479 }
480
481 def typename(char):
482 """
483 Return a description for the given data type code.
484
485 Parameters
486 ----------
487 char : str
488 Data type code.
489
490 Returns
491 -------
492 out : str
493 Description of the input data type code.
494
495 See Also
496 --------
497 dtype, typecodes
498
499 Examples
500 --------
501 >>> typechars = ['S1', '?', 'B', 'D', 'G', 'F', 'I', 'H', 'L', 'O', 'Q',
502 ... 'S', 'U', 'V', 'b', 'd', 'g', 'f', 'i', 'h', 'l', 'q']
503 >>> for typechar in typechars:
504 ... print(typechar, ' : ', np.typename(typechar))
505 ...
506 S1 : character
507 ? : bool
508 B : unsigned char
509 D : complex double precision
510 G : complex long double precision
511 F : complex single precision
512 I : unsigned integer
513 H : unsigned short
514 L : unsigned long integer
515 O : object
516 Q : unsigned long long integer
517 S : string
518 U : unicode
519 V : void
520 b : signed char
521 d : double precision
522 g : long precision
523 f : single precision
524 i : integer
525 h : short
526 l : long integer
527 q : long long integer
528
529 """
530 return _namefromtype[char]
531
532 #-----------------------------------------------------------------------------
533
534 #determine the "minimum common type" for a group of arrays.
535 array_type = [[_nx.half, _nx.single, _nx.double, _nx.longdouble],
536 [None, _nx.csingle, _nx.cdouble, _nx.clongdouble]]
537 array_precision = {_nx.half: 0,
538 _nx.single: 1,
539 _nx.double: 2,
540 _nx.longdouble: 3,
541 _nx.csingle: 1,
542 _nx.cdouble: 2,
543 _nx.clongdouble: 3}
544 def common_type(*arrays):
545 """
546 Return a scalar type which is common to the input arrays.
547
548 The return type will always be an inexact (i.e. floating point) scalar
549 type, even if all the arrays are integer arrays. If one of the inputs is
550 an integer array, the minimum precision type that is returned is a
551 64-bit floating point dtype.
552
553 All input arrays can be safely cast to the returned dtype without loss
554 of information.
555
556 Parameters
557 ----------
558 array1, array2, ... : ndarrays
559 Input arrays.
560
561 Returns
562 -------
563 out : data type code
564 Data type code.
565
566 See Also
567 --------
568 dtype, mintypecode
569
570 Examples
571 --------
572 >>> np.common_type(np.arange(2, dtype=np.float32))
573 <type 'numpy.float32'>
574 >>> np.common_type(np.arange(2, dtype=np.float32), np.arange(2))
575 <type 'numpy.float64'>
576 >>> np.common_type(np.arange(4), np.array([45, 6.j]), np.array([45.0]))
577 <type 'numpy.complex128'>
578
579 """
580 is_complex = False
581 precision = 0
582 for a in arrays:
583 t = a.dtype.type
584 if iscomplexobj(a):
585 is_complex = True
586 if issubclass(t, _nx.integer):
587 p = 2 # array_precision[_nx.double]
588 else:
589 p = array_precision.get(t, None)
590 if p is None:
591 raise TypeError("can't get common type for non-numeric array")
592 precision = max(precision, p)
593 if is_complex:
594 return array_type[1][precision]
595 else:
596 return array_type[0][precision]
597
[end of numpy/lib/type_check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
aa6335c494e4807d65404d91e0e9d25a7d2fe338
|
inconsistency in np.isclose
The documentation for `np.isclose` says:
> Returns a boolean array of where `a` and `b` are equal within the given tolerance. If both `a` and `b` are scalars, returns a single boolean value.
This is true for comparing two finite scalars, a single boolean value is returned:
```
>>> np.isclose(0, 1)
False
```
However, a boolean array is returned when one or more of the scalars is non-finite:
```
>>> np.isclose(0, np.inf)
array([False], dtype=bool)
```
I would expect (from the documentation) the above to return a single boolean value, not an array. I note that both values in the last example are scalars:
```
>>> np.isscalar(0)
True
>>> np.isscalar(np.inf)
True
```
|
2016-01-15T04:52:12Z
|
<patch>
diff --git a/numpy/core/numeric.py b/numpy/core/numeric.py
--- a/numpy/core/numeric.py
+++ b/numpy/core/numeric.py
@@ -2467,7 +2467,11 @@ def within_tol(x, y, atol, rtol):
# Make NaN == NaN
both_nan = isnan(x) & isnan(y)
cond[both_nan] = both_nan[both_nan]
- return cond
+
+ if isscalar(a) and isscalar(b):
+ return bool(cond)
+ else:
+ return cond
def array_equal(a1, a2):
"""
</patch>
|
[]
|
[]
| ||||
pantsbuild__pants-19224
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unactionable(?) "indirectly referring to a target" warning for `pants package ::` with pex or FaaS
**Describe the bug**
Running a goal like `pants package ::` when there are `pex_binary`, `python_awslambda` or `python_google_cloud_function` targets (all of the users of `SecondaryOwnerMixin` fields) gives a warning about indirectly referring to a target:
```
20:51:16.61 [WARN] DEPRECATED: indirectly referring to a target by using a corresponding file argument, when the target owning the file isn't applicable is scheduled to be removed in version 2.18.0.dev0.
Refer to the following targets by their addresses:
* //:gcf
* //:lambda
* //:pex
```
As a user, I'm not sure what I can usefully change to resolve this. Replacing the `::` in the CLI invocation with the individual targets seems to work (`pants package :pex :lambda :gcf`), but:
- I don't think we actually want users to be doing this (as a user, I certainly don't want to list out every single target like this)
- even if we do want users to be doing this, I don't think it's clear from the error message: I'm _not_ using the corresponding file argument anywhere.
I'm assuming this is meant to be catching invocations like `pants package ./file.py` packaging all of those targets, but is getting confused by the use of the `::` glob?
Reproducer:
```shell
cd $(mktemp -d)
cat > pants.toml <<EOF
[GLOBAL]
pants_version = "2.17.0a1"
backend_packages = [
"pants.backend.python",
"pants.backend.awslambda.python",
"pants.backend.google_cloud_function.python",
]
[python]
interpreter_constraints = [">=3.8"]
[python-infer]
use_rust_parser = false
EOF
echo "def func(): pass" > file.py
cat > BUILD <<EOF
python_sources(name="py")
pex_binary(name="pex", entry_point="file.py")
python_awslambda(name="lambda", handler="file.py:func", runtime="python3.9")
python_google_cloud_function(name="gcf", handler="file.py:func", runtime="python39", type="event")
EOF
# BUG: prints `[WARN] DEPRECATED: indirectly referring to a target by using a corresponding file argument ...`
pants package ::
```
**Pants version**
2.17.0a1
**OS**
macOS
**Additional info**
#18737
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 # Getting started
20
21 See the [getting started](https://www.pantsbuild.org/docs/getting-started) documentation.
22
23 # Credits
24
25 We release to [PyPI](https://pypi.org/pypi)
26
27 [](https://pypi.org/pypi/pantsbuild.pants)
28 [](https://pypi.org/pypi/pantsbuild.pants)
29
30 Linux ARM64 CI resources provided by [Works on ARM](https://www.arm.com/markets/computing-infrastructure/works-on-arm).
31
32 macOS CI resources provided by [MacStadium](https://www.macstadium.com/).
33
34 <img width="150" height="61" src="https://uploads-ssl.webflow.com/5ac3c046c82724970fc60918/5c019d917bba312af7553b49_MacStadium-developerlogo.png">
35
36
[end of README.md]
[start of src/python/pants/backend/python/subsystems/setup.py]
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import enum
7 import logging
8 import os
9 from typing import Iterable, List, Optional, TypeVar, cast
10
11 from packaging.utils import canonicalize_name
12
13 from pants.core.goals.generate_lockfiles import UnrecognizedResolveNamesError
14 from pants.option.errors import OptionsError
15 from pants.option.option_types import (
16 BoolOption,
17 DictOption,
18 EnumOption,
19 FileOption,
20 StrListOption,
21 StrOption,
22 )
23 from pants.option.subsystem import Subsystem
24 from pants.util.docutil import bin_name, doc_url
25 from pants.util.memo import memoized_method, memoized_property
26 from pants.util.strutil import softwrap
27
28 logger = logging.getLogger(__name__)
29
30
31 @enum.unique
32 class PipVersion(enum.Enum):
33 V20_3_4 = "20.3.4-patched"
34 V22_2_2 = "22.2.2"
35 V22_3 = "22.3"
36 V22_3_1 = "22.3.1"
37 V23_0 = "23.0"
38 V23_0_1 = "23.0.1"
39 V23_1 = "23.1"
40 V23_1_1 = "23.1.1"
41 V23_1_2 = "23.1.2"
42 LATEST = "latest"
43
44
45 @enum.unique
46 class InvalidLockfileBehavior(enum.Enum):
47 error = "error"
48 ignore = "ignore"
49 warn = "warn"
50
51
52 @enum.unique
53 class LockfileGenerator(enum.Enum):
54 PEX = "pex"
55 POETRY = "poetry"
56
57
58 RESOLVE_OPTION_KEY__DEFAULT = "__default__"
59
60 _T = TypeVar("_T")
61
62
63 class PythonSetup(Subsystem):
64 options_scope = "python"
65 help = "Options for Pants's Python backend."
66
67 default_interpreter_universe = ["2.7", "3.5", "3.6", "3.7", "3.8", "3.9", "3.10", "3.11"]
68
69 _interpreter_constraints = StrListOption(
70 default=None,
71 help=softwrap(
72 """
73 The Python interpreters your codebase is compatible with.
74
75 These constraints are used as the default value for the `interpreter_constraints`
76 field of Python targets.
77
78 Specify with requirement syntax, e.g. 'CPython>=2.7,<3' (A CPython interpreter with
79 version >=2.7 AND version <3) or 'PyPy' (A pypy interpreter of any version). Multiple
80 constraint strings will be ORed together.
81 """
82 ),
83 advanced=True,
84 metavar="<requirement>",
85 )
86
87 @memoized_property
88 def interpreter_constraints(self) -> tuple[str, ...]:
89 if not self._interpreter_constraints:
90 # TODO: This is a hacky affordance for Pants's own tests, dozens of which were
91 # written when Pants provided default ICs, and implicitly rely on that assumption.
92 # We'll probably want to find and modify all those tests to set an explicit IC, but
93 # that will take time.
94 if "PYTEST_CURRENT_TEST" in os.environ:
95 return (">=3.7,<4",)
96 raise OptionsError(
97 softwrap(
98 f"""\
99 You must explicitly specify the default Python interpreter versions your code
100 is intended to run against.
101
102 You specify these interpreter constraints using the `interpreter_constraints`
103 option in the `[python]` section of pants.toml.
104
105 We recommend constraining to a single interpreter minor version if you can,
106 e.g., `interpreter_constraints = ['==3.11.*']`, or at least a small number of
107 interpreter minor versions, e.g., `interpreter_constraints = ['>=3.10,<3.12']`.
108
109 Individual targets can override these default interpreter constraints,
110 if different parts of your codebase run against different python interpreter
111 versions in a single repo.
112
113 See {doc_url("python-interpreter-compatibility")} for details.
114 """
115 ),
116 )
117 return self._interpreter_constraints
118
119 interpreter_versions_universe = StrListOption(
120 default=default_interpreter_universe,
121 help=softwrap(
122 f"""
123 All known Python major/minor interpreter versions that may be used by either
124 your code or tools used by your code.
125
126 This is used by Pants to robustly handle interpreter constraints, such as knowing
127 when generating lockfiles which Python versions to check if your code is using.
128
129 This does not control which interpreter your code will use. Instead, to set your
130 interpreter constraints, update `[python].interpreter_constraints`, the
131 `interpreter_constraints` field, and relevant tool options like
132 `[isort].interpreter_constraints` to tell Pants which interpreters your code
133 actually uses. See {doc_url('python-interpreter-compatibility')}.
134
135 All elements must be the minor and major Python version, e.g. '2.7' or '3.10'. Do
136 not include the patch version.
137 """
138 ),
139 advanced=True,
140 )
141 enable_resolves = BoolOption(
142 default=False,
143 help=softwrap(
144 """
145 Set to true to enable lockfiles for user code. See `[python].resolves` for an
146 explanation of this feature.
147
148 This option is mutually exclusive with `[python].requirement_constraints`. We strongly
149 recommend using this option because it:
150
151 1. Uses `--hash` to validate that all downloaded files are expected, which reduces\
152 the risk of supply chain attacks.
153 2. Enforces that all transitive dependencies are in the lockfile, whereas\
154 constraints allow you to leave off dependencies. This ensures your build is more\
155 stable and reduces the risk of supply chain attacks.
156 3. Allows you to have multiple lockfiles in your repository.
157 """
158 ),
159 advanced=True,
160 mutually_exclusive_group="lockfile",
161 )
162 resolves = DictOption[str](
163 default={"python-default": "3rdparty/python/default.lock"},
164 help=softwrap(
165 f"""
166 A mapping of logical names to lockfile paths used in your project.
167
168 Many organizations only need a single resolve for their whole project, which is
169 a good default and often the simplest thing to do. However, you may need multiple
170 resolves, such as if you use two conflicting versions of a requirement in
171 your repository.
172
173 If you only need a single resolve, run `{bin_name()} generate-lockfiles` to
174 generate the lockfile.
175
176 If you need multiple resolves:
177
178 1. Via this option, define multiple resolve names and their lockfile paths.\
179 The names should be meaningful to your repository, such as `data-science` or\
180 `pants-plugins`.
181 2. Set the default with `[python].default_resolve`.
182 3. Update your `python_requirement` targets with the `resolve` field to declare which\
183 resolve they should be available in. They default to `[python].default_resolve`,\
184 so you only need to update targets that you want in non-default resolves.\
185 (Often you'll set this via the `python_requirements` or `poetry_requirements`\
186 target generators)
187 4. Run `{bin_name()} generate-lockfiles` to generate the lockfiles. If the results\
188 aren't what you'd expect, adjust the prior step.
189 5. Update any targets like `python_source` / `python_sources`,\
190 `python_test` / `python_tests`, and `pex_binary` which need to set a non-default\
191 resolve with the `resolve` field.
192
193 If a target can work with multiple resolves, you can either use the `parametrize`
194 mechanism or manually create a distinct target per resolve. See {doc_url("targets")}
195 for information about `parametrize`.
196
197 For example:
198
199 python_sources(
200 resolve=parametrize("data-science", "web-app"),
201 )
202
203 You can name the lockfile paths what you would like; Pants does not expect a
204 certain file extension or location.
205
206 Only applies if `[python].enable_resolves` is true.
207 """
208 ),
209 advanced=True,
210 )
211 default_resolve = StrOption(
212 default="python-default",
213 help=softwrap(
214 """
215 The default value used for the `resolve` field.
216
217 The name must be defined as a resolve in `[python].resolves`.
218 """
219 ),
220 advanced=True,
221 )
222 default_run_goal_use_sandbox = BoolOption(
223 default=True,
224 help=softwrap(
225 """
226 The default value used for the `run_goal_use_sandbox` field of Python targets. See the
227 relevant field for more details.
228 """
229 ),
230 )
231 pip_version = EnumOption(
232 default=PipVersion.V20_3_4,
233 help=softwrap(
234 """
235 Use this version of Pip for resolving requirements and generating lockfiles.
236
237 N.B.: The `latest` value selects the latest of the listed choices which is not
238 necessarily the latest Pip version released on PyPI.
239 """
240 ),
241 advanced=True,
242 )
243 _resolves_to_interpreter_constraints = DictOption["list[str]"](
244 help=softwrap(
245 """
246 Override the interpreter constraints to use when generating a resolve's lockfile
247 with the `generate-lockfiles` goal.
248
249 By default, each resolve from `[python].resolves` will use your
250 global interpreter constraints set in `[python].interpreter_constraints`. With
251 this option, you can override each resolve to use certain interpreter
252 constraints, such as `{'data-science': ['==3.8.*']}`.
253
254 Warning: this does NOT impact the interpreter constraints used by targets within the
255 resolve, which is instead set by the option `[python].interpreter_constraints` and the
256 `interpreter_constraints` field. It only impacts how the lockfile is generated.
257
258 Pants will validate that the interpreter constraints of your code using a
259 resolve are compatible with that resolve's own constraints. For example, if your
260 code is set to use ['==3.9.*'] via the `interpreter_constraints` field, but it's
261 using a resolve whose interpreter constraints are set to ['==3.7.*'], then
262 Pants will error explaining the incompatibility.
263
264 The keys must be defined as resolves in `[python].resolves`.
265 """
266 ),
267 advanced=True,
268 )
269 _resolves_to_constraints_file = DictOption[str](
270 help=softwrap(
271 f"""
272 When generating a resolve's lockfile, use a constraints file to pin the version of
273 certain requirements. This is particularly useful to pin the versions of transitive
274 dependencies of your direct requirements.
275
276 See https://pip.pypa.io/en/stable/user_guide/#constraints-files for more information on
277 the format of constraint files and how constraints are applied in Pex and pip.
278
279 Expects a dictionary of resolve names from `[python].resolves` and Python tools (e.g.
280 `black` and `pytest`) to file paths for
281 constraints files. For example,
282 `{{'data-science': '3rdparty/data-science-constraints.txt'}}`.
283 If a resolve is not set in the dictionary, it will not use a constraints file.
284
285 You can use the key `{RESOLVE_OPTION_KEY__DEFAULT}` to set a default value for all
286 resolves.
287 """
288 ),
289 advanced=True,
290 )
291 _resolves_to_no_binary = DictOption[List[str]](
292 help=softwrap(
293 f"""
294 When generating a resolve's lockfile, do not use binary packages (i.e. wheels) for
295 these 3rdparty project names.
296
297 Expects a dictionary of resolve names from `[python].resolves` and Python tools (e.g.
298 `black` and `pytest`) to lists of project names. For example,
299 `{{'data-science': ['requests', 'numpy']}}`. If a resolve is not set in the dictionary,
300 it will have no restrictions on binary packages.
301
302 You can use the key `{RESOLVE_OPTION_KEY__DEFAULT}` to set a default value for all
303 resolves.
304
305 For each resolve, you can also use the value `:all:` to disable all binary packages:
306 `{{'data-science': [':all:']}}`.
307
308 Note that some packages are tricky to compile and may fail to install when this option
309 is used on them. See https://pip.pypa.io/en/stable/cli/pip_install/#install-no-binary
310 for details.
311 """
312 ),
313 advanced=True,
314 )
315 _resolves_to_only_binary = DictOption[List[str]](
316 help=softwrap(
317 f"""
318 When generating a resolve's lockfile, do not use source packages (i.e. sdists) for
319 these 3rdparty project names, e.g `['django', 'requests']`.
320
321 Expects a dictionary of resolve names from `[python].resolves` and Python tools (e.g.
322 `black` and `pytest`) to lists of project names. For example,
323 `{{'data-science': ['requests', 'numpy']}}`. If a resolve is not set in the dictionary,
324 it will have no restrictions on source packages.
325
326 You can use the key `{RESOLVE_OPTION_KEY__DEFAULT}` to set a default value for all
327 resolves.
328
329 For each resolve you can use the value `:all:` to disable all source packages:
330 `{{'data-science': [':all:']}}`.
331
332 Packages without binary distributions will fail to install when this option is used on
333 them. See https://pip.pypa.io/en/stable/cli/pip_install/#install-only-binary for
334 details.
335 """
336 ),
337 advanced=True,
338 )
339 invalid_lockfile_behavior = EnumOption(
340 default=InvalidLockfileBehavior.error,
341 help=softwrap(
342 """
343 The behavior when a lockfile has requirements or interpreter constraints that are
344 not compatible with what the current build is using.
345
346 We recommend keeping the default of `error` for CI builds.
347
348             Note that `warn` will still expect a Pants lockfile header; it just won't error if
349             the lockfile is stale and should be regenerated.
350
351 Use `ignore` to avoid needing a lockfile header at all, e.g. if you are manually
352 managing lockfiles rather than using the `generate-lockfiles` goal.
353 """
354 ),
355 advanced=True,
356 )
357 resolves_generate_lockfiles = BoolOption(
358 default=True,
359 help=softwrap(
360 """
361 If False, Pants will not attempt to generate lockfiles for `[python].resolves` when
362 running the `generate-lockfiles` goal.
363
364 This is intended to allow you to manually generate lockfiles for your own code,
365 rather than using Pex lockfiles. For example, when adopting Pants in a project already
366 using Poetry, you can use `poetry export --dev` to create a requirements.txt-style
367 lockfile understood by Pants, then point `[python].resolves` to the file.
368
369 If you set this to False, Pants will not attempt to validate the metadata headers
370             for your user lockfiles. This is useful because it lets you keep
371             `[python].invalid_lockfile_behavior` set to `error` or `warn` so that tool
372             lockfiles continue to be validated, while user lockfiles are skipped.
373
374 Warning: it will likely be slower to install manually generated user lockfiles than Pex
375 ones because Pants cannot as efficiently extract the subset of requirements used for a
376 particular task. See the option `[python].run_against_entire_lockfile`.
377 """
378 ),
379 advanced=True,
380 )
381 run_against_entire_lockfile = BoolOption(
382 default=False,
383 help=softwrap(
384 """
385 If enabled, when running binaries, tests, and repls, Pants will use the entire
386 lockfile file instead of just the relevant subset.
387
388 If you are using Pex lockfiles, we generally do not recommend this. You will already
389 get similar performance benefits to this option, without the downsides.
390
391 Otherwise, this option can improve performance and reduce cache size.
392 But it has two consequences:
393 1) All cached test results will be invalidated if any requirement in the lockfile
394 changes, rather than just those that depend on the changed requirement.
395 2) Requirements unneeded by a test/run/repl will be present on the sys.path, which
396 might in rare cases cause their behavior to change.
397
398 This option does not affect packaging deployable artifacts, such as
399 PEX files, wheels and cloud functions, which will still use just the exact
400 subset of requirements needed.
401 """
402 ),
403 advanced=True,
404 )
405
406 __constraints_deprecation_msg = softwrap(
407 f"""
408 We encourage instead migrating to `[python].enable_resolves` and `[python].resolves`,
409 which is an improvement over this option. The `[python].resolves` feature ensures that
410 your lockfiles are fully comprehensive, i.e. include all transitive dependencies;
411 uses hashes for better supply chain security; and supports advanced features like VCS
412             and local requirements, along with options like `[python].resolves_to_only_binary`.
413
414 To migrate, stop setting `[python].requirement_constraints` and
415 `[python].resolve_all_constraints`, and instead set `[python].enable_resolves` to
416 `true`. Then, run `{bin_name()} generate-lockfiles`.
417 """
418 )
419 requirement_constraints = FileOption(
420 default=None,
421 help=softwrap(
422 """
423 When resolving third-party requirements for your own code (vs. tools you run),
424 use this constraints file to determine which versions to use.
425
426 Mutually exclusive with `[python].enable_resolves`, which we generally recommend as an
427             improvement over a constraints file.
428
429 See https://pip.pypa.io/en/stable/user_guide/#constraints-files for more
430 information on the format of constraint files and how constraints are applied in
431 Pex and pip.
432
433 This only applies when resolving user requirements, rather than tools you run
434 like Black and Pytest. To constrain tools, set `[tool].lockfile`, e.g.
435 `[black].lockfile`.
436 """
437 ),
438 advanced=True,
439 mutually_exclusive_group="lockfile",
440 removal_version="3.0.0.dev0",
441 removal_hint=__constraints_deprecation_msg,
442 )
443 _resolve_all_constraints = BoolOption(
444 default=True,
445 help=softwrap(
446 """
447             (Only relevant when using `[python].requirement_constraints`.) If enabled, when
448 resolving requirements, Pants will first resolve your entire
449 constraints file as a single global resolve. Then, if the code uses a subset of
450 your constraints file, Pants will extract the relevant requirements from that
451 global resolve so that only what's actually needed gets used. If disabled, Pants
452 will not use a global resolve and will resolve each subset of your requirements
453 independently.
454
455 Usually this option should be enabled because it can result in far fewer resolves.
456 """
457 ),
458 advanced=True,
459 removal_version="3.0.0.dev0",
460 removal_hint=__constraints_deprecation_msg,
461 )
462 resolver_manylinux = StrOption(
463 default="manylinux2014",
464 help=softwrap(
465 """
466 Whether to allow resolution of manylinux wheels when resolving requirements for
467 foreign linux platforms. The value should be a manylinux platform upper bound,
468 e.g.: 'manylinux2010', or else the string 'no' to disallow.
469 """
470 ),
471 advanced=True,
472 )
473
474 tailor_source_targets = BoolOption(
475 default=True,
476 help=softwrap(
477 """
478 If true, add `python_sources`, `python_tests`, and `python_test_utils` targets with
479 the `tailor` goal."""
480 ),
481 advanced=True,
482 )
483 tailor_ignore_empty_init_files = BoolOption(
484 "--tailor-ignore-empty-init-files",
485 default=True,
486 help=softwrap(
487 """
488             If true, don't add `python_sources` targets for `__init__.py` files that are both empty
489             and the only Python file in their directory.
490
491 Empty and solitary `__init__.py` files usually exist as import scaffolding rather than
492 true library code, so it can be noisy to add BUILD files.
493
494 Even if this option is set to true, Pants will still ensure the empty `__init__.py`
495 files are included in the sandbox when running processes.
496
497             If you set this to false, you may also want to set `[python-infer].init_files = "always"`.
498 """
499 ),
500 advanced=True,
501 )
502 tailor_requirements_targets = BoolOption(
503 default=True,
504 help=softwrap(
505 """
506 If true, add `python_requirements`, `poetry_requirements`, and `pipenv_requirements`
507 target generators with the `tailor` goal.
508
509 `python_requirements` targets are added for any file that matches the pattern
510 `*requirements*.txt`. You will need to manually add `python_requirements` for different
511 file names like `reqs.txt`.
512
513 `poetry_requirements` targets are added for `pyproject.toml` files with `[tool.poetry`
514 in them.
515 """
516 ),
517 advanced=True,
518 )
519 tailor_pex_binary_targets = BoolOption(
520 default=False,
521 help=softwrap(
522 """
523 If true, add `pex_binary` targets for Python files named `__main__.py` or with a
524 `__main__` clause with the `tailor` goal.
525 """
526 ),
527 advanced=True,
528 )
529 tailor_py_typed_targets = BoolOption(
530 default=True,
531 help=softwrap(
532 """
533 If true, add `resource` targets for marker files named `py.typed` with the `tailor` goal.
534 """
535 ),
536 advanced=True,
537 )
538 macos_big_sur_compatibility = BoolOption(
539 default=False,
540 help=softwrap(
541 """
542 If set, and if running on MacOS Big Sur, use macosx_10_16 as the platform
543 when building wheels. Otherwise, the default of macosx_11_0 will be used.
544 This may be required for pip to be able to install the resulting distribution
545 on Big Sur.
546 """
547 ),
548 advanced=True,
549 )
550 enable_lockfile_targets = BoolOption(
551 default=True,
552 help=softwrap(
553 """
554 Create targets for all Python lockfiles defined in `[python].resolves`.
555
556             The lockfile targets will then be used as dependencies of the `python_requirement`
557 targets that use them, invalidating source targets per resolve when the lockfile
558 changes.
559
560             If another target's address conflicts with the created lockfile target, it will
561             shadow the lockfile target, which will then not be available as a dependency for any
562             `python_requirement` targets.
563 """
564 ),
565 advanced=True,
566 )
567 repl_history = BoolOption(
568 default=True,
569 help="Whether to use the standard Python command history file when running a repl.",
570 )
571
572 @property
573 def enable_synthetic_lockfiles(self) -> bool:
574 return self.enable_resolves and self.enable_lockfile_targets
575
576 @memoized_property
577 def resolves_to_interpreter_constraints(self) -> dict[str, tuple[str, ...]]:
578 result = {}
579 unrecognized_resolves = []
580 for resolve, ics in self._resolves_to_interpreter_constraints.items():
581 if resolve not in self.resolves:
582 unrecognized_resolves.append(resolve)
583 result[resolve] = tuple(ics)
584 if unrecognized_resolves:
585 raise UnrecognizedResolveNamesError(
586 unrecognized_resolves,
587 self.resolves.keys(),
588 description_of_origin="the option `[python].resolves_to_interpreter_constraints`",
589 )
590 return result
591
592 def _resolves_to_option_helper(
593 self,
594 option_value: dict[str, _T],
595 option_name: str,
596 all_python_tool_resolve_names: tuple[str, ...],
597 ) -> dict[str, _T]:
598 all_valid_resolves = {*self.resolves, *all_python_tool_resolve_names}
599 unrecognized_resolves = set(option_value.keys()) - {
600 RESOLVE_OPTION_KEY__DEFAULT,
601 *all_valid_resolves,
602 }
603 if unrecognized_resolves:
604 raise UnrecognizedResolveNamesError(
605 sorted(unrecognized_resolves),
606 {*all_valid_resolves, RESOLVE_OPTION_KEY__DEFAULT},
607 description_of_origin=f"the option `[python].{option_name}`",
608 )
609 default_val = option_value.get(RESOLVE_OPTION_KEY__DEFAULT)
610 if not default_val:
611 return option_value
612 return {resolve: option_value.get(resolve, default_val) for resolve in all_valid_resolves}
613
614 @memoized_method
615 def resolves_to_constraints_file(
616 self, all_python_tool_resolve_names: tuple[str, ...]
617 ) -> dict[str, str]:
618 return self._resolves_to_option_helper(
619 self._resolves_to_constraints_file,
620 "resolves_to_constraints_file",
621 all_python_tool_resolve_names,
622 )
623
624 @memoized_method
625 def resolves_to_no_binary(
626 self, all_python_tool_resolve_names: tuple[str, ...]
627 ) -> dict[str, list[str]]:
628 return {
629 resolve: [canonicalize_name(v) for v in vals]
630 for resolve, vals in self._resolves_to_option_helper(
631 self._resolves_to_no_binary,
632 "resolves_to_no_binary",
633 all_python_tool_resolve_names,
634 ).items()
635 }
636
637 @memoized_method
638 def resolves_to_only_binary(
639 self, all_python_tool_resolve_names: tuple[str, ...]
640 ) -> dict[str, list[str]]:
641 return {
642 resolve: sorted([canonicalize_name(v) for v in vals])
643 for resolve, vals in self._resolves_to_option_helper(
644 self._resolves_to_only_binary,
645 "resolves_to_only_binary",
646 all_python_tool_resolve_names,
647 ).items()
648 }
649
650 @property
651 def manylinux(self) -> str | None:
652 manylinux = cast(Optional[str], self.resolver_manylinux)
653 if manylinux is None or manylinux.lower() in ("false", "no", "none"):
654 return None
655 return manylinux
656
657 @property
658 def resolve_all_constraints(self) -> bool:
659 if (
660 self._resolve_all_constraints
661 and not self.options.is_default("resolve_all_constraints")
662 and not self.requirement_constraints
663 ):
664 raise ValueError(
665 softwrap(
666 """
667 `[python].resolve_all_constraints` is enabled, so
668 `[python].requirement_constraints` must also be set.
669 """
670 )
671 )
672 return self._resolve_all_constraints
673
674 @property
675 def scratch_dir(self):
676 return os.path.join(self.options.pants_workdir, *self.options_scope.split("."))
677
678 def compatibility_or_constraints(self, compatibility: Iterable[str] | None) -> tuple[str, ...]:
679 """Return either the given `compatibility` field or the global interpreter constraints.
680
681 If interpreter constraints are supplied by the CLI flag, return those only.
682 """
683 if self.options.is_flagged("interpreter_constraints"):
684 return self.interpreter_constraints
685 return tuple(compatibility or self.interpreter_constraints)
686
687 def compatibilities_or_constraints(
688 self, compatibilities: Iterable[Iterable[str] | None]
689 ) -> tuple[str, ...]:
690 return tuple(
691 constraint
692 for compatibility in compatibilities
693 for constraint in self.compatibility_or_constraints(compatibility)
694 )
695
[end of src/python/pants/backend/python/subsystems/setup.py]
[start of src/python/pants/backend/python/util_rules/faas.py]
1 # Copyright 2023 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3 """Function-as-a-service (FaaS) support like AWS Lambda and Google Cloud Functions."""
4
5 from __future__ import annotations
6
7 import logging
8 import os.path
9 from abc import ABC, abstractmethod
10 from dataclasses import dataclass
11 from pathlib import Path
12 from typing import Optional, cast
13
14 from pants.backend.python.dependency_inference.module_mapper import (
15 PythonModuleOwners,
16 PythonModuleOwnersRequest,
17 )
18 from pants.backend.python.dependency_inference.rules import import_rules
19 from pants.backend.python.dependency_inference.subsystem import (
20 AmbiguityResolution,
21 PythonInferSubsystem,
22 )
23 from pants.backend.python.subsystems.lambdex import Lambdex
24 from pants.backend.python.subsystems.setup import PythonSetup
25 from pants.backend.python.target_types import (
26 PexCompletePlatformsField,
27 PexLayout,
28 PythonResolveField,
29 )
30 from pants.backend.python.util_rules.pex import (
31 CompletePlatforms,
32 Pex,
33 PexPlatforms,
34 PexRequest,
35 VenvPex,
36 VenvPexProcess,
37 )
38 from pants.backend.python.util_rules.pex_from_targets import PexFromTargetsRequest
39 from pants.backend.python.util_rules.pex_from_targets import rules as pex_from_targets_rules
40 from pants.backend.python.util_rules.pex_venv import PexVenv, PexVenvLayout, PexVenvRequest
41 from pants.backend.python.util_rules.pex_venv import rules as pex_venv_rules
42 from pants.core.goals.package import BuiltPackage, BuiltPackageArtifact, OutputPathField
43 from pants.engine.addresses import Address, UnparsedAddressInputs
44 from pants.engine.fs import (
45 CreateDigest,
46 Digest,
47 FileContent,
48 GlobMatchErrorBehavior,
49 PathGlobs,
50 Paths,
51 )
52 from pants.engine.platform import Platform
53 from pants.engine.process import ProcessResult
54 from pants.engine.rules import Get, MultiGet, collect_rules, rule
55 from pants.engine.target import (
56 AsyncFieldMixin,
57 Dependencies,
58 DependenciesRequest,
59 ExplicitlyProvidedDependencies,
60 FieldSet,
61 InferDependenciesRequest,
62 InferredDependencies,
63 InvalidFieldException,
64 SecondaryOwnerMixin,
65 StringField,
66 TransitiveTargets,
67 TransitiveTargetsRequest,
68 )
69 from pants.engine.unions import UnionRule
70 from pants.source.filespec import Filespec
71 from pants.source.source_root import SourceRoot, SourceRootRequest
72 from pants.util.docutil import bin_name
73 from pants.util.strutil import help_text
74
75 logger = logging.getLogger(__name__)
76
77
78 class PythonFaaSHandlerField(StringField, AsyncFieldMixin, SecondaryOwnerMixin):
79 alias = "handler"
80 required = True
81 value: str
82 help = help_text(
83 """
84 You can specify a full module like 'path.to.module:handler_func' or use a shorthand to
85 specify a file name, using the same syntax as the `sources` field, e.g.
86 'cloud_function.py:handler_func'.
87
88 You must use the file name shorthand for file arguments to work with this target.
89 """
90 )
91
92 @classmethod
93 def compute_value(cls, raw_value: Optional[str], address: Address) -> str:
94 value = cast(str, super().compute_value(raw_value, address))
95 if ":" not in value:
96 raise InvalidFieldException(
97 f"The `{cls.alias}` field in target at {address} must end in the "
98 f"format `:my_handler_func`, but was {value}."
99 )
100 return value
101
102 @property
103 def filespec(self) -> Filespec:
104 path, _, func = self.value.partition(":")
105 if not path.endswith(".py"):
106 return {"includes": []}
107 full_glob = os.path.join(self.address.spec_path, path)
108 return {"includes": [full_glob]}
109
110
111 @dataclass(frozen=True)
112 class ResolvedPythonFaaSHandler:
113 module: str
114 func: str
115 file_name_used: bool
116
117
118 @dataclass(frozen=True)
119 class ResolvePythonFaaSHandlerRequest:
120 field: PythonFaaSHandlerField
121
122
123 @rule(desc="Determining the handler for a python FaaS target")
124 async def resolve_python_faas_handler(
125 request: ResolvePythonFaaSHandlerRequest,
126 ) -> ResolvedPythonFaaSHandler:
127 handler_val = request.field.value
128 field_alias = request.field.alias
129 address = request.field.address
130 path, _, func = handler_val.partition(":")
131
132 # If it's already a module, simply use that. Otherwise, convert the file name into a module
133 # path.
134 if not path.endswith(".py"):
135 return ResolvedPythonFaaSHandler(module=path, func=func, file_name_used=False)
136
137 # Use the engine to validate that the file exists and that it resolves to only one file.
138 full_glob = os.path.join(address.spec_path, path)
139 handler_paths = await Get(
140 Paths,
141 PathGlobs(
142 [full_glob],
143 glob_match_error_behavior=GlobMatchErrorBehavior.error,
144 description_of_origin=f"{address}'s `{field_alias}` field",
145 ),
146 )
147 # We will have already raised if the glob did not match, i.e. if there were no files. But
148 # we need to check if they used a file glob (`*` or `**`) that resolved to >1 file.
149 if len(handler_paths.files) != 1:
150 raise InvalidFieldException(
151 f"Multiple files matched for the `{field_alias}` {repr(handler_val)} for the target "
152 f"{address}, but only one file expected. Are you using a glob, rather than a file "
153 f"name?\n\nAll matching files: {list(handler_paths.files)}."
154 )
155 handler_path = handler_paths.files[0]
156 source_root = await Get(
157 SourceRoot,
158 SourceRootRequest,
159 SourceRootRequest.for_file(handler_path),
160 )
161 stripped_source_path = os.path.relpath(handler_path, source_root.path)
162 module_base, _ = os.path.splitext(stripped_source_path)
163 normalized_path = module_base.replace(os.path.sep, ".")
164 return ResolvedPythonFaaSHandler(module=normalized_path, func=func, file_name_used=True)
165
166
167 class PythonFaaSDependencies(Dependencies):
168 supports_transitive_excludes = True
169
170
171 @dataclass(frozen=True)
172 class PythonFaaSHandlerInferenceFieldSet(FieldSet):
173 required_fields = (
174 PythonFaaSDependencies,
175 PythonFaaSHandlerField,
176 PythonResolveField,
177 )
178
179 dependencies: PythonFaaSDependencies
180 handler: PythonFaaSHandlerField
181 resolve: PythonResolveField
182
183
184 class InferPythonFaaSHandlerDependency(InferDependenciesRequest):
185 infer_from = PythonFaaSHandlerInferenceFieldSet
186
187
188 @rule(desc="Inferring dependency from the python FaaS `handler` field")
189 async def infer_faas_handler_dependency(
190 request: InferPythonFaaSHandlerDependency,
191 python_infer_subsystem: PythonInferSubsystem,
192 python_setup: PythonSetup,
193 ) -> InferredDependencies:
194 if not python_infer_subsystem.entry_points:
195 return InferredDependencies([])
196
197 explicitly_provided_deps, handler = await MultiGet(
198 Get(ExplicitlyProvidedDependencies, DependenciesRequest(request.field_set.dependencies)),
199 Get(
200 ResolvedPythonFaaSHandler,
201 ResolvePythonFaaSHandlerRequest(request.field_set.handler),
202 ),
203 )
204
205 # Only set locality if needed, to avoid unnecessary rule graph memoization misses.
206 # When set, use the source root, which is useful in practice, but incurs fewer memoization
207 # misses than using the full spec_path.
208 locality = None
209 if python_infer_subsystem.ambiguity_resolution == AmbiguityResolution.by_source_root:
210 source_root = await Get(
211 SourceRoot, SourceRootRequest, SourceRootRequest.for_address(request.field_set.address)
212 )
213 locality = source_root.path
214
215 owners = await Get(
216 PythonModuleOwners,
217 PythonModuleOwnersRequest(
218 handler.module,
219 resolve=request.field_set.resolve.normalized_value(python_setup),
220 locality=locality,
221 ),
222 )
223 address = request.field_set.address
224 explicitly_provided_deps.maybe_warn_of_ambiguous_dependency_inference(
225 owners.ambiguous,
226 address,
227 # If the handler was specified as a file, like `app.py`, we know the module must
228 # live in the python_google_cloud_function's directory or subdirectory, so the owners must be ancestors.
229 owners_must_be_ancestors=handler.file_name_used,
230 import_reference="module",
231 context=(
232 f"The target {address} has the field "
233 f"`handler={repr(request.field_set.handler.value)}`, which maps "
234 f"to the Python module `{handler.module}`"
235 ),
236 )
237 maybe_disambiguated = explicitly_provided_deps.disambiguated(
238 owners.ambiguous, owners_must_be_ancestors=handler.file_name_used
239 )
240 unambiguous_owners = owners.unambiguous or (
241 (maybe_disambiguated,) if maybe_disambiguated else ()
242 )
243 return InferredDependencies(unambiguous_owners)
244
245
246 class PythonFaaSCompletePlatforms(PexCompletePlatformsField):
247 help = help_text(
248 f"""
249 {PexCompletePlatformsField.help}
250
251 N.B.: If specifying `complete_platforms` to work around packaging failures encountered when
252 using the `runtime` field, ensure you delete the `runtime` field from the target.
253 """
254 )
255
256
257 class PythonFaaSRuntimeField(StringField, ABC):
258 alias = "runtime"
259 default = None
260
261 @abstractmethod
262 def to_interpreter_version(self) -> None | tuple[int, int]:
263 """Returns the Python version implied by the runtime, as (major, minor)."""
264
265 def to_platform_string(self) -> None | str:
266 # We hardcode the platform value to the appropriate one for each FaaS runtime.
267 # (Running the "hello world" cloud function in the example code will report the platform, and can be
268 # used to verify correctness of these platform strings.)
269 interpreter_version = self.to_interpreter_version()
270 if interpreter_version is None:
271 return None
272
273 py_major, py_minor = interpreter_version
274 platform_str = f"linux_x86_64-cp-{py_major}{py_minor}-cp{py_major}{py_minor}"
275 # set pymalloc ABI flag - this was removed in python 3.8 https://bugs.python.org/issue36707
276 if py_major <= 3 and py_minor < 8:
277 platform_str += "m"
278 return platform_str
279
280
281 @rule
282 async def digest_complete_platforms(
283 complete_platforms: PythonFaaSCompletePlatforms,
284 ) -> CompletePlatforms:
285 return await Get(
286 CompletePlatforms, UnparsedAddressInputs, complete_platforms.to_unparsed_address_inputs()
287 )
288
289
290 @dataclass(frozen=True)
291 class BuildLambdexRequest:
292 address: Address
293 target_name: str
294
295 complete_platforms: PythonFaaSCompletePlatforms
296 handler: PythonFaaSHandlerField
297 output_path: OutputPathField
298 runtime: PythonFaaSRuntimeField
299
300 include_requirements: bool
301
302 script_handler: None | str
303 script_module: None | str
304
305 handler_log_message: str
306
307
308 @rule
309 async def build_lambdex(
310 request: BuildLambdexRequest,
311 lambdex: Lambdex,
312 platform: Platform,
313 ) -> BuiltPackage:
314 if platform.is_macos:
315 logger.warning(
316 f"`{request.target_name}` targets built on macOS may fail to build. If your function uses any"
317 " third-party dependencies without binary wheels (bdist) for Linux available, it will"
318 " fail to build. If this happens, you will either need to update your dependencies to"
319 " only use dependencies with pre-built wheels, or find a Linux environment to run"
320 f" {bin_name()} package. (See https://realpython.com/python-wheels/ for more about"
321 " wheels.)\n\n(If the build does not raise an exception, it's safe to use macOS.)"
322 )
323 lambdex.warn_for_layout(request.target_name)
324
325 output_filename = request.output_path.value_or_default(
326 # FaaS typically use the .zip suffix, so we use that instead of .pex.
327 file_ending="zip",
328 )
329
330 platform_str = request.runtime.to_platform_string()
331 pex_platforms = [platform_str] if platform_str else []
332
333 additional_pex_args = (
334 # Ensure we can resolve manylinux wheels in addition to any AMI-specific wheels.
335 "--manylinux=manylinux2014",
336 # When we're executing Pex on Linux, allow a local interpreter to be resolved if
337 # available and matching the AMI platform.
338 "--resolve-local-platforms",
339 )
340
341 complete_platforms = await Get(
342 CompletePlatforms, PythonFaaSCompletePlatforms, request.complete_platforms
343 )
344
345 pex_request = PexFromTargetsRequest(
346 addresses=[request.address],
347 internal_only=False,
348 include_requirements=request.include_requirements,
349 output_filename=output_filename,
350 platforms=PexPlatforms(pex_platforms),
351 complete_platforms=complete_platforms,
352 additional_args=additional_pex_args,
353 additional_lockfile_args=additional_pex_args,
354 warn_for_transitive_files_targets=True,
355 )
356 lambdex_request = lambdex.to_pex_request()
357
358 lambdex_pex, pex_result, handler, transitive_targets = await MultiGet(
359 Get(VenvPex, PexRequest, lambdex_request),
360 Get(Pex, PexFromTargetsRequest, pex_request),
361 Get(ResolvedPythonFaaSHandler, ResolvePythonFaaSHandlerRequest(request.handler)),
362 Get(TransitiveTargets, TransitiveTargetsRequest([request.address])),
363 )
364
365 lambdex_args = ["build", "-e", f"{handler.module}:{handler.func}", output_filename]
366 if request.script_handler:
367 lambdex_args.extend(("-H", request.script_handler))
368 if request.script_module:
369 lambdex_args.extend(("-M", request.script_module))
370
371 # NB: Lambdex modifies its input pex in-place, so the input file is also the output file.
372 result = await Get(
373 ProcessResult,
374 VenvPexProcess(
375 lambdex_pex,
376 argv=tuple(lambdex_args),
377 input_digest=pex_result.digest,
378 output_files=(output_filename,),
379 description=f"Setting up handler in {output_filename}",
380 ),
381 )
382
383 extra_log_data: list[tuple[str, str]] = []
384 if request.runtime.value:
385 extra_log_data.append(("Runtime", request.runtime.value))
386 extra_log_data.extend(("Complete platform", path) for path in complete_platforms)
387 extra_log_data.append(("Handler", request.handler_log_message))
388
389 first_column_width = 4 + max(len(header) for header, _ in extra_log_data)
390 artifact = BuiltPackageArtifact(
391 output_filename,
392 extra_log_lines=tuple(
393 f"{header.rjust(first_column_width, ' ')}: {data}" for header, data in extra_log_data
394 ),
395 )
396 return BuiltPackage(digest=result.output_digest, artifacts=(artifact,))
397
398
399 @dataclass(frozen=True)
400 class BuildPythonFaaSRequest:
401 address: Address
402 target_name: str
403
404 complete_platforms: PythonFaaSCompletePlatforms
405 handler: PythonFaaSHandlerField
406 output_path: OutputPathField
407 runtime: PythonFaaSRuntimeField
408
409 include_requirements: bool
410
411 reexported_handler_module: str
412 log_only_reexported_handler_func: bool = False
413
414
415 @rule
416 async def build_python_faas(
417 request: BuildPythonFaaSRequest,
418 ) -> BuiltPackage:
419 platform_str = request.runtime.to_platform_string()
420 pex_platforms = PexPlatforms([platform_str] if platform_str else [])
421
422 additional_pex_args = (
423 # Ensure we can resolve manylinux wheels in addition to any AMI-specific wheels.
424 "--manylinux=manylinux2014",
425 # When we're executing Pex on Linux, allow a local interpreter to be resolved if
426 # available and matching the AMI platform.
427 "--resolve-local-platforms",
428 )
429
430 complete_platforms, handler = await MultiGet(
431 Get(CompletePlatforms, PythonFaaSCompletePlatforms, request.complete_platforms),
432 Get(ResolvedPythonFaaSHandler, ResolvePythonFaaSHandlerRequest(request.handler)),
433 )
434
435 # TODO: improve diagnostics if there's more than one platform/complete_platform
436
437 # synthesise a source file that gives a fixed handler path, no matter what the entry point is:
438 # some platforms require a certain name (e.g. GCF), and even on others, giving a fixed name
439 # means users don't need to duplicate the entry_point config in both the pants BUILD file and
440 # infrastructure definitions (the latter can always use the same names, for every lambda).
441 reexported_handler_file = f"{request.reexported_handler_module}.py"
442 reexported_handler_func = "handler"
443 reexported_handler_content = (
444 f"from {handler.module} import {handler.func} as {reexported_handler_func}"
445 )
446 additional_sources = await Get(
447 Digest,
448 CreateDigest([FileContent(reexported_handler_file, reexported_handler_content.encode())]),
449 )
450
451 repository_filename = "faas_repository.pex"
452 pex_request = PexFromTargetsRequest(
453 addresses=[request.address],
454 internal_only=False,
455 include_requirements=request.include_requirements,
456 output_filename=repository_filename,
457 platforms=pex_platforms,
458 complete_platforms=complete_platforms,
459 layout=PexLayout.PACKED,
460 additional_args=additional_pex_args,
461 additional_lockfile_args=additional_pex_args,
462 additional_sources=additional_sources,
463 warn_for_transitive_files_targets=True,
464 )
465
466 pex_result = await Get(Pex, PexFromTargetsRequest, pex_request)
467
468 output_filename = request.output_path.value_or_default(file_ending="zip")
469
470 result = await Get(
471 PexVenv,
472 PexVenvRequest(
473 pex=pex_result,
474 layout=PexVenvLayout.FLAT_ZIPPED,
475 platforms=pex_platforms,
476 complete_platforms=complete_platforms,
477 output_path=Path(output_filename),
478 description=f"Build {request.target_name} artifact for {request.address}",
479 ),
480 )
481
482 if request.log_only_reexported_handler_func:
483 handler_text = reexported_handler_func
484 else:
485 handler_text = f"{request.reexported_handler_module}.{reexported_handler_func}"
486
487 artifact = BuiltPackageArtifact(
488 output_filename,
489 extra_log_lines=(f" Handler: {handler_text}",),
490 )
491 return BuiltPackage(digest=result.digest, artifacts=(artifact,))
492
493
494 def rules():
495 return (
496 *collect_rules(),
497 *import_rules(),
498 *pex_venv_rules(),
499 *pex_from_targets_rules(),
500 UnionRule(InferDependenciesRequest, InferPythonFaaSHandlerDependency),
501 )
502
[end of src/python/pants/backend/python/util_rules/faas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
88952f8549f6d2e8e091d65350083ac3b238f1b1
|
Unactionable(?) "indirectly referring to a target" warning for `pants package ::` with pex or FaaS
**Describe the bug**
Running a goal like `pants package ::` when there's `pex_binary`, `python_awslambda` or `python_google_cloud_function` targets (all of the users of `SecondaryOwnerMixin` fields) gives a warning about indirectly referring to a target:
```
20:51:16.61 [WARN] DEPRECATED: indirectly referring to a target by using a corresponding file argument, when the target owning the file isn't applicable is scheduled to be removed in version 2.18.0.dev0.
Refer to the following targets by their addresses:
* //:gcf
* //:lambda
* //:pex
```
As a user, I'm not sure what I can usefully change to resolve this. Replacing the `::` in the CLI invocation with the individual targets seems to work (`pants package :pex :lambda :gcf`), but:
- I don't think we actually want users to be doing this (as a user, I certainly don't want to list out every single target like this)
- even if we do want users to be doing this, I don't think it's clear from the error message: I'm _not_ using the corresponding file argument anywhere.
I'm assuming this is meant to be catching invocations like `pants package ./file.py` packaging all of those targets, but is getting confused by the use of the `::` glob?
Reproducer:
```shell
cd $(mktemp -d)
cat > pants.toml <<EOF
[GLOBAL]
pants_version = "2.17.0a1"
backend_packages = [
"pants.backend.python",
"pants.backend.awslambda.python",
"pants.backend.google_cloud_function.python",
]
[python]
interpreter_constraints = [">=3.8"]
[python-infer]
use_rust_parser = false
EOF
echo "def func(): pass" > file.py
cat > BUILD <<EOF
python_sources(name="py")
pex_binary(name="pex", entry_point="file.py")
python_awslambda(name="lambda", handler="file.py:func", runtime="python3.9")
python_google_cloud_function(name="gcf", handler="file.py:func", runtime="python39", type="event")
EOF
# BUG: prints `[WARN] DEPRECATED: indirectly referring to a target by using a corresponding file argument ...`
pants package ::
```
**Pants version**
2.17.0a1
**OS**
macOS
**Additional info**
#18737
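A condensed, runnable model of the check involved (stand-in names, not real Pants internals; the actual logic lives in `specs_rules.py` and is visible in the patch below). It shows why `::` trips the warning: a glob produces no literal addresses, so every target carrying a `SecondaryOwnerMixin` field looks like it was reached "indirectly":
```python
# Hypothetical stand-ins for Pants internals, only to illustrate the pre-fix behavior.
from dataclasses import dataclass


class SecondaryOwnerMixin:
    """Marker for fields like `pex_binary(entry_point=...)` or `handler=...`."""


@dataclass(frozen=True)
class EntryPointField(SecondaryOwnerMixin):
    value: str


@dataclass(frozen=True)
class Target:
    address: str
    fields: tuple


def secondary_targets_to_warn_about(targets, literal_addresses):
    # Old logic: warn for any secondary-owner target not named literally on the CLI.
    return sorted(
        tgt.address
        for tgt in targets
        if any(isinstance(field, SecondaryOwnerMixin) for field in tgt.fields)
        and tgt.address not in literal_addresses  # `::` yields no literal addresses
    )


targets = [Target("//:pex", (EntryPointField("file.py"),)), Target("//:py", ())]
print(secondary_targets_to_warn_about(targets, literal_addresses=set()))       # ['//:pex'] <- `::`
print(secondary_targets_to_warn_about(targets, literal_addresses={"//:pex"}))  # []  <- explicit address
```
Passing the addresses explicitly (`pants package :pex :lambda :gcf`) empties the warning list, which matches the workaround described above.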
|
2023-06-01T20:35:38Z
|
<patch>
diff --git a/src/python/pants/engine/internals/graph.py b/src/python/pants/engine/internals/graph.py
--- a/src/python/pants/engine/internals/graph.py
+++ b/src/python/pants/engine/internals/graph.py
@@ -11,7 +11,7 @@
import os.path
from dataclasses import dataclass
from pathlib import PurePath
-from typing import Any, Iterable, Iterator, NamedTuple, Sequence, Type, cast
+from typing import Any, Iterable, Iterator, NamedTuple, NewType, Sequence, Type, cast
from pants.base.deprecated import warn_or_error
from pants.base.specs import AncestorGlobSpec, RawSpecsWithoutFileOwners, RecursiveGlobSpec
@@ -909,7 +909,15 @@ class OwnersRequest:
match_if_owning_build_file_included_in_sources: bool = False
-class Owners(Collection[Address]):
+# NB: This was changed from:
+# class Owners(Collection[Address]):
+# pass
+# In https://github.com/pantsbuild/pants/pull/19191 to facilitate surgical warning of deprecation
+# of SecondaryOwnerMixin. After the Deprecation ends, it can be changed back.
+IsPrimary = NewType("IsPrimary", bool)
+
+
+class Owners(FrozenDict[Address, IsPrimary]):
pass
@@ -964,7 +972,7 @@ def create_live_and_deleted_gets(
)
live_candidate_tgts, deleted_candidate_tgts = await MultiGet(live_get, deleted_get)
- matching_addresses: OrderedSet[Address] = OrderedSet()
+ result = {}
unmatched_sources = set(owners_request.sources)
for live in (True, False):
candidate_tgts: Sequence[Target]
@@ -989,6 +997,8 @@ def create_live_and_deleted_gets(
matching_files = set(
candidate_tgt.get(SourcesField).filespec_matcher.matches(list(sources_set))
)
+ is_primary = bool(matching_files)
+
# Also consider secondary ownership, meaning it's not a `SourcesField` field with
# primary ownership, but the target still should match the file. We can't use
# `tgt.get()` because this is a mixin, and there technically may be >1 field.
@@ -999,8 +1009,9 @@ def create_live_and_deleted_gets(
)
for secondary_owner_field in secondary_owner_fields:
matching_files.update(
- *secondary_owner_field.filespec_matcher.matches(list(sources_set))
+ secondary_owner_field.filespec_matcher.matches(list(sources_set))
)
+
if not matching_files and not (
owners_request.match_if_owning_build_file_included_in_sources
and bfa.rel_path in sources_set
@@ -1008,7 +1019,7 @@ def create_live_and_deleted_gets(
continue
unmatched_sources -= matching_files
- matching_addresses.add(candidate_tgt.address)
+ result[candidate_tgt.address] = IsPrimary(is_primary)
if (
unmatched_sources
@@ -1018,7 +1029,7 @@ def create_live_and_deleted_gets(
[PurePath(path) for path in unmatched_sources], owners_request.owners_not_found_behavior
)
- return Owners(matching_addresses)
+ return Owners(result)
# -----------------------------------------------------------------------------------------------
diff --git a/src/python/pants/engine/internals/specs_rules.py b/src/python/pants/engine/internals/specs_rules.py
--- a/src/python/pants/engine/internals/specs_rules.py
+++ b/src/python/pants/engine/internals/specs_rules.py
@@ -3,6 +3,7 @@
from __future__ import annotations
+import collections.abc
import dataclasses
import itertools
import logging
@@ -46,7 +47,6 @@
FilteredTargets,
NoApplicableTargetsBehavior,
RegisteredTargetTypes,
- SecondaryOwnerMixin,
SourcesField,
SourcesPaths,
SourcesPathsRequest,
@@ -227,7 +227,7 @@ def valid_tgt(
@rule(_masked_types=[EnvironmentName])
async def addresses_from_raw_specs_with_only_file_owners(
specs: RawSpecsWithOnlyFileOwners,
-) -> Addresses:
+) -> Owners:
"""Find the owner(s) for each spec."""
paths_per_include = await MultiGet(
Get(Paths, PathGlobs, specs.path_globs_for_spec(spec)) for spec in specs.all_specs()
@@ -242,6 +242,11 @@ async def addresses_from_raw_specs_with_only_file_owners(
match_if_owning_build_file_included_in_sources=False,
),
)
+ return owners
+
+
+@rule(_masked_types=[EnvironmentName])
+async def addresses_from_owners(owners: Owners) -> Addresses:
return Addresses(sorted(owners))
@@ -482,6 +487,48 @@ def __init__(
)
+# NB: Remove when SecondaryOwnerMixin is removed
+def _maybe_warn_deprecated_secondary_owner_semantics(
+ addresses_from_nonfile_specs: Addresses,
+ owners_from_filespecs: Owners,
+ matched_addresses: collections.abc.Set[Address],
+):
+ """Warn about deprecated semantics of implicitly referring to a target through "Secondary
+ Ownership".
+
+ E.g. If there's a `pex_binary` whose entry point is `foo.py`, using `foo.py` as a spec to mean
+ "package the pex binary" is deprecated, and the caller should specify the binary directly.
+
+ This shouldn't warn if both the primary and secondary owner are in the specs (which is common
+ with specs like `::` or `dir:`).
+ """
+ problematic_target_specs = {
+ address.spec
+ for address in matched_addresses
+ if address in owners_from_filespecs
+ and not owners_from_filespecs[address]
+ and address not in addresses_from_nonfile_specs
+ }
+
+ if problematic_target_specs:
+ warn_or_error(
+ removal_version="2.18.0.dev1",
+ entity=softwrap(
+ """
+ indirectly referring to a target by using a corresponding file argument, when the
+ target owning the file isn't applicable
+ """
+ ),
+ hint=softwrap(
+ f"""
+ Refer to the following targets by their addresses:
+
+ {bullet_list(sorted(problematic_target_specs))}
+ """
+ ),
+ )
+
+
@rule
async def find_valid_field_sets_for_target_roots(
request: TargetRootsToFieldSetsRequest,
@@ -530,33 +577,25 @@ async def find_valid_field_sets_for_target_roots(
):
logger.warning(str(no_applicable_exception))
- secondary_owner_targets = set()
- specified_literal_addresses = {
- address_literal.to_address() for address_literal in specs.includes.address_literals
- }
- for tgt, field_sets in targets_to_applicable_field_sets.items():
- is_secondary = any(
- isinstance(field, SecondaryOwnerMixin) for field in tgt.field_values.values()
- )
- is_explicitly_specified = tgt.address in specified_literal_addresses
- if is_secondary and not is_explicitly_specified:
- secondary_owner_targets.add(tgt)
- if secondary_owner_targets:
- warn_or_error(
- removal_version="2.18.0.dev0",
- entity=softwrap(
- """
- indirectly referring to a target by using a corresponding file argument, when the
- target owning the file isn't applicable
- """
- ),
- hint=softwrap(
- f"""
- Refer to the following targets by their addresses:
-
- {bullet_list(sorted(tgt.address.spec for tgt in secondary_owner_targets))}
- """
+ # NB: Remove when SecondaryOwnerMixin is removed
+ if targets_to_applicable_field_sets:
+ _maybe_warn_deprecated_secondary_owner_semantics(
+ # NB: All of these should be memoized, so it's not inappropriate to request simply for warning sake.
+ *(
+ await MultiGet(
+ Get(
+ Addresses,
+ RawSpecsWithoutFileOwners,
+ RawSpecsWithoutFileOwners.from_raw_specs(specs.includes),
+ ),
+ Get(
+ Owners,
+ RawSpecsWithOnlyFileOwners,
+ RawSpecsWithOnlyFileOwners.from_raw_specs(specs.includes),
+ ),
+ )
),
+ {tgt.address for tgt in targets_to_applicable_field_sets},
)
if request.num_shards > 0:
</patch>
|
[]
|
[]
| ||||
ray-project__ray-9297
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tune] Parameters from `tune.choice()` do not get logged to TensorBoard when integers
### What is the problem?
When providing parameters via `tune.choice()` that include integers, the values are not logged to TensorBoard's HPARAMS section.
The issue is that `numpy.random.choice([1, 2, 3])` (for example) returns `numpy.int32`/`numpy.int64` and those types are not included in the `VALID_HPARAMS = (str, bool, int, float, list)` tuple (python/ray/tune/logger.py).
Since TensorBoard has no issues with logging `numpy.int32/64`, one simple solution would be to just include those types in the tuple above. Happy to provide a PR if you think this is the way to go.
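For illustration, a minimal self-contained sketch of that direction (illustrative only; the exact change in Ray may differ, e.g. it could instead convert values with `.item()` before logging):
```python
import numpy as np

# Hypothetical extension of the tuple quoted above; np.integer/np.floating are the
# abstract NumPy scalar bases, so they cover np.int32, np.int64, np.float32, etc.
VALID_HPARAMS = (str, bool, int, float, list, np.integer, np.floating)

value = np.random.choice([1, 2, 3])       # returns a NumPy scalar, e.g. numpy.int64
print(isinstance(value, VALID_HPARAMS))   # True with the extended tuple
print(type(value.item()))                 # <class 'int'>; .item() is an alternative fix
```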
*Ray version and other system information (Python version, TensorFlow version, OS):*
ray: 0.8.6
python: 3.7.7
tensorboard: 2.2.2
ubuntu: 20.04
### Reproduction (REQUIRED)
```python
from ray import tune
def trainable(config):
tune.report(score=config["a"])
config_dict = {"a": tune.choice([1, 2, 3])}
tune.run(trainable, config=config_dict, num_samples=1)
```
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
</issue>
<code>
[start of README.rst]
1 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png
2
3 .. image:: https://travis-ci.com/ray-project/ray.svg?branch=master
4 :target: https://travis-ci.com/ray-project/ray
5
6 .. image:: https://readthedocs.org/projects/ray/badge/?version=latest
7 :target: http://docs.ray.io/en/latest/?badge=latest
8
9 |
10
11
12 **Ray is a fast and simple framework for building and running distributed applications.**
13
14 Ray is packaged with the following libraries for accelerating machine learning workloads:
15
16 - `Tune`_: Scalable Hyperparameter Tuning
17 - `RLlib`_: Scalable Reinforcement Learning
18 - `RaySGD <https://docs.ray.io/en/latest/raysgd/raysgd.html>`__: Distributed Training Wrappers
19
20 Install Ray with: ``pip install ray``. For nightly wheels, see the
21 `Installation page <https://docs.ray.io/en/latest/installation.html>`__.
22
23 **NOTE:** As of Ray 0.8.1, Python 2 is no longer supported.
24
25 Quick Start
26 -----------
27
28 Execute Python functions in parallel.
29
30 .. code-block:: python
31
32 import ray
33 ray.init()
34
35 @ray.remote
36 def f(x):
37 return x * x
38
39 futures = [f.remote(i) for i in range(4)]
40 print(ray.get(futures))
41
42 To use Ray's actor model:
43
44 .. code-block:: python
45
46
47 import ray
48 ray.init()
49
50 @ray.remote
51 class Counter(object):
52 def __init__(self):
53 self.n = 0
54
55 def increment(self):
56 self.n += 1
57
58 def read(self):
59 return self.n
60
61 counters = [Counter.remote() for i in range(4)]
62 [c.increment.remote() for c in counters]
63 futures = [c.read.remote() for c in counters]
64 print(ray.get(futures))
65
66
67 Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download `this configuration file <https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-full.yaml>`__, and run:
68
69 ``ray submit [CLUSTER.YAML] example.py --start``
70
71 Read more about `launching clusters <https://docs.ray.io/en/latest/autoscaling.html>`_.
72
73 Tune Quick Start
74 ----------------
75
76 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png
77
78 `Tune`_ is a library for hyperparameter tuning at any scale.
79
80 - Launch a multi-node distributed hyperparameter sweep in less than 10 lines of code.
81 - Supports any deep learning framework, including PyTorch, TensorFlow, and Keras.
82 - Visualize results with `TensorBoard <https://www.tensorflow.org/get_started/summaries_and_tensorboard>`__.
83 - Choose among scalable SOTA algorithms such as `Population Based Training (PBT)`_, `Vizier's Median Stopping Rule`_, `HyperBand/ASHA`_.
84 - Tune integrates with many optimization libraries such as `Facebook Ax <http://ax.dev>`_, `HyperOpt <https://github.com/hyperopt/hyperopt>`_, and `Bayesian Optimization <https://github.com/fmfn/BayesianOptimization>`_ and enables you to scale them transparently.
85
86 To run this example, you will need to install the following:
87
88 .. code-block:: bash
89
90 $ pip install ray[tune] torch torchvision filelock
91
92
93 This example runs a parallel grid search to train a Convolutional Neural Network using PyTorch.
94
95 .. code-block:: python
96
97
98 import torch.optim as optim
99 from ray import tune
100 from ray.tune.examples.mnist_pytorch import (
101 get_data_loaders, ConvNet, train, test)
102
103
104 def train_mnist(config):
105 train_loader, test_loader = get_data_loaders()
106 model = ConvNet()
107 optimizer = optim.SGD(model.parameters(), lr=config["lr"])
108 for i in range(10):
109 train(model, optimizer, train_loader)
110 acc = test(model, test_loader)
111 tune.track.log(mean_accuracy=acc)
112
113
114 analysis = tune.run(
115 train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})
116
117 print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))
118
119 # Get a dataframe for analyzing trial results.
120 df = analysis.dataframe()
121
122 If TensorBoard is installed, automatically visualize all trial results:
123
124 .. code-block:: bash
125
126 tensorboard --logdir ~/ray_results
127
128 .. _`Tune`: https://docs.ray.io/en/latest/tune.html
129 .. _`Population Based Training (PBT)`: https://docs.ray.io/en/latest/tune-schedulers.html#population-based-training-pbt
130 .. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/latest/tune-schedulers.html#median-stopping-rule
131 .. _`HyperBand/ASHA`: https://docs.ray.io/en/latest/tune-schedulers.html#asynchronous-hyperband
132
133 RLlib Quick Start
134 -----------------
135
136 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/rllib-wide.jpg
137
138 `RLlib`_ is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.
139
140 .. code-block:: bash
141
142 pip install tensorflow # or tensorflow-gpu
143 pip install ray[rllib] # also recommended: ray[debug]
144
145 .. code-block:: python
146
147 import gym
148 from gym.spaces import Discrete, Box
149 from ray import tune
150
151 class SimpleCorridor(gym.Env):
152 def __init__(self, config):
153 self.end_pos = config["corridor_length"]
154 self.cur_pos = 0
155 self.action_space = Discrete(2)
156 self.observation_space = Box(0.0, self.end_pos, shape=(1, ))
157
158 def reset(self):
159 self.cur_pos = 0
160 return [self.cur_pos]
161
162 def step(self, action):
163 if action == 0 and self.cur_pos > 0:
164 self.cur_pos -= 1
165 elif action == 1:
166 self.cur_pos += 1
167 done = self.cur_pos >= self.end_pos
168 return [self.cur_pos], 1 if done else 0, done, {}
169
170 tune.run(
171 "PPO",
172 config={
173 "env": SimpleCorridor,
174 "num_workers": 4,
175 "env_config": {"corridor_length": 5}})
176
177 .. _`RLlib`: https://docs.ray.io/en/latest/rllib.html
178
179
180 More Information
181 ----------------
182
183 - `Documentation`_
184 - `Tutorial`_
185 - `Blog`_
186 - `Ray paper`_
187 - `Ray HotOS paper`_
188 - `RLlib paper`_
189 - `Tune paper`_
190
191 .. _`Documentation`: http://docs.ray.io/en/latest/index.html
192 .. _`Tutorial`: https://github.com/ray-project/tutorial
193 .. _`Blog`: https://ray-project.github.io/
194 .. _`Ray paper`: https://arxiv.org/abs/1712.05889
195 .. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
196 .. _`RLlib paper`: https://arxiv.org/abs/1712.09381
197 .. _`Tune paper`: https://arxiv.org/abs/1807.05118
198
199 Getting Involved
200 ----------------
201
202 - `[email protected]`_: For discussions about development or any general
203 questions.
204 - `StackOverflow`_: For questions about how to use Ray.
205 - `GitHub Issues`_: For reporting bugs and feature requests.
206 - `Pull Requests`_: For submitting code contributions.
207 - `Meetup Group`_: Join our meetup group.
208 - `Community Slack`_: Join our Slack workspace.
209 - `Twitter`_: Follow updates on Twitter.
210
211 .. _`[email protected]`: https://groups.google.com/forum/#!forum/ray-dev
212 .. _`GitHub Issues`: https://github.com/ray-project/ray/issues
213 .. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
214 .. _`Pull Requests`: https://github.com/ray-project/ray/pulls
215 .. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
216 .. _`Community Slack`: https://forms.gle/9TSdDYUgxYs8SA9e8
217 .. _`Twitter`: https://twitter.com/raydistributed
218
[end of README.rst]
[start of doc/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Ray documentation build configuration file, created by
4 # sphinx-quickstart on Fri Jul 1 13:19:58 2016.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import glob
16 import shutil
17 import sys
18 import os
19 import urllib
20 sys.path.insert(0, os.path.abspath('.'))
21 from custom_directives import CustomGalleryItemDirective
22
23 # These lines added to enable Sphinx to work without installing Ray.
24 import mock
25 MOCK_MODULES = [
26 "blist", "gym", "gym.spaces", "psutil", "ray._raylet",
27 "ray.core.generated", "ray.core.generated.gcs_pb2",
28 "ray.core.generated.ray.protocol.Task", "scipy", "scipy.signal",
29 "scipy.stats", "setproctitle", "tensorflow_probability", "tensorflow",
30 "tensorflow.contrib", "tensorflow.contrib.all_reduce", "tree",
31 "tensorflow.contrib.all_reduce.python", "tensorflow.contrib.layers",
32 "tensorflow.contrib.rnn", "tensorflow.contrib.slim", "tensorflow.core",
33 "tensorflow.core.util", "tensorflow.python", "tensorflow.python.client",
34 "tensorflow.python.util", "torch", "torch.distributed", "torch.nn",
35 "torch.nn.parallel", "torch.utils.data", "torch.utils.data.distributed",
36 "zoopt"
37 ]
38 for mod_name in MOCK_MODULES:
39 sys.modules[mod_name] = mock.Mock()
40 # ray.rllib.models.action_dist.py and
41 # ray.rllib.models.lstm.py will use tf.VERSION
42 sys.modules["tensorflow"].VERSION = "9.9.9"
43
44 # If extensions (or modules to document with autodoc) are in another directory,
45 # add these directories to sys.path here. If the directory is relative to the
46 # documentation root, use os.path.abspath to make it absolute, like shown here.
47 sys.path.insert(0, os.path.abspath("../../python/"))
48
49 import ray
50
51 # -- General configuration ------------------------------------------------
52
53 # If your documentation needs a minimal Sphinx version, state it here.
54 #needs_sphinx = '1.0'
55
56 # Add any Sphinx extension module names here, as strings. They can be
57 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
58 # ones.
59 extensions = [
60 'sphinx.ext.autodoc',
61 'sphinx.ext.viewcode',
62 'sphinx.ext.napoleon',
63 'sphinx_click.ext',
64 'sphinx-jsonschema',
65 'sphinx_gallery.gen_gallery',
66 'sphinx_copybutton',
67 'versionwarning.extension',
68 ]
69
70 versionwarning_messages = {
71 "master": (
72 "This document is for the master branch. "
73 'Visit the <a href="/en/latest/">latest pip release documentation here</a>.'
74 ),
75 "latest": (
76 "This document is for the latest pip release. "
77 'Visit the <a href="/en/master/">master branch documentation here</a>.'
78 ),
79 }
80
81 versionwarning_body_selector = "div.document"
82 sphinx_gallery_conf = {
83 "examples_dirs": ["../examples", "tune/_tutorials"], # path to example scripts
84 # path where to save generated examples
85 "gallery_dirs": ["auto_examples", "tune/tutorials"],
86 "ignore_pattern": "../examples/doc_code/",
87 "plot_gallery": "False",
88 # "filename_pattern": "tutorial.py",
89 # "backreferences_dir": "False",
90 # "show_memory': False,
91 # 'min_reported_time': False
92 }
93
94 for i in range(len(sphinx_gallery_conf["examples_dirs"])):
95 gallery_dir = sphinx_gallery_conf["gallery_dirs"][i]
96 source_dir = sphinx_gallery_conf["examples_dirs"][i]
97 try:
98 os.mkdir(gallery_dir)
99 except OSError:
100 pass
101
102 # Copy rst files from source dir to gallery dir.
103 for f in glob.glob(os.path.join(source_dir, '*.rst')):
104 shutil.copy(f, gallery_dir)
105
106 # Add any paths that contain templates here, relative to this directory.
107 templates_path = ['_templates']
108
109 # The suffix of source filenames.
110 from recommonmark.parser import CommonMarkParser
111
112 # The suffix of source filenames.
113 source_suffix = ['.rst', '.md']
114
115 source_parsers = {
116 '.md': CommonMarkParser,
117 }
118
119 # The encoding of source files.
120 #source_encoding = 'utf-8-sig'
121
122 # The master toctree document.
123 master_doc = 'index'
124
125 # General information about the project.
126 project = u'Ray'
127 copyright = u'2019, The Ray Team'
128 author = u'The Ray Team'
129
130 # The version info for the project you're documenting, acts as replacement for
131 # |version| and |release|, also used in various other places throughout the
132 # built documents.
133 #
134 # The short X.Y version.
135 from ray import __version__ as version
136 # The full version, including alpha/beta/rc tags.
137 release = version
138
139 # The language for content autogenerated by Sphinx. Refer to documentation
140 # for a list of supported languages.
141 #
142 # This is also used if you do content translation via gettext catalogs.
143 # Usually you set "language" from the command line for these cases.
144 language = None
145
146 # There are two options for replacing |today|: either, you set today to some
147 # non-false value, then it is used:
148 #today = ''
149 # Else, today_fmt is used as the format for a strftime call.
150 #today_fmt = '%B %d, %Y'
151
152 # List of patterns, relative to source directory, that match files and
153 # directories to ignore when looking for source files.
154 exclude_patterns = ['_build']
155 exclude_patterns += sphinx_gallery_conf['examples_dirs']
156
157 # The reST default role (used for this markup: `text`) to use for all
158 # documents.
159 #default_role = None
160
161 # If true, '()' will be appended to :func: etc. cross-reference text.
162 #add_function_parentheses = True
163
164 # If true, the current module name will be prepended to all description
165 # unit titles (such as .. function::).
166 #add_module_names = True
167
168 # If true, sectionauthor and moduleauthor directives will be shown in the
169 # output. They are ignored by default.
170 #show_authors = False
171
172 # The name of the Pygments (syntax highlighting) style to use.
173 pygments_style = 'sphinx'
174
175 # A list of ignored prefixes for module index sorting.
176 #modindex_common_prefix = []
177
178 # If true, keep warnings as "system message" paragraphs in the built documents.
179 #keep_warnings = False
180
181 # If true, `todo` and `todoList` produce output, else they produce nothing.
182 todo_include_todos = False
183
184 # -- Options for HTML output ----------------------------------------------
185
186 # The theme to use for HTML and HTML Help pages. See the documentation for
187 # a list of builtin themes.
188 import sphinx_rtd_theme
189 html_theme = 'sphinx_rtd_theme'
190 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
191
192 # Theme options are theme-specific and customize the look and feel of a theme
193 # further. For a list of options available for each theme, see the
194 # documentation.
195 #html_theme_options = {}
196
197 # Add any paths that contain custom themes here, relative to this directory.
198 #html_theme_path = []
199
200 # The name for this set of Sphinx documents. If None, it defaults to
201 # "<project> v<release> documentation".
202 #html_title = None
203
204 # A shorter title for the navigation bar. Default is the same as html_title.
205 #html_short_title = None
206
207 # The name of an image file (relative to this directory) to place at the top
208 # of the sidebar.
209 #html_logo = None
210
211 # The name of an image file (within the static path) to use as favicon of the
212 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
213 # pixels large.
214 #html_favicon = None
215
216 # Add any paths that contain custom static files (such as style sheets) here,
217 # relative to this directory. They are copied after the builtin static files,
218 # so a file named "default.css" will overwrite the builtin "default.css".
219 html_static_path = ['_static']
220
221 # Add any extra paths that contain custom files (such as robots.txt or
222 # .htaccess) here, relative to this directory. These files are copied
223 # directly to the root of the documentation.
224 #html_extra_path = []
225
226 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
227 # using the given strftime format.
228 #html_last_updated_fmt = '%b %d, %Y'
229
230 # If true, SmartyPants will be used to convert quotes and dashes to
231 # typographically correct entities.
232 #html_use_smartypants = True
233
234 # Custom sidebar templates, maps document names to template names.
235 html_sidebars = {'**': ['index.html']}
236
237 # Additional templates that should be rendered to pages, maps page names to
238 # template names.
239 #html_additional_pages = {}
240
241 # If false, no module index is generated.
242 #html_domain_indices = True
243
244 # If false, no index is generated.
245 #html_use_index = True
246
247 # If true, the index is split into individual pages for each letter.
248 #html_split_index = False
249
250 # If true, links to the reST sources are added to the pages.
251 #html_show_sourcelink = True
252
253 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
254 #html_show_sphinx = True
255
256 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
257 #html_show_copyright = True
258
259 # If true, an OpenSearch description file will be output, and all pages will
260 # contain a <link> tag referring to it. The value of this option must be the
261 # base URL from which the finished HTML is served.
262 #html_use_opensearch = ''
263
264 # This is the file name suffix for HTML files (e.g. ".xhtml").
265 #html_file_suffix = None
266
267 # Language to be used for generating the HTML full-text search index.
268 # Sphinx supports the following languages:
269 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
270 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
271 #html_search_language = 'en'
272
273 # A dictionary with options for the search language support, empty by default.
274 # Now only 'ja' uses this config value
275 #html_search_options = {'type': 'default'}
276
277 # The name of a javascript file (relative to the configuration directory) that
278 # implements a search results scorer. If empty, the default will be used.
279 #html_search_scorer = 'scorer.js'
280
281 # Output file base name for HTML help builder.
282 htmlhelp_basename = 'Raydoc'
283
284 # -- Options for LaTeX output ---------------------------------------------
285
286 latex_elements = {
287 # The paper size ('letterpaper' or 'a4paper').
288 #'papersize': 'letterpaper',
289
290 # The font size ('10pt', '11pt' or '12pt').
291 #'pointsize': '10pt',
292
293 # Additional stuff for the LaTeX preamble.
294 #'preamble': '',
295
296 # Latex figure (float) alignment
297 #'figure_align': 'htbp',
298 }
299
300 # Grouping the document tree into LaTeX files. List of tuples
301 # (source start file, target name, title,
302 # author, documentclass [howto, manual, or own class]).
303 latex_documents = [
304 (master_doc, 'Ray.tex', u'Ray Documentation', u'The Ray Team', 'manual'),
305 ]
306
307 # The name of an image file (relative to this directory) to place at the top of
308 # the title page.
309 #latex_logo = None
310
311 # For "manual" documents, if this is true, then toplevel headings are parts,
312 # not chapters.
313 #latex_use_parts = False
314
315 # If true, show page references after internal links.
316 #latex_show_pagerefs = False
317
318 # If true, show URL addresses after external links.
319 #latex_show_urls = False
320
321 # Documents to append as an appendix to all manuals.
322 #latex_appendices = []
323
324 # If false, no module index is generated.
325 #latex_domain_indices = True
326
327 # -- Options for manual page output ---------------------------------------
328
329 # One entry per manual page. List of tuples
330 # (source start file, name, description, authors, manual section).
331 man_pages = [(master_doc, 'ray', u'Ray Documentation', [author], 1)]
332
333 # If true, show URL addresses after external links.
334 #man_show_urls = False
335
336 # -- Options for Texinfo output -------------------------------------------
337
338 # Grouping the document tree into Texinfo files. List of tuples
339 # (source start file, target name, title, author,
340 # dir menu entry, description, category)
341 texinfo_documents = [
342 (master_doc, 'Ray', u'Ray Documentation', author, 'Ray',
343 'One line description of project.', 'Miscellaneous'),
344 ]
345
346 # Documents to append as an appendix to all manuals.
347 #texinfo_appendices = []
348
349 # If false, no module index is generated.
350 #texinfo_domain_indices = True
351
352 # How to display URL addresses: 'footnote', 'no', or 'inline'.
353 #texinfo_show_urls = 'footnote'
354
355 # If true, do not generate a @detailmenu in the "Top" node's menu.
356 #texinfo_no_detailmenu = False
357
358 # pcmoritz: To make the following work, you have to run
359 # sudo pip install recommonmark
360
361 # Python methods should be presented in source code order
362 autodoc_member_order = 'bysource'
363
364 # Taken from https://github.com/edx/edx-documentation
365 FEEDBACK_FORM_FMT = "https://github.com/ray-project/ray/issues/new?title={title}&labels=docs&body={body}"
366
367
368 def feedback_form_url(project, page):
369 """Create a URL for feedback on a particular page in a project."""
370 return FEEDBACK_FORM_FMT.format(
371 title=urllib.parse.quote(
372 "[docs] Issue on `{page}.rst`".format(page=page)),
373 body=urllib.parse.quote(
374 "# Documentation Problem/Question/Comment\n"
375 "<!-- Describe your issue/question/comment below. -->\n"
376 "<!-- If there are typos or errors in the docs, feel free to create a pull-request. -->\n"
377 "\n\n\n\n"
378 "(Created directly from the docs)\n"))
379
380
381 def update_context(app, pagename, templatename, context, doctree):
382 """Update the page rendering context to include ``feedback_form_url``."""
383 context['feedback_form_url'] = feedback_form_url(app.config.project,
384 pagename)
385
386
387 # see also http://searchvoidstar.tumblr.com/post/125486358368/making-pdfs-from-markdown-on-readthedocsorg-using
388
389
390 def setup(app):
391 app.connect('html-page-context', update_context)
392 app.add_stylesheet('css/custom.css')
393 # Custom directives
394 app.add_directive('customgalleryitem', CustomGalleryItemDirective)
395
[end of doc/source/conf.py]
[start of python/ray/setup-dev.py]
1 #!/usr/bin/env python
2 """This script allows you to develop RLlib without needing to compile Ray."""
3
4 import argparse
5 import click
6 import os
7 import shutil
8 import subprocess
9
10 import ray
11
12
13 def do_link(package, force=False, local_path=""):
14 package_home = os.path.abspath(
15 os.path.join(ray.__file__, "../{}".format(package)))
16 local_home = os.path.abspath(
17 os.path.join(__file__, local_path + "../{}".format(package)))
18 if not os.path.isdir(package_home):
19 print("{} does not exist. Continuing to link.".format(package_home))
20 assert os.path.isdir(local_home), local_home
21 if not force and not click.confirm(
22 "This will replace:\n {}\nwith a symlink to:\n {}".format(
23 package_home, local_home),
24 default=True):
25 return
26 # Windows: Create directory junction.
27 if os.name == "nt":
28 try:
29 shutil.rmtree(package_home)
30 except FileNotFoundError:
31 pass
32 except OSError:
33 os.remove(package_home)
34 subprocess.check_call(
35 ["mklink", "/J", package_home, local_home], shell=True)
36 # Posix: Use `ln -s` to create softlink.
37 else:
38 sudo = []
39 if not os.access(os.path.dirname(package_home), os.W_OK):
40 print("You don't have write permission to {}, using sudo:".format(
41 package_home))
42 sudo = ["sudo"]
43 subprocess.check_call(sudo + ["rm", "-rf", package_home])
44 subprocess.check_call(sudo + ["ln", "-s", local_home, package_home])
45
46
47 if __name__ == "__main__":
48 parser = argparse.ArgumentParser(
49 formatter_class=argparse.RawDescriptionHelpFormatter,
50 description="Setup dev.")
51 parser.add_argument(
52 "--yes", action="store_true", help="Don't ask for confirmation.")
53 args = parser.parse_args()
54
55 do_link("rllib", force=args.yes, local_path="../../")
56 do_link("tune", force=args.yes)
57 do_link("autoscaler", force=args.yes)
58 do_link("scripts", force=args.yes)
59 do_link("internal", force=args.yes)
60 do_link("tests", force=args.yes)
61 do_link("experimental", force=args.yes)
62 do_link("util", force=args.yes)
63 do_link("dashboard", force=args.yes)
64 print("Created links.\n\nIf you run into issues initializing Ray, please "
65 "ensure that your local repo and the installed Ray are in sync "
66 "(pip install -U the latest wheels at "
67 "https://docs.ray.io/en/latest/installation.html, "
68 "and ensure you are up-to-date on the master branch on git).\n\n"
69 "Note that you may need to delete the package symlinks when pip "
70 "installing new Ray versions to prevent pip from overwriting files "
71 "in your git repo.")
72
[end of python/ray/setup-dev.py]
[start of python/ray/tune/logger.py]
1 import csv
2 import json
3 import logging
4 import os
5 import yaml
6 import numbers
7 import numpy as np
8
9 import ray.cloudpickle as cloudpickle
10 from ray.util.debug import log_once
11 from ray.tune.result import (NODE_IP, TRAINING_ITERATION, TIME_TOTAL_S,
12 TIMESTEPS_TOTAL, EXPR_PARAM_FILE,
13 EXPR_PARAM_PICKLE_FILE, EXPR_PROGRESS_FILE,
14 EXPR_RESULT_FILE)
15 from ray.tune.syncer import get_node_syncer
16 from ray.tune.utils import flatten_dict
17
18 logger = logging.getLogger(__name__)
19
20 tf = None
21 VALID_SUMMARY_TYPES = [int, float, np.float32, np.float64, np.int32, np.int64]
22
23
24 class Logger:
25 """Logging interface for ray.tune.
26
27 By default, the UnifiedLogger implementation is used which logs results in
28 multiple formats (TensorBoard, rllab/viskit, plain json, custom loggers)
29 at once.
30
31 Arguments:
32 config: Configuration passed to all logger creators.
33 logdir: Directory for all logger creators to log to.
34 trial (Trial): Trial object for the logger to access.
35 """
36
37 def __init__(self, config, logdir, trial=None):
38 self.config = config
39 self.logdir = logdir
40 self.trial = trial
41 self._init()
42
43 def _init(self):
44 pass
45
46 def on_result(self, result):
47 """Given a result, appends it to the existing log."""
48
49 raise NotImplementedError
50
51 def update_config(self, config):
52 """Updates the config for logger."""
53
54 pass
55
56 def close(self):
57 """Releases all resources used by this logger."""
58
59 pass
60
61 def flush(self):
62 """Flushes all disk writes to storage."""
63
64 pass
65
66
67 class NoopLogger(Logger):
68 def on_result(self, result):
69 pass
70
71
72 class MLFLowLogger(Logger):
73 """MLFlow logger.
74
75 Requires the experiment configuration to have a MLFlow Experiment ID
76 or manually set the proper environment variables.
77
78 """
79
80 def _init(self):
81 from mlflow.tracking import MlflowClient
82 client = MlflowClient()
83 run = client.create_run(self.config.get("mlflow_experiment_id"))
84 self._run_id = run.info.run_id
85 for key, value in self.config.items():
86 client.log_param(self._run_id, key, value)
87 self.client = client
88
89 def on_result(self, result):
90 for key, value in result.items():
91 if not isinstance(value, float):
92 continue
93 self.client.log_metric(
94 self._run_id, key, value, step=result.get(TRAINING_ITERATION))
95
96 def close(self):
97 self.client.set_terminated(self._run_id)
98
99
100 class JsonLogger(Logger):
101 """Logs trial results in json format.
102
103 Also writes to a results file and param.json file when results or
104 configurations are updated. Experiments must be executed with the
105 JsonLogger to be compatible with the ExperimentAnalysis tool.
106 """
107
108 def _init(self):
109 self.update_config(self.config)
110 local_file = os.path.join(self.logdir, EXPR_RESULT_FILE)
111 self.local_out = open(local_file, "a")
112
113 def on_result(self, result):
114 json.dump(result, self, cls=_SafeFallbackEncoder)
115 self.write("\n")
116 self.local_out.flush()
117
118 def write(self, b):
119 self.local_out.write(b)
120
121 def flush(self):
122 self.local_out.flush()
123
124 def close(self):
125 self.local_out.close()
126
127 def update_config(self, config):
128 self.config = config
129 config_out = os.path.join(self.logdir, EXPR_PARAM_FILE)
130 with open(config_out, "w") as f:
131 json.dump(
132 self.config,
133 f,
134 indent=2,
135 sort_keys=True,
136 cls=_SafeFallbackEncoder)
137 config_pkl = os.path.join(self.logdir, EXPR_PARAM_PICKLE_FILE)
138 with open(config_pkl, "wb") as f:
139 cloudpickle.dump(self.config, f)
140
141
142 class CSVLogger(Logger):
143 """Logs results to progress.csv under the trial directory.
144
145 Automatically flattens nested dicts in the result dict before writing
146 to csv:
147
148 {"a": {"b": 1, "c": 2}} -> {"a/b": 1, "a/c": 2}
149
150 """
151
152 def _init(self):
153 """CSV outputted with Headers as first set of results."""
154 progress_file = os.path.join(self.logdir, EXPR_PROGRESS_FILE)
155 self._continuing = os.path.exists(progress_file)
156 self._file = open(progress_file, "a")
157 self._csv_out = None
158
159 def on_result(self, result):
160 tmp = result.copy()
161 if "config" in tmp:
162 del tmp["config"]
163 result = flatten_dict(tmp, delimiter="/")
164 if self._csv_out is None:
165 self._csv_out = csv.DictWriter(self._file, result.keys())
166 if not self._continuing:
167 self._csv_out.writeheader()
168 self._csv_out.writerow(
169 {k: v
170 for k, v in result.items() if k in self._csv_out.fieldnames})
171 self._file.flush()
172
173 def flush(self):
174 self._file.flush()
175
176 def close(self):
177 self._file.close()
178
179
180 class TBXLogger(Logger):
181 """TensorBoardX Logger.
182
183 Note that hparams will be written only after a trial has terminated.
184 This logger automatically flattens nested dicts to show on TensorBoard:
185
186 {"a": {"b": 1, "c": 2}} -> {"a/b": 1, "a/c": 2}
187 """
188
189 # NoneType is not supported on the last TBX release yet.
190 VALID_HPARAMS = (str, bool, int, float, list)
191
192 def _init(self):
193 try:
194 from tensorboardX import SummaryWriter
195 except ImportError:
196 logger.error("pip install 'ray[tune]' to see TensorBoard files.")
197 raise
198 self._file_writer = SummaryWriter(self.logdir, flush_secs=30)
199 self.last_result = None
200
201 def on_result(self, result):
202 step = result.get(TIMESTEPS_TOTAL) or result[TRAINING_ITERATION]
203
204 tmp = result.copy()
205 for k in [
206 "config", "pid", "timestamp", TIME_TOTAL_S, TRAINING_ITERATION
207 ]:
208 if k in tmp:
209 del tmp[k] # not useful to log these
210
211 flat_result = flatten_dict(tmp, delimiter="/")
212 path = ["ray", "tune"]
213 valid_result = {}
214
215 for attr, value in flat_result.items():
216 full_attr = "/".join(path + [attr])
217 if type(value) in VALID_SUMMARY_TYPES and not np.isnan(value):
218 valid_result[full_attr] = value
219 self._file_writer.add_scalar(
220 full_attr, value, global_step=step)
221 elif (type(value) == list
222 and len(value) > 0) or (type(value) == np.ndarray
223 and value.size > 0):
224 valid_result[full_attr] = value
225 try:
226 self._file_writer.add_histogram(
227 full_attr, value, global_step=step)
228 # In case TensorboardX still doesn't think it's a valid value
229 # (e.g. `[[]]`), warn and move on.
230 except (ValueError, TypeError):
231 if log_once("invalid_tbx_value"):
232 logger.warning(
233 "You are trying to log an invalid value ({}={}) "
234 "via {}!".format(full_attr, value,
235 type(self).__name__))
236
237 self.last_result = valid_result
238 self._file_writer.flush()
239
240 def flush(self):
241 if self._file_writer is not None:
242 self._file_writer.flush()
243
244 def close(self):
245 if self._file_writer is not None:
246 if self.trial and self.trial.evaluated_params and self.last_result:
247 flat_result = flatten_dict(self.last_result, delimiter="/")
248 scrubbed_result = {
249 k: value
250 for k, value in flat_result.items()
251 if type(value) in VALID_SUMMARY_TYPES
252 }
253 self._try_log_hparams(scrubbed_result)
254 self._file_writer.close()
255
256 def _try_log_hparams(self, result):
257 # TBX currently errors if the hparams value is None.
258 flat_params = flatten_dict(self.trial.evaluated_params)
259 scrubbed_params = {
260 k: v
261 for k, v in flat_params.items()
262 if isinstance(v, self.VALID_HPARAMS)
263 }
264
265 removed = {
266 k: v
267 for k, v in flat_params.items()
268 if not isinstance(v, self.VALID_HPARAMS)
269 }
270 if removed:
271 logger.info(
272 "Removed the following hyperparameter values when "
273 "logging to tensorboard: %s", str(removed))
274
275 from tensorboardX.summary import hparams
276 try:
277 experiment_tag, session_start_tag, session_end_tag = hparams(
278 hparam_dict=scrubbed_params, metric_dict=result)
279 self._file_writer.file_writer.add_summary(experiment_tag)
280 self._file_writer.file_writer.add_summary(session_start_tag)
281 self._file_writer.file_writer.add_summary(session_end_tag)
282 except Exception:
283 logger.exception("TensorboardX failed to log hparams. "
284 "This may be due to an unsupported type "
285 "in the hyperparameter values.")
286
287
288 DEFAULT_LOGGERS = (JsonLogger, CSVLogger, TBXLogger)
289
290
291 class UnifiedLogger(Logger):
292 """Unified result logger for TensorBoard, rllab/viskit, plain json.
293
294 Arguments:
295 config: Configuration passed to all logger creators.
296 logdir: Directory for all logger creators to log to.
297 loggers (list): List of logger creators. Defaults to CSV, Tensorboard,
298 and JSON loggers.
299 sync_function (func|str): Optional function for syncer to run.
300 See ray/python/ray/tune/syncer.py
301 """
302
303 def __init__(self,
304 config,
305 logdir,
306 trial=None,
307 loggers=None,
308 sync_function=None):
309 if loggers is None:
310 self._logger_cls_list = DEFAULT_LOGGERS
311 else:
312 self._logger_cls_list = loggers
313 if JsonLogger not in self._logger_cls_list:
314 if log_once("JsonLogger"):
315 logger.warning(
316 "JsonLogger not provided. The ExperimentAnalysis tool is "
317 "disabled.")
318 self._sync_function = sync_function
319 self._log_syncer = None
320
321 super(UnifiedLogger, self).__init__(config, logdir, trial)
322
323 def _init(self):
324 self._loggers = []
325 for cls in self._logger_cls_list:
326 try:
327 self._loggers.append(cls(self.config, self.logdir, self.trial))
328 except Exception as exc:
329 logger.warning("Could not instantiate %s: %s.", cls.__name__,
330 str(exc))
331 self._log_syncer = get_node_syncer(
332 self.logdir,
333 remote_dir=self.logdir,
334 sync_function=self._sync_function)
335
336 def on_result(self, result):
337 for _logger in self._loggers:
338 _logger.on_result(result)
339 self._log_syncer.set_worker_ip(result.get(NODE_IP))
340 self._log_syncer.sync_down_if_needed()
341
342 def update_config(self, config):
343 for _logger in self._loggers:
344 _logger.update_config(config)
345
346 def close(self):
347 for _logger in self._loggers:
348 _logger.close()
349
350 def flush(self, sync_down=True):
351 for _logger in self._loggers:
352 _logger.flush()
353 if sync_down:
354 if not self._log_syncer.sync_down():
355 logger.warning("Trial %s: Post-flush sync skipped.",
356 self.trial)
357
358 def sync_up(self):
359 return self._log_syncer.sync_up()
360
361 def sync_down(self):
362 return self._log_syncer.sync_down()
363
364 def wait(self):
365 self._log_syncer.wait()
366
367 def sync_results_to_new_location(self, worker_ip):
368 """Sends the current log directory to the remote node.
369
370 Syncing will not occur if the cluster is not started
371 with the Ray autoscaler.
372 """
373 if worker_ip != self._log_syncer.worker_ip:
374 logger.info("Trial %s: Syncing (blocking) results to %s",
375 self.trial, worker_ip)
376 self._log_syncer.reset()
377 self._log_syncer.set_worker_ip(worker_ip)
378 if not self._log_syncer.sync_up():
379 logger.error(
380 "Trial %s: Sync up to new location skipped. "
381 "This should not occur.", self.trial)
382 self._log_syncer.wait()
383 else:
384 logger.error(
385 "Trial %s: Sync attempted to same IP %s. This "
386 "should not occur.", self.trial, worker_ip)
387
388
389 class _SafeFallbackEncoder(json.JSONEncoder):
390 def __init__(self, nan_str="null", **kwargs):
391 super(_SafeFallbackEncoder, self).__init__(**kwargs)
392 self.nan_str = nan_str
393
394 def default(self, value):
395 try:
396 if np.isnan(value):
397 return self.nan_str
398
399 if (type(value).__module__ == np.__name__
400 and isinstance(value, np.ndarray)):
401 return value.tolist()
402
403 if issubclass(type(value), numbers.Integral):
404 return int(value)
405 if issubclass(type(value), numbers.Number):
406 return float(value)
407
408 return super(_SafeFallbackEncoder, self).default(value)
409
410 except Exception:
411 return str(value) # give up, just stringify it (ok for logs)
412
413
414 def pretty_print(result):
415 result = result.copy()
416 result.update(config=None) # drop config from pretty print
417 result.update(hist_stats=None) # drop hist_stats from pretty print
418 out = {}
419 for k, v in result.items():
420 if v is not None:
421 out[k] = v
422
423 cleaned = json.dumps(out, cls=_SafeFallbackEncoder)
424 return yaml.safe_dump(json.loads(cleaned), default_flow_style=False)
425
[end of python/ray/tune/logger.py]
[start of python/ray/tune/suggest/repeater.py]
1 import copy
2 import logging
3 import numpy as np
4
5 from ray.tune.suggest.suggestion import Searcher
6
7 logger = logging.getLogger(__name__)
8
9 TRIAL_INDEX = "__trial_index__"
10 """str: A constant value representing the repeat index of the trial."""
11
12
13 def _warn_num_samples(searcher, num_samples):
14 if isinstance(searcher, Repeater) and num_samples % searcher.repeat:
15 logger.warning(
16 "`num_samples` is now expected to be the total number of trials, "
17 "including the repeat trials. For example, set num_samples=15 if "
18 "you intend to obtain 3 search algorithm suggestions and repeat "
19 "each suggestion 5 times. Any leftover trials "
20 "(num_samples mod repeat) will be ignored.")
21
22
23 class _TrialGroup:
24 """Internal class for grouping trials of same parameters.
25
26 This is used when repeating trials for reducing training variance.
27
28 Args:
29 primary_trial_id (str): Trial ID of the "primary trial".
30 This trial is the one that the Searcher is aware of.
31 config (dict): Suggested configuration shared across all trials
32 in the trial group.
33 max_trials (int): Max number of trials to execute within this group.
34
35 """
36
37 def __init__(self, primary_trial_id, config, max_trials=1):
38 assert type(config) is dict, (
39 "config is not a dict, got {}".format(config))
40 self.primary_trial_id = primary_trial_id
41 self.config = config
42 self._trials = {primary_trial_id: None}
43 self.max_trials = max_trials
44
45 def add(self, trial_id):
46 assert len(self._trials) < self.max_trials
47 self._trials[trial_id] = None
48
49 def full(self):
50 return len(self._trials) == self.max_trials
51
52 def report(self, trial_id, score):
53 assert trial_id in self._trials
54 if score is None:
55 raise ValueError("Internal Error: Score cannot be None.")
56 self._trials[trial_id] = score
57
58 def finished_reporting(self):
59 return None not in self._trials.values()
60
61 def scores(self):
62 return list(self._trials.values())
63
64 def count(self):
65 return len(self._trials)
66
67
68 class Repeater(Searcher):
69 """A wrapper algorithm for repeating trials of same parameters.
70
71 Set tune.run(num_samples=...) to be a multiple of `repeat`. For example,
72 set num_samples=15 if you intend to obtain 3 search algorithm suggestions
73 and repeat each suggestion 5 times. Any leftover trials
74 (num_samples mod repeat) will be ignored.
75
76 It is recommended that you do not run an early-stopping TrialScheduler
77 simultaneously.
78
79 Args:
80 searcher (Searcher): Searcher object that the
81 Repeater will optimize. Note that the Searcher
82 will only see 1 trial among multiple repeated trials.
83 The result/metric passed to the Searcher upon
84 trial completion will be averaged among all repeats.
85 repeat (int): Number of times to generate a trial with a repeated
86 configuration. Defaults to 1.
87 set_index (bool): Sets a tune.suggest.repeater.TRIAL_INDEX in
88 Trainable/Function config which corresponds to the index of the
89 repeated trial. This can be used for seeds. Defaults to True.
90
91 Example:
92
93 .. code-block:: python
94
95 from ray.tune.suggest import Repeater
96
97 search_alg = BayesOptSearch(...)
98 re_search_alg = Repeater(search_alg, repeat=10)
99
100 # Repeat 2 samples 10 times each.
101 tune.run(trainable, num_samples=20, search_alg=re_search_alg)
102
103 """
104
105 def __init__(self, searcher, repeat=1, set_index=True):
106 self.searcher = searcher
107 self.repeat = repeat
108 self._set_index = set_index
109 self._groups = []
110 self._trial_id_to_group = {}
111 self._current_group = None
112 super(Repeater, self).__init__(
113 metric=self.searcher.metric, mode=self.searcher.mode)
114
115 def suggest(self, trial_id):
116 if self._current_group is None or self._current_group.full():
117 config = self.searcher.suggest(trial_id)
118 if config is None:
119 return config
120 self._current_group = _TrialGroup(
121 trial_id, copy.deepcopy(config), max_trials=self.repeat)
122 self._groups.append(self._current_group)
123 index_in_group = 0
124 else:
125 index_in_group = self._current_group.count()
126 self._current_group.add(trial_id)
127
128 config = self._current_group.config.copy()
129 if self._set_index:
130 config[TRIAL_INDEX] = index_in_group
131 self._trial_id_to_group[trial_id] = self._current_group
132 return config
133
134 def on_trial_complete(self, trial_id, result=None, **kwargs):
135 """Stores the score for and keeps track of a completed trial.
136
137 Stores the metric of a trial as nan if any of the following conditions
138 are met:
139
140 1. ``result`` is empty or not provided.
141 2. ``result`` is provided but no metric was provided.
142
143 """
144 if trial_id not in self._trial_id_to_group:
145 logger.error("Trial {} not in group; cannot report score. "
146 "Seen trials: {}".format(
147 trial_id, list(self._trial_id_to_group)))
148 trial_group = self._trial_id_to_group[trial_id]
149 if not result or self.searcher.metric not in result:
150 score = np.nan
151 else:
152 score = result[self.searcher.metric]
153 trial_group.report(trial_id, score)
154
155 if trial_group.finished_reporting():
156 scores = trial_group.scores()
157 self.searcher.on_trial_complete(
158 trial_group.primary_trial_id,
159 result={self.searcher.metric: np.nanmean(scores)},
160 **kwargs)
161
162 def save(self, path):
163 self.searcher.save(path)
164
165 def restore(self, path):
166 self.searcher.restore(path)
167
[end of python/ray/tune/suggest/repeater.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ray-project/ray
|
7a2d7964d8f944bd60c1f03d58d6cc190c7a7015
|
[tune] Parameters from `tune.choice()` do not get logged to TensorBoard when integers
### What is the problem?
When providing parameters via `tune.choice()` that include integers, the values are not logged to TensorBoard's HPARAMS section.
The issue is that `numpy.random.choice([1, 2, 3])` (for example) returns `numpy.int32`/`numpy.int64` and those types are not included in the `VALID_HPARAMS = (str, bool, int, float, list)` tuple (python/ray/tune/logger.py).
Since TensorBoard has no issues with logging `numpy.int32/64`, one simple solution would be to just include those types in the tuple above. Happy to provide a PR if you think this is the way to go.
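To make the type mismatch concrete, here is a minimal check (a sketch added for illustration, not part of the original report) showing why such a value is scrubbed before it ever reaches TensorBoard:
```python
import numpy as np

VALID_HPARAMS = (str, bool, int, float, list)  # as defined in python/ray/tune/logger.py

value = np.random.choice([1, 2, 3])
print(type(value))                       # e.g. <class 'numpy.int64'> (platform dependent)
print(isinstance(value, VALID_HPARAMS))  # False, so the hparam is scrubbed and never logged
```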
*Ray version and other system information (Python version, TensorFlow version, OS):*
ray: 0.8.6
python: 3.7.7
tensorboard: 2.2.2
ubuntu: 20.04
### Reproduction (REQUIRED)
```python
from ray import tune
def trainable(config):
tune.report(score=config["a"])
config_dict = {"a": tune.choice([1, 2, 3])}
tune.run(trainable, config=config_dict, num_samples=1)
```
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
|
yes! that'd be great - could you push a PR and ping me?
|
2020-07-03T17:40:47Z
|
<patch>
diff --git a/python/ray/tune/logger.py b/python/ray/tune/logger.py
--- a/python/ray/tune/logger.py
+++ b/python/ray/tune/logger.py
@@ -187,7 +187,7 @@ class TBXLogger(Logger):
"""
# NoneType is not supported on the last TBX release yet.
- VALID_HPARAMS = (str, bool, int, float, list)
+ VALID_HPARAMS = (str, bool, np.bool8, int, np.integer, float, list)
def _init(self):
try:
</patch>
|
[]
|
[]
| |||
pantsbuild__pants-15224
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JVM: Add controls around memory usage
Before calling Java support an MVP, we'll need some controls around memory usage of spawned JVMs.
Any scope which allows for configuration of JVM options should have two pants-level options:
1. a max memory usage option (using the same helper as our other [max-memory-usage option](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/option/global_options.py#L814-L835))
* This is reified into its own option value because Pants needs to know it in order to compute how many instances can be spawned.
2. arbitrary JVM flags, with templating to support embedding the max memory usage option value.
* The templating is a convenience, to remove redundancy between this option and the first one. Something like:
```
-XX:+UseShenandoahGC -Xmx${PANTS_MAX_MEM_MB}mb
```
The only scope that needs to be supported in a first cut is "global" (other scopes like per-tool or per-target can follow without much adjustment). Additionally, although we have separate `--tool-jdk` and `--jdk` options, it's easiest to ignore that until/unless we add a `jdk` target type.
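As a rough illustration of the templating described in item 2 above (a sketch only; the placeholder name and the eventual substitution site in the nailgun command runner are assumptions, not a spec):
```python
import string

def render_jvm_options(options, max_mem_mb):
    # Fill the per-child-process memory value into user-supplied JVM flags.
    return [string.Template(opt).safe_substitute(PANTS_MAX_MEM_MB=max_mem_mb) for opt in options]

print(render_jvm_options(["-XX:+UseShenandoahGC", "-Xmx${PANTS_MAX_MEM_MB}mb"], 512))
# -> ['-XX:+UseShenandoahGC', '-Xmx512mb']
```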
----
In a first PR, we should statically determine how many nailgun servers to start based on two new [global options](https://github.com/pantsbuild/pants/blob/main/src/python/pants/option/global_options.py):
* a "total child process memory" option (as a neighbor to [`--process-execution-local-parallelism`](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/option/global_options.py#L1054-L1067)), which controls the total amount of memory we will use for processes which report their max usage.
* a "default child process memory" option, which controls the per-process maximum value.
...as well as adding one [JVM specific option](https://github.com/pantsbuild/pants/blob/main/src/python/pants/jvm/subsystems.py#L11-L24):
* A "global JVM options" list-valued flag which specifies templated global JVM options (as described above).
The two global options should end up on the [`ExecutionOptions`](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/option/global_options.py#L322-L328), which is then converted into [`(Py)ExecutionStrategyOptions`](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/engine/internals/scheduler.py#L202-L210), and eventually consumed to [create the local command runner](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/rust/engine/src/context.rs#L170). The values should be used to [statically compute a pool size in that method](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/rust/engine/src/context.rs#L189-L194), and the per-child-process value should be stored as a field of the `nailgun::CommandRunner`.
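For example, the static pool-size computation could look roughly like the following (a Python sketch of the idea only; the real logic would live in the Rust code linked above, and the exact names and fallbacks are assumptions):
```python
def nailgun_pool_size(total_child_memory_mb, default_child_memory_mb, local_parallelism):
    # Without a configured total, fall back to the process-execution parallelism.
    if not total_child_memory_mb:
        return local_parallelism
    by_memory = total_child_memory_mb // default_child_memory_mb
    return max(1, min(local_parallelism, by_memory))

print(nailgun_pool_size(4096, 512, 16))  # -> 8
```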
Applying the configured max memory to the templated argument list should likely (initially) occur at [the top of the nailgun command runner](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/rust/engine/process_execution/src/nailgun/mod.rs#L112-L123). The nailgun command runner should apply string templating to fill in the per-child-process value that was stashed earlier.
Finally: to add JDK arguments, the "global JVM options" flag should be consumed by the [`JvmProcess` handling @rule](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/jvm/jdk_rules.py#L343-L376) to add the options to the right spot in the `Process` args.
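A hypothetical sketch of that last step, splicing the rendered global JVM options into the argv assembled for a `JvmProcess` (the splice point, after the `java` binary and before `-cp`, mirrors `JdkEnvironment.args` in `src/python/pants/jvm/jdk_rules.py` shown later; everything else here is illustrative):
```python
def jvm_argv(jdk_args, global_jvm_options, user_argv):
    # jdk_args is the (bash, jdk.sh, java) invocation followed by "-cp" and the classpath.
    java_invocation, classpath_args = jdk_args[:3], jdk_args[3:]
    return (*java_invocation, *global_jvm_options, *classpath_args, *user_argv)

print(jvm_argv(
    ("bash", "__jdk/jdk.sh", "__java_home/bin/java", "-cp", "nailgun.jar:lib.jar"),
    ("-Xmx512m",),
    ("org.example.Main",),
))
# -> ('bash', '__jdk/jdk.sh', '__java_home/bin/java', '-Xmx512m', '-cp', 'nailgun.jar:lib.jar', 'org.example.Main')
```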
----
Future work:
* Support for per-tool or per-target JVM options
* Dynamically allowing more JVMs based on their actual heap usage
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 We release to [PyPI](https://pypi.org/pypi)
20 [](https://pypi.org/pypi/pantsbuild.pants)
21 [](https://pypi.org/pypi/pantsbuild.pants)
22
23 # Requirements
24
25 To run Pants, you need:
26
27 * Linux or macOS.
28 * Python 3.7+ discoverable on your `PATH`.
29 * A C compiler, system headers and Python headers (to compile native Python modules).
30 * Internet access (so that Pants can fully bootstrap itself).
31
[end of README.md]
[start of src/python/pants/engine/process.py]
1 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import dataclasses
7 import logging
8 from dataclasses import dataclass, field
9 from enum import Enum
10 from typing import Iterable, Mapping
11
12 from pants.engine.engine_aware import SideEffecting
13 from pants.engine.fs import EMPTY_DIGEST, AddPrefix, Digest, FileDigest, MergeDigests
14 from pants.engine.internals.selectors import MultiGet
15 from pants.engine.internals.session import RunId
16 from pants.engine.platform import Platform
17 from pants.engine.rules import Get, collect_rules, rule
18 from pants.option.global_options import ProcessCleanupOption
19 from pants.util.frozendict import FrozenDict
20 from pants.util.logging import LogLevel
21 from pants.util.meta import frozen_after_init
22
23 logger = logging.getLogger(__name__)
24
25
26 @dataclass(frozen=True)
27 class ProductDescription:
28 value: str
29
30
31 class ProcessCacheScope(Enum):
32 # Cached in all locations, regardless of success or failure.
33 ALWAYS = "always"
34 # Cached in all locations, but only if the process exits successfully.
35 SUCCESSFUL = "successful"
36 # Cached only in memory (i.e. memoized in pantsd), but never persistently, regardless of
37 # success vs. failure.
38 PER_RESTART_ALWAYS = "per_restart_always"
39 # Cached only in memory (i.e. memoized in pantsd), but never persistently, and only if
40 # successful.
41 PER_RESTART_SUCCESSFUL = "per_restart_successful"
42 # Will run once per Session, i.e. once per run of Pants. This happens because the engine
43 # de-duplicates identical work; the process is neither memoized in memory nor cached to disk.
44 PER_SESSION = "per_session"
45
46
47 @frozen_after_init
48 @dataclass(unsafe_hash=True)
49 class Process:
50 argv: tuple[str, ...]
51 description: str = dataclasses.field(compare=False)
52 level: LogLevel
53 input_digest: Digest
54 immutable_input_digests: FrozenDict[str, Digest]
55 use_nailgun: tuple[str, ...]
56 working_directory: str | None
57 env: FrozenDict[str, str]
58 append_only_caches: FrozenDict[str, str]
59 output_files: tuple[str, ...]
60 output_directories: tuple[str, ...]
61 timeout_seconds: int | float
62 jdk_home: str | None
63 execution_slot_variable: str | None
64 concurrency_available: int
65 cache_scope: ProcessCacheScope
66 platform: str | None
67
68 def __init__(
69 self,
70 argv: Iterable[str],
71 *,
72 description: str,
73 level: LogLevel = LogLevel.INFO,
74 input_digest: Digest = EMPTY_DIGEST,
75 immutable_input_digests: Mapping[str, Digest] | None = None,
76 use_nailgun: Iterable[str] = (),
77 working_directory: str | None = None,
78 env: Mapping[str, str] | None = None,
79 append_only_caches: Mapping[str, str] | None = None,
80 output_files: Iterable[str] | None = None,
81 output_directories: Iterable[str] | None = None,
82 timeout_seconds: int | float | None = None,
83 jdk_home: str | None = None,
84 execution_slot_variable: str | None = None,
85 concurrency_available: int = 0,
86 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
87 platform: Platform | None = None,
88 ) -> None:
89 """Request to run a subprocess, similar to subprocess.Popen.
90
91 This process will be hermetic, meaning that it cannot access files and environment variables
92 that are not explicitly populated. For example, $PATH will not be defined by default, unless
93 populated through the `env` parameter.
94
95 Usually, you will want to provide input files/directories via the parameter `input_digest`.
96 The process will then be able to access these paths through relative paths. If you want to
97 give multiple input digests, first merge them with `await Get(Digest, MergeDigests)`.
98
99 Often, you will want to capture the files/directories created in the process. To do this,
100 you can either set `output_files` or `output_directories`. The specified paths should be
101 specified relative to the `working_directory`, if any, and will then be used to populate
102 `output_digest` on the `ProcessResult`. If you want to split up this output digest into
103 multiple digests, use `await Get(Digest, DigestSubset)` on the `output_digest`.
104
105 To actually run the process, use `await Get(ProcessResult, Process)` or
106 `await Get(FallibleProcessResult, Process)`.
107
108 Example:
109
110 result = await Get(
111 ProcessResult, Process(["/bin/echo", "hello world"], description="demo")
112 )
113 assert result.stdout == b"hello world"
114 """
115 if isinstance(argv, str):
116 raise ValueError("argv must be a sequence of strings, but was a single string.")
117 self.argv = tuple(argv)
118 self.description = description
119 self.level = level
120 self.input_digest = input_digest
121 self.immutable_input_digests = FrozenDict(immutable_input_digests or {})
122 self.use_nailgun = tuple(use_nailgun)
123 self.working_directory = working_directory
124 self.env = FrozenDict(env or {})
125 self.append_only_caches = FrozenDict(append_only_caches or {})
126 self.output_files = tuple(output_files or ())
127 self.output_directories = tuple(output_directories or ())
128 # NB: A negative or None time value is normalized to -1 to ease the transfer to Rust.
129 self.timeout_seconds = timeout_seconds if timeout_seconds and timeout_seconds > 0 else -1
130 self.jdk_home = jdk_home
131 self.execution_slot_variable = execution_slot_variable
132 self.concurrency_available = concurrency_available
133 self.cache_scope = cache_scope
134 self.platform = platform.value if platform is not None else None
135
136
137 @dataclass(frozen=True)
138 class ProcessResult:
139 """Result of executing a process which should not fail.
140
141 If the process has a non-zero exit code, this will raise an exception, unlike
142 FallibleProcessResult.
143 """
144
145 stdout: bytes
146 stdout_digest: FileDigest
147 stderr: bytes
148 stderr_digest: FileDigest
149 output_digest: Digest
150 platform: Platform
151 metadata: ProcessResultMetadata = field(compare=False, hash=False)
152
153
154 @frozen_after_init
155 @dataclass(unsafe_hash=True)
156 class FallibleProcessResult:
157 """Result of executing a process which might fail.
158
159 If the process has a non-zero exit code, this will not raise an exception, unlike ProcessResult.
160 """
161
162 stdout: bytes
163 stdout_digest: FileDigest
164 stderr: bytes
165 stderr_digest: FileDigest
166 exit_code: int
167 output_digest: Digest
168 platform: Platform
169 metadata: ProcessResultMetadata = field(compare=False, hash=False)
170
171
172 @dataclass(frozen=True)
173 class ProcessResultMetadata:
174 """Metadata for a ProcessResult, which is not included in its definition of equality."""
175
176 class Source(Enum):
177 RAN_LOCALLY = "ran_locally"
178 RAN_REMOTELY = "ran_remotely"
179 HIT_LOCALLY = "hit_locally"
180 HIT_REMOTELY = "hit_remotely"
181 MEMOIZED = "memoized"
182
183 # The execution time of the process, in milliseconds, or None if it could not be captured
184 # (since remote execution does not guarantee its availability).
185 total_elapsed_ms: int | None
186 # Whether the ProcessResult (when it was created in the attached run_id) came from the local
187 # or remote cache, or ran locally or remotely. See the `self.source` method.
188 _source: str
189 # The run_id in which a ProcessResult was created. See the `self.source` method.
190 source_run_id: int
191
192 def source(self, current_run_id: RunId) -> Source:
193 """Given the current run_id, return the calculated "source" of the ProcessResult.
194
195         If a ProcessResult is consumed in any run_id other than the one it was created in, its
196 source implicitly becomes memoization, since the result was re-used in a new run without
197 being recreated.
198 """
199 return (
200 self.Source(self._source)
201 if self.source_run_id == current_run_id
202 else self.Source.MEMOIZED
203 )
204
205
206 class ProcessExecutionFailure(Exception):
207 """Used to denote that a process exited, but was unsuccessful in some way.
208
209 For example, exiting with a non-zero code.
210 """
211
212 def __init__(
213 self,
214 exit_code: int,
215 stdout: bytes,
216 stderr: bytes,
217 process_description: str,
218 *,
219 process_cleanup: bool,
220 ) -> None:
221 # These are intentionally "public" members.
222 self.exit_code = exit_code
223 self.stdout = stdout
224 self.stderr = stderr
225
226 def try_decode(content: bytes) -> str:
227 try:
228 return content.decode()
229 except ValueError:
230 content_repr = repr(stdout)
231 return f"{content_repr[:256]}..." if len(content_repr) > 256 else content_repr
232
233 # NB: We don't use dedent on a single format string here because it would attempt to
234 # interpret the stdio content.
235 err_strings = [
236 f"Process '{process_description}' failed with exit code {exit_code}.",
237 "stdout:",
238 try_decode(stdout),
239 "stderr:",
240 try_decode(stderr),
241 ]
242 if process_cleanup:
243 err_strings.append(
244 "\n\nUse `--no-process-cleanup` to preserve process chroots for inspection."
245 )
246 super().__init__("\n".join(err_strings))
247
248
249 @rule
250 def get_multi_platform_request_description(req: Process) -> ProductDescription:
251 return ProductDescription(req.description)
252
253
254 @rule
255 def fallible_to_exec_result_or_raise(
256 fallible_result: FallibleProcessResult,
257 description: ProductDescription,
258 process_cleanup: ProcessCleanupOption,
259 ) -> ProcessResult:
260 """Converts a FallibleProcessResult to a ProcessResult or raises an error."""
261
262 if fallible_result.exit_code == 0:
263 return ProcessResult(
264 stdout=fallible_result.stdout,
265 stdout_digest=fallible_result.stdout_digest,
266 stderr=fallible_result.stderr,
267 stderr_digest=fallible_result.stderr_digest,
268 output_digest=fallible_result.output_digest,
269 platform=fallible_result.platform,
270 metadata=fallible_result.metadata,
271 )
272 raise ProcessExecutionFailure(
273 fallible_result.exit_code,
274 fallible_result.stdout,
275 fallible_result.stderr,
276 description.value,
277 process_cleanup=process_cleanup.val,
278 )
279
280
281 @dataclass(frozen=True)
282 class InteractiveProcessResult:
283 exit_code: int
284
285
286 @frozen_after_init
287 @dataclass(unsafe_hash=True)
288 class InteractiveProcess(SideEffecting):
289 argv: tuple[str, ...]
290 env: FrozenDict[str, str]
291 input_digest: Digest
292 run_in_workspace: bool
293 forward_signals_to_process: bool
294 restartable: bool
295 append_only_caches: FrozenDict[str, str]
296
297 def __init__(
298 self,
299 argv: Iterable[str],
300 *,
301 env: Mapping[str, str] | None = None,
302 input_digest: Digest = EMPTY_DIGEST,
303 run_in_workspace: bool = False,
304 forward_signals_to_process: bool = True,
305 restartable: bool = False,
306 append_only_caches: Mapping[str, str] | None = None,
307 ) -> None:
308 """Request to run a subprocess in the foreground, similar to subprocess.run().
309
310 Unlike `Process`, the result will not be cached.
311
312 To run the process, use `await Effect(InteractiveProcessResult, InteractiveProcess(..))`
313 in a `@goal_rule`.
314
315 `forward_signals_to_process` controls whether pants will allow a SIGINT signal
316 sent to a process by hitting Ctrl-C in the terminal to actually reach the process,
317 or capture that signal itself, blocking it from the process.
318 """
319 self.argv = tuple(argv)
320 self.env = FrozenDict(env or {})
321 self.input_digest = input_digest
322 self.run_in_workspace = run_in_workspace
323 self.forward_signals_to_process = forward_signals_to_process
324 self.restartable = restartable
325 self.append_only_caches = FrozenDict(append_only_caches or {})
326
327 self.__post_init__()
328
329 def __post_init__(self):
330 if self.input_digest != EMPTY_DIGEST and self.run_in_workspace:
331 raise ValueError(
332 "InteractiveProcess should use the Workspace API to materialize any needed "
333 "files when it runs in the workspace"
334 )
335 if self.append_only_caches and self.run_in_workspace:
336 raise ValueError(
337 "InteractiveProcess requested setup of append-only caches and also requested to run"
338 " in the workspace. These options are incompatible since setting up append-only"
339 " caches would modify the workspace."
340 )
341
342 @classmethod
343 def from_process(
344 cls,
345 process: Process,
346 *,
347 forward_signals_to_process: bool = True,
348 restartable: bool = False,
349 ) -> InteractiveProcess:
350 # TODO: Remove this check once https://github.com/pantsbuild/pants/issues/13852 is
351 # implemented and the immutable_input_digests are propagated into the InteractiveProcess.
352 if process.immutable_input_digests:
353 raise ValueError(
354 "Process has immutable_input_digests, so it cannot be converted to an "
355 "InteractiveProcess by calling from_process(). Use an async "
356 "InteractiveProcessRequest instead."
357 )
358 return InteractiveProcess(
359 argv=process.argv,
360 env=process.env,
361 input_digest=process.input_digest,
362 forward_signals_to_process=forward_signals_to_process,
363 restartable=restartable,
364 append_only_caches=process.append_only_caches,
365 )
366
367
368 @dataclass(frozen=True)
369 class InteractiveProcessRequest:
370 process: Process
371 forward_signals_to_process: bool = True
372 restartable: bool = False
373
374
375 @rule
376 async def interactive_process_from_process(req: InteractiveProcessRequest) -> InteractiveProcess:
377 # TODO: Temporary workaround until https://github.com/pantsbuild/pants/issues/13852
378 # is implemented. Once that is implemented we can get rid of this rule, and the
379 # InteractiveProcessRequest type, and use InteractiveProcess.from_process directly.
380
381 if req.process.immutable_input_digests:
382 prefixed_immutable_input_digests = await MultiGet(
383 Get(Digest, AddPrefix(digest, prefix))
384 for prefix, digest in req.process.immutable_input_digests.items()
385 )
386 full_input_digest = await Get(
387 Digest, MergeDigests([req.process.input_digest, *prefixed_immutable_input_digests])
388 )
389 else:
390 full_input_digest = req.process.input_digest
391 return InteractiveProcess(
392 argv=req.process.argv,
393 env=req.process.env,
394 input_digest=full_input_digest,
395 forward_signals_to_process=req.forward_signals_to_process,
396 restartable=req.restartable,
397 append_only_caches=req.process.append_only_caches,
398 )
399
400
401 def rules():
402 return collect_rules()
403
[end of src/python/pants/engine/process.py]
[start of src/python/pants/jvm/jdk_rules.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import dataclasses
7 import logging
8 import os
9 import re
10 import shlex
11 import textwrap
12 from dataclasses import dataclass
13 from enum import Enum
14 from typing import ClassVar, Iterable, Mapping
15
16 from pants.core.util_rules.system_binaries import BashBinary
17 from pants.engine.fs import CreateDigest, Digest, FileContent, FileDigest, MergeDigests
18 from pants.engine.internals.selectors import Get
19 from pants.engine.platform import Platform
20 from pants.engine.process import FallibleProcessResult, Process, ProcessCacheScope
21 from pants.engine.rules import collect_rules, rule
22 from pants.engine.target import CoarsenedTarget
23 from pants.jvm.compile import ClasspathEntry
24 from pants.jvm.resolve.common import Coordinate, Coordinates
25 from pants.jvm.resolve.coursier_fetch import CoursierLockfileEntry
26 from pants.jvm.resolve.coursier_setup import Coursier
27 from pants.jvm.subsystems import JvmSubsystem
28 from pants.jvm.target_types import JvmJdkField
29 from pants.util.frozendict import FrozenDict
30 from pants.util.logging import LogLevel
31 from pants.util.meta import classproperty, frozen_after_init
32
33 logger = logging.getLogger(__name__)
34
35
36 @dataclass(frozen=True)
37 class Nailgun:
38 classpath_entry: ClasspathEntry
39
40
41 class DefaultJdk(Enum):
42 SYSTEM = "system"
43 SOURCE_DEFAULT = "source_default"
44
45
46 @dataclass(frozen=True)
47 class JdkRequest:
48 """Request for a JDK with a specific major version, or a default (`--jvm-jdk` or System)."""
49
50 version: str | DefaultJdk
51
52 @classproperty
53 def SYSTEM(cls) -> JdkRequest:
54 return JdkRequest(DefaultJdk.SYSTEM)
55
56 @classproperty
57 def SOURCE_DEFAULT(cls) -> JdkRequest:
58 return JdkRequest(DefaultJdk.SOURCE_DEFAULT)
59
60 @staticmethod
61 def from_field(field: JvmJdkField) -> JdkRequest:
62 version = field.value
63 if version == "system":
64 return JdkRequest.SYSTEM
65 return JdkRequest(version) if version is not None else JdkRequest.SOURCE_DEFAULT
66
67 @staticmethod
68 def from_target(target: CoarsenedTarget) -> JdkRequest:
69 fields = [t[JvmJdkField] for t in target.members if t.has_field(JvmJdkField)]
70
71 if not fields:
72 raise ValueError(
73 f"Cannot construct a JDK request for {target}, since none of its "
74 f"members have a `{JvmJdkField.alias}=` field:\n{target.bullet_list()}"
75 )
76
77 field = fields[0]
78 if not all(f.value == field.value for f in fields):
79 values = {f.value for f in fields}
80 raise ValueError(
81 f"The members of {target} had mismatched values of the "
82 f"`{JvmJdkField.alias}=` field ({values}):\n{target.bullet_list()}"
83 )
84
85 return JdkRequest.from_field(field)
86
87
88 @dataclass(frozen=True)
89 class JdkEnvironment:
90 _digest: Digest
91 nailgun_jar: str
92 coursier: Coursier
93 jre_major_version: int
94
95 bin_dir: ClassVar[str] = "__jdk"
96 jdk_preparation_script: ClassVar[str] = f"{bin_dir}/jdk.sh"
97 java_home: ClassVar[str] = "__java_home"
98
99 def args(self, bash: BashBinary, classpath_entries: Iterable[str]) -> tuple[str, ...]:
100 return (
101 bash.path,
102 self.jdk_preparation_script,
103 f"{self.java_home}/bin/java",
104 "-cp",
105 ":".join([self.nailgun_jar, *classpath_entries]),
106 )
107
108 @property
109 def env(self) -> dict[str, str]:
110 return self.coursier.env
111
112 @property
113 def append_only_caches(self) -> dict[str, str]:
114 return self.coursier.append_only_caches
115
116 @property
117 def immutable_input_digests(self) -> dict[str, Digest]:
118 return {**self.coursier.immutable_input_digests, self.bin_dir: self._digest}
119
120
121 @dataclass(frozen=True)
122 class InternalJdk(JdkEnvironment):
123 """The JDK configured for internal Pants usage, rather than for matching source compatibility.
124
125 The InternalJdk should only be used in situations where no classfiles are required for a user's
126 firstparty or thirdparty code (such as for codegen, or analysis of source files).
127 """
128
129
130 VERSION_REGEX = re.compile(r"version \"(.+?)\"")
131
132
133 def parse_jre_major_version(version_lines: str) -> int | None:
134 for line in version_lines.splitlines():
135 m = VERSION_REGEX.search(line)
136 if m:
137 major_version, _, _ = m[1].partition(".")
138 return int(major_version)
139 return None
140
141
142 @rule
143 async def fetch_nailgun() -> Nailgun:
144 nailgun = await Get(
145 ClasspathEntry,
146 CoursierLockfileEntry(
147 coord=Coordinate.from_coord_str("com.martiansoftware:nailgun-server:0.9.1"),
148 file_name="com.martiansoftware_nailgun-server_0.9.1.jar",
149 direct_dependencies=Coordinates(),
150 dependencies=Coordinates(),
151 file_digest=FileDigest(
152 fingerprint="4518faa6bf4bd26fccdc4d85e1625dc679381a08d56872d8ad12151dda9cef25",
153 serialized_bytes_length=32927,
154 ),
155 ),
156 )
157
158 return Nailgun(nailgun)
159
160
161 @rule
162 async def internal_jdk(jvm: JvmSubsystem) -> InternalJdk:
163 """Creates a `JdkEnvironment` object based on the JVM subsystem options.
164
165 This is used for providing a predictable JDK version for Pants' internal usage rather than for
166 matching compatibility with source files (e.g. compilation/testing).
167 """
168
169 request = JdkRequest(jvm.tool_jdk) if jvm.tool_jdk is not None else JdkRequest.SYSTEM
170 env = await Get(JdkEnvironment, JdkRequest, request)
171 return InternalJdk(env._digest, env.nailgun_jar, env.coursier, env.jre_major_version)
172
173
174 @rule
175 async def prepare_jdk_environment(
176 jvm: JvmSubsystem, coursier: Coursier, nailgun_: Nailgun, bash: BashBinary, request: JdkRequest
177 ) -> JdkEnvironment:
178 nailgun = nailgun_.classpath_entry
179
180 version = request.version
181 if version == DefaultJdk.SOURCE_DEFAULT:
182 version = jvm.jdk
183
184 # TODO: add support for system JDKs with specific version
185 if version is DefaultJdk.SYSTEM:
186 coursier_jdk_option = "--system-jvm"
187 else:
188 coursier_jdk_option = shlex.quote(f"--jvm={version}")
189
190 # TODO(#14386) This argument re-writing code should be done in a more standardised way.
191 # See also `run_deploy_jar` for other argument re-writing code.
192 def prefixed(arg: str) -> str:
193 if arg.startswith("__"):
194 return f"${{PANTS_INTERNAL_ABSOLUTE_PREFIX}}{arg}"
195 else:
196 return arg
197
198 optionally_prefixed_coursier_args = [
199 prefixed(arg) for arg in coursier.args(["java-home", coursier_jdk_option])
200 ]
201 # NB: We `set +e` in the subshell to ensure that it exits as well.
202 # see https://unix.stackexchange.com/a/23099
203 java_home_command = " ".join(("set +e;", *optionally_prefixed_coursier_args))
204
205 env = {
206 "PANTS_INTERNAL_ABSOLUTE_PREFIX": "",
207 **coursier.env,
208 }
209
210 java_version_result = await Get(
211 FallibleProcessResult,
212 Process(
213 argv=(
214 bash.path,
215 "-c",
216 f"$({java_home_command})/bin/java -version",
217 ),
218 append_only_caches=coursier.append_only_caches,
219 immutable_input_digests=coursier.immutable_input_digests,
220 env=env,
221 description=f"Ensure download of JDK {coursier_jdk_option}.",
222 cache_scope=ProcessCacheScope.PER_RESTART_SUCCESSFUL,
223 level=LogLevel.DEBUG,
224 ),
225 )
226
227 if java_version_result.exit_code != 0:
228 raise ValueError(
229 f"Failed to locate Java for JDK `{version}`:\n"
230 f"{java_version_result.stderr.decode('utf-8')}"
231 )
232
233 java_version = java_version_result.stderr.decode("utf-8").strip()
234 jre_major_version = parse_jre_major_version(java_version)
235 if not jre_major_version:
236 raise ValueError(
237 "Pants was unable to parse the output of `java -version` for JDK "
238 f"`{request.version}`. Please open an issue at "
239 "https://github.com/pantsbuild/pants/issues/new/choose with the following output:\n\n"
240 f"{java_version}"
241 )
242
243 # TODO: Locate `ln`.
244 version_comment = "\n".join(f"# {line}" for line in java_version.splitlines())
245 jdk_preparation_script = textwrap.dedent(
246 f"""\
247 # pants javac script using Coursier {coursier_jdk_option}. `java -version`:"
248 {version_comment}
249 set -eu
250
251 /bin/ln -s "$({java_home_command})" "${{PANTS_INTERNAL_ABSOLUTE_PREFIX}}{JdkEnvironment.java_home}"
252 exec "$@"
253 """
254 )
255 jdk_preparation_script_digest = await Get(
256 Digest,
257 CreateDigest(
258 [
259 FileContent(
260 os.path.basename(JdkEnvironment.jdk_preparation_script),
261 jdk_preparation_script.encode("utf-8"),
262 is_executable=True,
263 ),
264 ]
265 ),
266 )
267 return JdkEnvironment(
268 _digest=await Get(
269 Digest,
270 MergeDigests(
271 [
272 jdk_preparation_script_digest,
273 nailgun.digest,
274 ]
275 ),
276 ),
277 nailgun_jar=os.path.join(JdkEnvironment.bin_dir, nailgun.filenames[0]),
278 coursier=coursier,
279 jre_major_version=jre_major_version,
280 )
281
282
283 @frozen_after_init
284 @dataclass(unsafe_hash=True)
285 class JvmProcess:
286 jdk: JdkEnvironment
287 argv: tuple[str, ...]
288 classpath_entries: tuple[str, ...]
289 input_digest: Digest
290 description: str = dataclasses.field(compare=False)
291 level: LogLevel
292 extra_nailgun_keys: tuple[str, ...]
293 output_files: tuple[str, ...]
294 output_directories: tuple[str, ...]
295 timeout_seconds: int | float | None
296 platform: Platform | None
297 extra_immutable_input_digests: FrozenDict[str, Digest]
298 extra_env: FrozenDict[str, str]
299 cache_scope: ProcessCacheScope | None
300 use_nailgun: bool
301
302 def __init__(
303 self,
304 jdk: JdkEnvironment,
305 argv: Iterable[str],
306 classpath_entries: Iterable[str],
307 input_digest: Digest,
308 description: str,
309 level: LogLevel = LogLevel.INFO,
310 extra_nailgun_keys: Iterable[str] | None = None,
311 output_files: Iterable[str] | None = None,
312 output_directories: Iterable[str] | None = None,
313 extra_immutable_input_digests: Mapping[str, Digest] | None = None,
314 extra_env: Mapping[str, str] | None = None,
315 timeout_seconds: int | float | None = None,
316 platform: Platform | None = None,
317 cache_scope: ProcessCacheScope | None = None,
318 use_nailgun: bool = True,
319 ):
320 self.jdk = jdk
321 self.argv = tuple(argv)
322 self.classpath_entries = tuple(classpath_entries)
323 self.input_digest = input_digest
324 self.description = description
325 self.level = level
326 self.extra_nailgun_keys = tuple(extra_nailgun_keys or ())
327 self.output_files = tuple(output_files or ())
328 self.output_directories = tuple(output_directories or ())
329 self.timeout_seconds = timeout_seconds
330 self.platform = platform
331 self.cache_scope = cache_scope
332 self.extra_immutable_input_digests = FrozenDict(extra_immutable_input_digests or {})
333 self.extra_env = FrozenDict(extra_env or {})
334 self.use_nailgun = use_nailgun
335
336 if not use_nailgun and extra_nailgun_keys:
337 raise AssertionError(
338 "`JvmProcess` specified nailgun keys, but has `use_nailgun=False`. Either "
339 "specify `extra_nailgun_keys=None` or `use_nailgun=True`."
340 )
341
342
343 @rule
344 async def jvm_process(bash: BashBinary, request: JvmProcess) -> Process:
345
346 jdk = request.jdk
347
348 immutable_input_digests = {
349 **jdk.immutable_input_digests,
350 **request.extra_immutable_input_digests,
351 }
352 env = {
353 "PANTS_INTERNAL_ABSOLUTE_PREFIX": "",
354 **jdk.env,
355 **request.extra_env,
356 }
357
358 use_nailgun = []
359 if request.use_nailgun:
360 use_nailgun = [*jdk.immutable_input_digests, *request.extra_nailgun_keys]
361
362 return Process(
363 [*jdk.args(bash, request.classpath_entries), *request.argv],
364 input_digest=request.input_digest,
365 immutable_input_digests=immutable_input_digests,
366 use_nailgun=use_nailgun,
367 description=request.description,
368 level=request.level,
369 output_directories=request.output_directories,
370 env=env,
371 platform=request.platform,
372 timeout_seconds=request.timeout_seconds,
373 append_only_caches=jdk.append_only_caches,
374 output_files=request.output_files,
375 cache_scope=request.cache_scope or ProcessCacheScope.SUCCESSFUL,
376 )
377
378
379 def rules():
380 return collect_rules()
381
[end of src/python/pants/jvm/jdk_rules.py]
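The following is not part of the file above: a minimal, hypothetical sketch of how a downstream `@rule` could consume the `JvmProcess` API defined in `jdk_rules.py`. The rule name, main class, and description are illustrative assumptions; the `Get` pattern mirrors how other Pants JVM rules request a `ProcessResult` from a `JvmProcess`.

```python
# Hypothetical usage sketch (not from the repository): running a JVM tool through
# JvmProcess. `run_example_tool` and `com.example.ExampleTool` are made-up names.
from pants.engine.fs import EMPTY_DIGEST
from pants.engine.process import ProcessResult
from pants.engine.rules import Get, collect_rules, rule
from pants.jvm.jdk_rules import InternalJdk, JvmProcess
from pants.util.logging import LogLevel


@rule
async def run_example_tool(jdk: InternalJdk) -> ProcessResult:
    # JvmProcess is converted into a Process by the `jvm_process` rule above, which
    # prepends the JDK launcher args and wires up nailgun and the coursier caches.
    return await Get(
        ProcessResult,
        JvmProcess(
            jdk=jdk,
            argv=["com.example.ExampleTool", "--version"],
            classpath_entries=[],  # the tool's classpath entries would normally go here
            input_digest=EMPTY_DIGEST,
            description="Run ExampleTool on the resolved JDK",
            level=LogLevel.DEBUG,
        ),
    )


def rules():
    return collect_rules()
```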
[start of src/python/pants/option/options_bootstrapper.py]
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import itertools
7 import os
8 import warnings
9 from dataclasses import dataclass
10 from pathlib import Path
11 from typing import TYPE_CHECKING, Iterable, Mapping, Sequence
12
13 from pants.base.build_environment import get_default_pants_config_file, pants_version
14 from pants.base.exceptions import BuildConfigurationError
15 from pants.option.alias import CliAlias
16 from pants.option.config import Config
17 from pants.option.custom_types import ListValueComponent
18 from pants.option.global_options import BootstrapOptions, GlobalOptions
19 from pants.option.option_types import collect_options_info
20 from pants.option.options import Options
21 from pants.option.scope import GLOBAL_SCOPE, ScopeInfo
22 from pants.option.subsystem import Subsystem
23 from pants.util.dirutil import read_file
24 from pants.util.eval import parse_expression
25 from pants.util.memo import memoized_method, memoized_property
26 from pants.util.ordered_set import FrozenOrderedSet
27 from pants.util.strutil import ensure_text
28
29 if TYPE_CHECKING:
30 from pants.build_graph.build_configuration import BuildConfiguration
31
32
33 @dataclass(frozen=True)
34 class OptionsBootstrapper:
35 """Holds the result of the first stage of options parsing, and assists with parsing full
36 options."""
37
38 env_tuples: tuple[tuple[str, str], ...]
39 bootstrap_args: tuple[str, ...]
40 args: tuple[str, ...]
41 config: Config
42 alias: CliAlias
43
44 def __repr__(self) -> str:
45 env = {pair[0]: pair[1] for pair in self.env_tuples}
46 # Bootstrap args are included in `args`. We also drop the first argument, which is the path
47 # to `pants_loader.py`.
48 args = list(self.args[1:])
49 return f"OptionsBootstrapper(args={args}, env={env}, config={self.config})"
50
51 @staticmethod
52 def get_config_file_paths(env, args) -> list[str]:
53 """Get the location of the config files.
54
55 The locations are specified by the --pants-config-files option. However we need to load the
56 config in order to process the options. This method special-cases --pants-config-files
57 in order to solve this chicken-and-egg problem.
58
59 Note that, obviously, it's not possible to set the location of config files in a config file.
60 Doing so will have no effect.
61 """
62 # This exactly mirrors the logic applied in Option to all regular options. Note that we'll
63 # also parse --pants-config as a regular option later, but there's no harm in that. In fact,
64 # it's preferable, so that any code that happens to want to know where we read config from
65 # can inspect the option.
66 flag = "--pants-config-files="
67 evars = [
68 "PANTS_GLOBAL_PANTS_CONFIG_FILES",
69 "PANTS_PANTS_CONFIG_FILES",
70 "PANTS_CONFIG_FILES",
71 ]
72
73 path_list_values = []
74 default = get_default_pants_config_file()
75 if Path(default).is_file():
76 path_list_values.append(ListValueComponent.create(default))
77 for var in evars:
78 if var in env:
79 path_list_values.append(ListValueComponent.create(env[var]))
80 break
81
82 for arg in args:
83 # Technically this is very slightly incorrect, as we don't check scope. But it's
84 # very unlikely that any task or subsystem will have an option named --pants-config-files.
85 # TODO: Enforce a ban on options with a --pants- prefix outside our global options?
86 if arg.startswith(flag):
87 path_list_values.append(ListValueComponent.create(arg[len(flag) :]))
88
89 return ListValueComponent.merge(path_list_values).val
90
91 @staticmethod
92 def parse_bootstrap_options(
93 env: Mapping[str, str], args: Sequence[str], config: Config
94 ) -> Options:
95 bootstrap_options = Options.create(
96 env=env,
97 config=config,
98 known_scope_infos=[GlobalOptions.get_scope_info()],
99 args=args,
100 )
101
102 for options_info in collect_options_info(BootstrapOptions):
103 # Only use of Options.register?
104 bootstrap_options.register(
105 GLOBAL_SCOPE, *options_info.flag_names, **options_info.flag_options
106 )
107
108 return bootstrap_options
109
110 @classmethod
111 def create(
112 cls, env: Mapping[str, str], args: Sequence[str], *, allow_pantsrc: bool
113 ) -> OptionsBootstrapper:
114 """Parses the minimum amount of configuration necessary to create an OptionsBootstrapper.
115
116 :param env: An environment dictionary.
117 :param args: An args array.
118 :param allow_pantsrc: True to allow pantsrc files to be used. Unless tests are expecting to
119 consume pantsrc files, they should pass False in order to avoid reading files from
120 absolute paths. Production usecases should pass True to allow options values to make the
121 decision of whether to respect pantsrc files.
122 """
123 with warnings.catch_warnings(record=True):
124 # We can't use pants.engine.fs.FileContent here because it would cause a circular dep.
125 @dataclass(frozen=True)
126 class FileContent:
127 path: str
128 content: bytes
129
130 def filecontent_for(path: str) -> FileContent:
131 return FileContent(
132 ensure_text(path),
133 read_file(path, binary_mode=True),
134 )
135
136 bargs = cls._get_bootstrap_args(args)
137
138 config_file_paths = cls.get_config_file_paths(env=env, args=args)
139 config_files_products = [filecontent_for(p) for p in config_file_paths]
140 pre_bootstrap_config = Config.load(config_files_products, env=env)
141
142 initial_bootstrap_options = cls.parse_bootstrap_options(
143 env, bargs, pre_bootstrap_config
144 )
145 bootstrap_option_values = initial_bootstrap_options.for_global_scope()
146
147 # Now re-read the config, post-bootstrapping. Note the order: First whatever we bootstrapped
148 # from (typically pants.toml), then config override, then rcfiles.
149 full_config_paths = pre_bootstrap_config.sources()
150 if allow_pantsrc and bootstrap_option_values.pantsrc:
151 rcfiles = [
152 os.path.expanduser(str(rcfile))
153 for rcfile in bootstrap_option_values.pantsrc_files
154 ]
155 existing_rcfiles = list(filter(os.path.exists, rcfiles))
156 full_config_paths.extend(existing_rcfiles)
157
158 full_config_files_products = [filecontent_for(p) for p in full_config_paths]
159 post_bootstrap_config = Config.load(
160 full_config_files_products,
161 seed_values=bootstrap_option_values.as_dict(),
162 env=env,
163 )
164
165 # Finally, we expand any aliases and re-populate the bootstrap args, in case there
166 # were any from aliases.
167 # stuhood: This could potentially break the rust client when aliases are used:
168 # https://github.com/pantsbuild/pants/pull/13228#discussion_r728223889
169 alias_vals = post_bootstrap_config.get("cli", "alias")
170 alias_dict = parse_expression(
171 name="cli.alias",
172 val=alias_vals[-1] if alias_vals else "{}",
173 acceptable_types=dict,
174 )
175 alias = CliAlias.from_dict(alias_dict)
176
177 args = alias.expand_args(tuple(args))
178 bargs = cls._get_bootstrap_args(args)
179
180 # We need to set this env var to allow various static help strings to reference the
181 # right name (via `pants.util.docutil`), and we need to do it as early as possible to
182 # avoid needing to lazily import code to avoid chicken-and-egg-problems. This is the
183 # earliest place it makes sense to do so and is generically used by both the local and
184 # remote pants runners.
185 os.environ["PANTS_BIN_NAME"] = bootstrap_option_values.pants_bin_name
186
187 env_tuples = tuple(
188 sorted(
189 (item for item in env.items() if item[0].startswith("PANTS_")),
190 )
191 )
192 return cls(
193 env_tuples=env_tuples,
194 bootstrap_args=bargs,
195 args=args,
196 config=post_bootstrap_config,
197 alias=alias,
198 )
199
200 @classmethod
201 def _get_bootstrap_args(cls, args: Sequence[str]) -> tuple[str, ...]:
202 # TODO(13244): there is a typing issue with `memoized_classmethod`.
203 options = GlobalOptions.get_options_flags() # type: ignore[call-arg]
204
205 def is_bootstrap_option(arg: str) -> bool:
206 components = arg.split("=", 1)
207 if components[0] in options.flags:
208 return True
209 for flag in options.short_flags:
210 if arg.startswith(flag):
211 return True
212 return False
213
214 # Take just the bootstrap args, so we don't choke on other global-scope args on the cmd line.
215 # Stop before '--' since args after that are pass-through and may have duplicate names to our
216 # bootstrap options.
217 bargs = ("<ignored>",) + tuple(
218 filter(is_bootstrap_option, itertools.takewhile(lambda arg: arg != "--", args))
219 )
220 return bargs
221
222 @memoized_property
223 def env(self) -> dict[str, str]:
224 return dict(self.env_tuples)
225
226 @memoized_property
227 def bootstrap_options(self) -> Options:
228 """The post-bootstrap options, computed from the env, args, and fully discovered Config.
229
230 Re-computing options after Config has been fully expanded allows us to pick up bootstrap values
231 (such as backends) from a config override file, for example.
232
233 Because this can be computed from the in-memory representation of these values, it is not part
234 of the object's identity.
235 """
236 return self.parse_bootstrap_options(self.env, self.bootstrap_args, self.config)
237
238 def get_bootstrap_options(self) -> Options:
239 """Returns an Options instance that only knows about the bootstrap options."""
240 return self.bootstrap_options
241
242 @memoized_method
243 def _full_options(
244 self, known_scope_infos: FrozenOrderedSet[ScopeInfo], allow_unknown_options: bool = False
245 ) -> Options:
246 bootstrap_option_values = self.get_bootstrap_options().for_global_scope()
247 options = Options.create(
248 self.env,
249 self.config,
250 known_scope_infos,
251 args=self.args,
252 bootstrap_option_values=bootstrap_option_values,
253 allow_unknown_options=allow_unknown_options,
254 )
255
256 distinct_subsystem_classes: set[type[Subsystem]] = set()
257 for ksi in known_scope_infos:
258 if not ksi.subsystem_cls or ksi.subsystem_cls in distinct_subsystem_classes:
259 continue
260 distinct_subsystem_classes.add(ksi.subsystem_cls)
261 ksi.subsystem_cls.register_options_on_scope(options)
262
263 return options
264
265 def full_options_for_scopes(
266 self, known_scope_infos: Iterable[ScopeInfo], allow_unknown_options: bool = False
267 ) -> Options:
268 """Get the full Options instance bootstrapped by this object for the given known scopes.
269
270 :param known_scope_infos: ScopeInfos for all scopes that may be encountered.
271 :returns: A bootstrapped Options instance that also carries options for all the supplied known
272 scopes.
273 """
274 return self._full_options(
275 FrozenOrderedSet(sorted(known_scope_infos, key=lambda si: si.scope)),
276 allow_unknown_options=allow_unknown_options,
277 )
278
279 def full_options(self, build_configuration: BuildConfiguration) -> Options:
280 global_bootstrap_options = self.get_bootstrap_options().for_global_scope()
281 if global_bootstrap_options.pants_version != pants_version():
282 raise BuildConfigurationError(
283 f"Version mismatch: Requested version was {global_bootstrap_options.pants_version}, "
284 f"our version is {pants_version()}."
285 )
286
287 # Parse and register options.
288 known_scope_infos = [
289 subsystem.get_scope_info() for subsystem in build_configuration.all_subsystems
290 ]
291 options = self.full_options_for_scopes(
292 known_scope_infos, allow_unknown_options=build_configuration.allow_unknown_options
293 )
294 GlobalOptions.validate_instance(options.for_global_scope())
295 self.alias.check_name_conflicts(options.known_scope_to_info)
296 return options
297
[end of src/python/pants/option/options_bootstrapper.py]
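The following is not part of the file above: a short, hypothetical sketch of the two-stage parsing that `OptionsBootstrapper` implements, roughly mirroring how the Pants launcher drives it. The printed option is just an example of a global bootstrap value.

```python
# Illustrative sketch (not from the repository): bootstrap options roughly the way the
# Pants launcher does, then read a global option value from the result.
import os
import sys

from pants.option.options_bootstrapper import OptionsBootstrapper

bootstrapper = OptionsBootstrapper.create(
    env=dict(os.environ),
    args=sys.argv,
    allow_pantsrc=True,  # tests typically pass False to avoid reading rc files
)

# First stage: only the bootstrap options (enough to locate config, plugins, etc.).
bootstrap_values = bootstrapper.get_bootstrap_options().for_global_scope()
print(bootstrap_values.pants_version)
```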
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
8cbcac89f5ea8f92cbfff41e7145e65e02c531cc
|
JVM: Add controls around memory usage
Before calling Java support an MVP, we'll need some controls around memory usage of spawned JVMs.
Any scope which allows for configuration of JVM options should have two pants-level options:
1. a max memory usage option (using the same helper as our other [max-memory-usage option](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/option/global_options.py#L814-L835))
* This is reified into its own option value because Pants needs to know it in order to compute how many instances can be spawned.
2. arbitrary JVM flags, with templating to support embedding the max memory usage option value.
* The templating is a convenience, to remove redundancy between this option and the first one. Something like:
```
-XX:+UseShenandoahGC -Xmx${PANTS_MAX_MEM_MB}mb
```
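Purely as an illustration of that templating convenience (not necessarily how it ends up being implemented), the substitution could look like the minimal sketch below; `expand_jvm_options` is a hypothetical helper name.

```python
# Hypothetical sketch of expanding the ${PANTS_MAX_MEM_MB} placeholder described above.
import string


def expand_jvm_options(options, max_mem_mb):
    # Substitute the computed per-process memory limit into each user-supplied flag.
    mapping = {"PANTS_MAX_MEM_MB": str(max_mem_mb)}
    return [string.Template(opt).safe_substitute(mapping) for opt in options]


print(expand_jvm_options(["-XX:+UseShenandoahGC", "-Xmx${PANTS_MAX_MEM_MB}mb"], 512))
# -> ['-XX:+UseShenandoahGC', '-Xmx512mb']
```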
The only scope that needs to be supported in a first cut is "global" (other scopes like per-tool or per-target can follow without much adjustment). Additionally, although we have separate `--tool-jdk` and `--jdk` options, it's easiest to ignore that until/unless we add a `jdk` target type.
----
In a first PR, we should statically determine how many nailgun servers to start based on two new [global options](https://github.com/pantsbuild/pants/blob/main/src/python/pants/option/global_options.py):
* a "total child process memory" option (as a neighbor to [`--process-execution-local-parallelism`](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/option/global_options.py#L1054-L1067)), which controls the total amount of memory we will use for processes which report their max usage.
* a "default child process memory" option, which controls the per-process maximum value.
...as well as adding one [JVM specific option](https://github.com/pantsbuild/pants/blob/main/src/python/pants/jvm/subsystems.py#L11-L24):
* A "global JVM options" list-valued flag which specifies templated global JVM options (as described above).
The two global options should end up on the [`ExecutionOptions`](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/option/global_options.py#L322-L328), which is then converted into [`(Py)ExecutionStrategyOptions`](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/engine/internals/scheduler.py#L202-L210), and eventually consumed to [create the local command runner](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/rust/engine/src/context.rs#L170). The values should be used to [statically compute a pool size in that method](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/rust/engine/src/context.rs#L189-L194), and the per-child-process value should be stored as a field of the `nailgun::CommandRunner`.
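The pool-size computation itself would live in the Rust code linked above; the arithmetic it performs amounts to the following sketch (shown in Python for readability, and ignoring the separate local-parallelism cap, which a real implementation would presumably still respect).

```python
# Sketch of the static pool-size arithmetic described above: how many child JVMs
# fit in the total memory budget, never fewer than one.
def nailgun_pool_size(total_child_memory_bytes, per_child_memory_bytes):
    return max(1, total_child_memory_bytes // per_child_memory_bytes)


GiB = 1024 ** 3
MiB = 1024 ** 2
print(nailgun_pool_size(2 * GiB, 512 * MiB))  # -> 4 nailgun servers
```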
Applying the configured max memory to the templated argument list should likely (initially) occur at [the top of the nailgun command runner](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/rust/engine/process_execution/src/nailgun/mod.rs#L112-L123). The nailgun command runner should apply string templating to fill in the per-child-process value that was stashed earlier.
Finally: to add JDK arguments, the "global JVM options" flag should be consumed by the [`JvmProcess` handling @rule](https://github.com/pantsbuild/pants/blob/ce9830a03142070ce6e5cfa30a856900641ac661/src/python/pants/jvm/jdk_rules.py#L343-L376) to add the options to the right spot in the `Process` args.
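As a small sketch of "the right spot" in the `Process` args (mirroring the structure of the `jvm_process` rule shown earlier): the resolved JVM options would sit between the JDK launcher arguments and the request's own argv. The classpath entries and main class below are illustrative.

```python
# Sketch only: where resolved JVM options would be spliced into the final argv.
def build_argv(jdk_launcher_args, jvm_options, request_argv):
    return [*jdk_launcher_args, *jvm_options, *request_argv]


print(build_argv(
    ["bash", "__jdk/jdk.sh", "__java_home/bin/java", "-cp", "nailgun.jar:tool.jar"],
    ["-Xmx512m", "-XX:+UseShenandoahGC"],
    ["com.example.Main", "--flag"],
))
```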
----
Future work:
* Support for per-tool or per-target JVM options
* Dynamically allowing more JVMs based on their actual heap usage
|
Not supporting per-tool options might be too barebones: in particular, the first user report of a need for options was about allowing reflective access in tests. While allowing that globally might also work, we'll likely want to survey some example JVM option usage in existing repos to see what the MVP should look like here.
EDIT: To the extent that these might be specified per-target, it relates to #13882.
|
2022-04-22T12:31:43Z
|
<patch>
diff --git a/src/python/pants/backend/kotlin/dependency_inference/kotlin_parser.py b/src/python/pants/backend/kotlin/dependency_inference/kotlin_parser.py
--- a/src/python/pants/backend/kotlin/dependency_inference/kotlin_parser.py
+++ b/src/python/pants/backend/kotlin/dependency_inference/kotlin_parser.py
@@ -157,7 +157,7 @@ async def analyze_kotlin_source_dependencies(
# Use JDK 8 due to https://youtrack.jetbrains.com/issue/KTIJ-17192 and https://youtrack.jetbrains.com/issue/KT-37446.
request = JdkRequest("adopt:8")
env = await Get(JdkEnvironment, JdkRequest, request)
- jdk = InternalJdk(env._digest, env.nailgun_jar, env.coursier, env.jre_major_version)
+ jdk = InternalJdk.from_jdk_environment(env)
if len(source_files.files) > 1:
raise ValueError(
diff --git a/src/python/pants/engine/internals/scheduler.py b/src/python/pants/engine/internals/scheduler.py
--- a/src/python/pants/engine/internals/scheduler.py
+++ b/src/python/pants/engine/internals/scheduler.py
@@ -207,6 +207,8 @@ def __init__(
local_parallelism=execution_options.process_execution_local_parallelism,
local_enable_nailgun=execution_options.process_execution_local_enable_nailgun,
remote_parallelism=execution_options.process_execution_remote_parallelism,
+ child_max_memory=execution_options.process_total_child_memory_usage or 0,
+ child_default_memory=execution_options.process_per_child_memory_usage,
)
self._py_scheduler = native_engine.scheduler_create(
diff --git a/src/python/pants/jvm/jdk_rules.py b/src/python/pants/jvm/jdk_rules.py
--- a/src/python/pants/jvm/jdk_rules.py
+++ b/src/python/pants/jvm/jdk_rules.py
@@ -26,9 +26,12 @@
from pants.jvm.resolve.coursier_setup import Coursier
from pants.jvm.subsystems import JvmSubsystem
from pants.jvm.target_types import JvmJdkField
+from pants.option.global_options import GlobalOptions
+from pants.util.docutil import bin_name
from pants.util.frozendict import FrozenDict
from pants.util.logging import LogLevel
from pants.util.meta import classproperty, frozen_after_init
+from pants.util.strutil import fmt_memory_size, softwrap
logger = logging.getLogger(__name__)
@@ -91,6 +94,7 @@ class JdkEnvironment:
nailgun_jar: str
coursier: Coursier
jre_major_version: int
+ global_jvm_options: tuple[str, ...]
bin_dir: ClassVar[str] = "__jdk"
jdk_preparation_script: ClassVar[str] = f"{bin_dir}/jdk.sh"
@@ -126,6 +130,16 @@ class InternalJdk(JdkEnvironment):
firstparty or thirdparty code (such as for codegen, or analysis of source files).
"""
+ @classmethod
+ def from_jdk_environment(cls, env: JdkEnvironment) -> InternalJdk:
+ return cls(
+ env._digest,
+ env.nailgun_jar,
+ env.coursier,
+ env.jre_major_version,
+ env.global_jvm_options,
+ )
+
VERSION_REGEX = re.compile(r"version \"(.+?)\"")
@@ -168,7 +182,7 @@ async def internal_jdk(jvm: JvmSubsystem) -> InternalJdk:
request = JdkRequest(jvm.tool_jdk) if jvm.tool_jdk is not None else JdkRequest.SYSTEM
env = await Get(JdkEnvironment, JdkRequest, request)
- return InternalJdk(env._digest, env.nailgun_jar, env.coursier, env.jre_major_version)
+ return InternalJdk.from_jdk_environment(env)
@rule
@@ -264,6 +278,7 @@ def prefixed(arg: str) -> str:
]
),
)
+
return JdkEnvironment(
_digest=await Get(
Digest,
@@ -274,6 +289,7 @@ def prefixed(arg: str) -> str:
]
),
),
+ global_jvm_options=jvm.global_options,
nailgun_jar=os.path.join(JdkEnvironment.bin_dir, nailgun.filenames[0]),
coursier=coursier,
jre_major_version=jre_major_version,
@@ -289,6 +305,7 @@ class JvmProcess:
input_digest: Digest
description: str = dataclasses.field(compare=False)
level: LogLevel
+ extra_jvm_options: tuple[str, ...]
extra_nailgun_keys: tuple[str, ...]
output_files: tuple[str, ...]
output_directories: tuple[str, ...]
@@ -307,6 +324,7 @@ def __init__(
input_digest: Digest,
description: str,
level: LogLevel = LogLevel.INFO,
+ extra_jvm_options: Iterable[str] | None = None,
extra_nailgun_keys: Iterable[str] | None = None,
output_files: Iterable[str] | None = None,
output_directories: Iterable[str] | None = None,
@@ -323,6 +341,7 @@ def __init__(
self.input_digest = input_digest
self.description = description
self.level = level
+ self.extra_jvm_options = tuple(extra_jvm_options or ())
self.extra_nailgun_keys = tuple(extra_nailgun_keys or ())
self.output_files = tuple(output_files or ())
self.output_directories = tuple(output_directories or ())
@@ -340,8 +359,13 @@ def __init__(
)
+_JVM_HEAP_SIZE_UNITS = ["", "k", "m", "g"]
+
+
@rule
-async def jvm_process(bash: BashBinary, request: JvmProcess) -> Process:
+async def jvm_process(
+ bash: BashBinary, request: JvmProcess, global_options: GlobalOptions
+) -> Process:
jdk = request.jdk
@@ -355,12 +379,37 @@ async def jvm_process(bash: BashBinary, request: JvmProcess) -> Process:
**request.extra_env,
}
+ def valid_jvm_opt(opt: str) -> str:
+ if opt.startswith("-Xmx"):
+ raise ValueError(
+ softwrap(
+ f"""
+ Invalid value for JVM options: {opt}.
+
+ For setting a maximum heap size for the JVM child processes, use
+ `[GLOBAL].process_per_child_memory_usage` option instead.
+
+ Run `{bin_name()} help-advanced global` for more information.
+ """
+ )
+ )
+ return opt
+
+ max_heap_size = fmt_memory_size(
+ global_options.process_per_child_memory_usage, units=_JVM_HEAP_SIZE_UNITS
+ )
+ jvm_user_options = [*jdk.global_jvm_options, *request.extra_jvm_options]
+ jvm_options = [
+ f"-Xmx{max_heap_size}",
+ *[valid_jvm_opt(opt) for opt in jvm_user_options],
+ ]
+
use_nailgun = []
if request.use_nailgun:
use_nailgun = [*jdk.immutable_input_digests, *request.extra_nailgun_keys]
return Process(
- [*jdk.args(bash, request.classpath_entries), *request.argv],
+ [*jdk.args(bash, request.classpath_entries), *jvm_options, *request.argv],
input_digest=request.input_digest,
immutable_input_digests=immutable_input_digests,
use_nailgun=use_nailgun,
diff --git a/src/python/pants/jvm/subsystems.py b/src/python/pants/jvm/subsystems.py
--- a/src/python/pants/jvm/subsystems.py
+++ b/src/python/pants/jvm/subsystems.py
@@ -79,3 +79,16 @@ class JvmSubsystem(Subsystem):
"""
),
)
+ global_options = StrListOption(
+ "--global-options",
+ help=softwrap(
+ """
+ List of JVM options to pass to all JVM processes.
+
+ Options set here will be used by any JVM processes required by Pants, with
+ the exception of heap memory settings like `-Xmx`, which need to be set
+ using `[GLOBAL].process_total_child_memory_usage` and `[GLOBAL].process_per_child_memory_usage`.
+ """
+ ),
+ advanced=True,
+ )
diff --git a/src/python/pants/option/global_options.py b/src/python/pants/option/global_options.py
--- a/src/python/pants/option/global_options.py
+++ b/src/python/pants/option/global_options.py
@@ -50,7 +50,7 @@
from pants.util.memo import memoized_classmethod, memoized_property
from pants.util.ordered_set import FrozenOrderedSet, OrderedSet
from pants.util.osutil import CPU_COUNT
-from pants.util.strutil import softwrap
+from pants.util.strutil import fmt_memory_size, softwrap
from pants.version import VERSION
logger = logging.getLogger(__name__)
@@ -341,6 +341,9 @@ class ExecutionOptions:
process_execution_remote_parallelism: int
process_execution_cache_namespace: str | None
+ process_total_child_memory_usage: int | None
+ process_per_child_memory_usage: int
+
remote_store_address: str | None
remote_store_headers: dict[str, str]
remote_store_chunk_bytes: Any
@@ -381,6 +384,8 @@ def from_options(
process_execution_remote_parallelism=dynamic_remote_options.parallelism,
process_execution_cache_namespace=bootstrap_options.process_execution_cache_namespace,
process_execution_local_enable_nailgun=bootstrap_options.process_execution_local_enable_nailgun,
+ process_total_child_memory_usage=bootstrap_options.process_total_child_memory_usage,
+ process_per_child_memory_usage=bootstrap_options.process_per_child_memory_usage,
# Remote store setup.
remote_store_address=dynamic_remote_options.store_address,
remote_store_headers=dynamic_remote_options.store_headers,
@@ -453,6 +458,8 @@ def from_options(cls, options: OptionValueContainer) -> LocalStoreOptions:
remote_instance_name=None,
remote_ca_certs_path=None,
# Process execution setup.
+ process_total_child_memory_usage=None,
+ process_per_child_memory_usage=memory_size("512MiB"),
process_execution_local_parallelism=CPU_COUNT,
process_execution_remote_parallelism=128,
process_execution_cache_namespace=None,
@@ -1051,6 +1058,46 @@ class BootstrapOptions:
"""
),
)
+ process_total_child_memory_usage = MemorySizeOption(
+ "--process-total-child-memory-usage",
+ advanced=True,
+ default=None,
+ default_help_repr="1GiB",
+ help=softwrap(
+ """
+ The maximum memory usage for all child processes.
+
+ This value participates in precomputing the pool size of child processes used by
+ `pantsd`. A high value would result in a high number of child processes spawned,
+ potentially overconsuming your resources and triggering the OS' OOM killer. A low
+ value would mean a low number of child processes launched and therefore less
+ paralellism for the tasks that need those processes.
+
+ If setting this value, consider also setting a value for the `process-per-child-memory-usage`
+ option too.
+
+ You can suffix with `GiB`, `MiB`, `KiB`, or `B` to indicate the unit, e.g.
+ `2GiB` or `2.12GiB`. A bare number will be in bytes.
+ """
+ ),
+ )
+ process_per_child_memory_usage = MemorySizeOption(
+ "--process-per-child-memory-usage",
+ advanced=True,
+ default=DEFAULT_EXECUTION_OPTIONS.process_per_child_memory_usage,
+ default_help_repr="512MiB",
+ help=softwrap(
+ """
+ The default memory usage for a child process.
+
+ Check the documentation for the `process-total-child-memory-usage` for advice on
+ how to choose an appropriate value for this option.
+
+ You can suffix with `GiB`, `MiB`, `KiB`, or `B` to indicate the unit, e.g.
+ `2GiB` or `2.12GiB`. A bare number will be in bytes.
+ """
+ ),
+ )
process_execution_local_parallelism = IntOption(
_process_execution_local_parallelism_flag,
default=DEFAULT_EXECUTION_OPTIONS.process_execution_local_parallelism,
@@ -1550,6 +1597,23 @@ def validate_instance(cls, opts):
f"{opts.rule_threads_core}."
)
+ if (
+ opts.process_total_child_memory_usage is not None
+ and opts.process_total_child_memory_usage < opts.process_per_child_memory_usage
+ ):
+ raise OptionsError(
+ softwrap(
+ f"""
+ Nailgun pool can not be initialised as the total amount of memory allowed is \
+ smaller than the memory allocation for a single child process.
+
+ - total child process memory allowed: {fmt_memory_size(opts.process_total_child_memory_usage)}
+
+ - default child process memory: {fmt_memory_size(opts.process_per_child_memory_usage)}
+ """
+ )
+ )
+
if opts.remote_execution and (opts.remote_cache_read or opts.remote_cache_write):
raise OptionsError(
softwrap(
diff --git a/src/python/pants/util/strutil.py b/src/python/pants/util/strutil.py
--- a/src/python/pants/util/strutil.py
+++ b/src/python/pants/util/strutil.py
@@ -258,3 +258,24 @@ def softwrap(text: str) -> str:
result_strs.append(" ")
return "".join(result_strs).rstrip()
+
+
+_MEMORY_UNITS = ["B", "KiB", "MiB", "GiB"]
+
+
+def fmt_memory_size(value: int, *, units: Iterable[str] = _MEMORY_UNITS) -> str:
+ """Formats a numeric value as amount of bytes alongside the biggest byte-based unit from the
+ list that represents the same amount without using decimals."""
+
+ if not units:
+ return str(value)
+
+ amount = value
+ unit_idx = 0
+
+ units = tuple(units)
+ while (amount >= 1024 and amount % 1024 == 0) and unit_idx < len(units) - 1:
+ amount = int(amount / 1024)
+ unit_idx += 1
+
+ return f"{int(amount)}{units[unit_idx]}"
</patch>
|
[]
|
[]
| |||
pyca__cryptography-3750
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add the ANY extended key usage OID to the ExtendedKeyUsageOID class
`2.5.29.37.0`
</issue>
<code>
[start of README.rst]
1 pyca/cryptography
2 =================
3
4 .. image:: https://img.shields.io/pypi/v/cryptography.svg
5 :target: https://pypi.python.org/pypi/cryptography/
6 :alt: Latest Version
7
8 .. image:: https://readthedocs.org/projects/cryptography/badge/?version=latest
9 :target: https://cryptography.io
10 :alt: Latest Docs
11
12 .. image:: https://travis-ci.org/pyca/cryptography.svg?branch=master
13 :target: https://travis-ci.org/pyca/cryptography
14
15 .. image:: https://codecov.io/github/pyca/cryptography/coverage.svg?branch=master
16 :target: https://codecov.io/github/pyca/cryptography?branch=master
17
18
19 ``cryptography`` is a package which provides cryptographic recipes and
20 primitives to Python developers. Our goal is for it to be your "cryptographic
21 standard library". It supports Python 2.6-2.7, Python 3.4+, and PyPy 5.3+.
22
23 ``cryptography`` includes both high level recipes and low level interfaces to
24 common cryptographic algorithms such as symmetric ciphers, message digests, and
25 key derivation functions. For example, to encrypt something with
26 ``cryptography``'s high level symmetric encryption recipe:
27
28 .. code-block:: pycon
29
30 >>> from cryptography.fernet import Fernet
31 >>> # Put this somewhere safe!
32 >>> key = Fernet.generate_key()
33 >>> f = Fernet(key)
34 >>> token = f.encrypt(b"A really secret message. Not for prying eyes.")
35 >>> token
36 '...'
37 >>> f.decrypt(token)
38 'A really secret message. Not for prying eyes.'
39
40 You can find more information in the `documentation`_.
41
42 You can install ``cryptography`` with:
43
44 .. code-block:: console
45
46 $ pip install cryptography
47
48 For full details see `the installation documentation`_.
49
50 Discussion
51 ~~~~~~~~~~
52
53 If you run into bugs, you can file them in our `issue tracker`_.
54
55 We maintain a `cryptography-dev`_ mailing list for development discussion.
56
57 You can also join ``#cryptography-dev`` on Freenode to ask questions or get
58 involved.
59
60
61 .. _`documentation`: https://cryptography.io/
62 .. _`the installation documentation`: https://cryptography.io/en/latest/installation/
63 .. _`issue tracker`: https://github.com/pyca/cryptography/issues
64 .. _`cryptography-dev`: https://mail.python.org/mailman/listinfo/cryptography-dev
65
[end of README.rst]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 #
8 # Cryptography documentation build configuration file, created by
9 # sphinx-quickstart on Tue Aug 6 19:19:14 2013.
10 #
11 # This file is execfile()d with the current directory set to its containing dir
12 #
13 # Note that not all possible configuration values are present in this
14 # autogenerated file.
15 #
16 # All configuration values have a default; values that are commented out
17 # serve to show the default.
18
19 from __future__ import absolute_import, division, print_function
20
21 import os
22 import sys
23
24 try:
25 import sphinx_rtd_theme
26 except ImportError:
27 sphinx_rtd_theme = None
28
29 try:
30 from sphinxcontrib import spelling
31 except ImportError:
32 spelling = None
33
34
35 # If extensions (or modules to document with autodoc) are in another directory,
36 # add these directories to sys.path here. If the directory is relative to the
37 # documentation root, use os.path.abspath to make it absolute, like shown here.
38 sys.path.insert(0, os.path.abspath('.'))
39
40 # -- General configuration ----------------------------------------------------
41
42 # If your documentation needs a minimal Sphinx version, state it here.
43 # needs_sphinx = '1.0'
44
45 # Add any Sphinx extension module names here, as strings. They can be
46 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
47 extensions = [
48 'sphinx.ext.autodoc',
49 'sphinx.ext.doctest',
50 'sphinx.ext.intersphinx',
51 'sphinx.ext.viewcode',
52 'cryptography-docs',
53 ]
54
55 if spelling is not None:
56 extensions.append('sphinxcontrib.spelling')
57
58 # Add any paths that contain templates here, relative to this directory.
59 templates_path = ['_templates']
60
61 nitpicky = True
62
63 # The suffix of source filenames.
64 source_suffix = '.rst'
65
66 # The encoding of source files.
67 # source_encoding = 'utf-8-sig'
68
69 # The master toctree document.
70 master_doc = 'index'
71
72 # General information about the project.
73 project = 'Cryptography'
74 copyright = '2013-2017, Individual Contributors'
75
76 # The version info for the project you're documenting, acts as replacement for
77 # |version| and |release|, also used in various other places throughout the
78 # built documents.
79 #
80
81 base_dir = os.path.join(os.path.dirname(__file__), os.pardir)
82 about = {}
83 with open(os.path.join(base_dir, "src", "cryptography", "__about__.py")) as f:
84 exec(f.read(), about)
85
86 version = release = about["__version__"]
87
88 # The language for content autogenerated by Sphinx. Refer to documentation
89 # for a list of supported languages.
90 # language = None
91
92 # There are two options for replacing |today|: either, you set today to some
93 # non-false value, then it is used:
94 # today = ''
95 # Else, today_fmt is used as the format for a strftime call.
96 # today_fmt = '%B %d, %Y'
97
98 # List of patterns, relative to source directory, that match files and
99 # directories to ignore when looking for source files.
100 exclude_patterns = ['_build']
101
102 # The reST default role (used for this markup: `text`) to use for all documents
103 # default_role = None
104
105 # If true, '()' will be appended to :func: etc. cross-reference text.
106 # add_function_parentheses = True
107
108 # If true, the current module name will be prepended to all description
109 # unit titles (such as .. function::).
110 # add_module_names = True
111
112 # If true, sectionauthor and moduleauthor directives will be shown in the
113 # output. They are ignored by default.
114 # show_authors = False
115
116 # The name of the Pygments (syntax highlighting) style to use.
117 pygments_style = 'sphinx'
118
119 # -- Options for HTML output --------------------------------------------------
120
121 # The theme to use for HTML and HTML Help pages. See the documentation for
122 # a list of builtin themes.
123
124 if sphinx_rtd_theme:
125 html_theme = "sphinx_rtd_theme"
126 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
127 else:
128 html_theme = "default"
129
130 # Add any paths that contain custom static files (such as style sheets) here,
131 # relative to this directory. They are copied after the builtin static files,
132 # so a file named "default.css" will overwrite the builtin "default.css".
133 html_static_path = ['_static']
134
135 # Output file base name for HTML help builder.
136 htmlhelp_basename = 'Cryptographydoc'
137
138
139 # -- Options for LaTeX output -------------------------------------------------
140
141 latex_elements = {
142 }
143
144 # Grouping the document tree into LaTeX files. List of tuples
145 # (source start file, target name, title, author, documentclass [howto/manual])
146 latex_documents = [
147 ('index', 'Cryptography.tex', 'Cryptography Documentation',
148 'Individual Contributors', 'manual'),
149 ]
150
151 # -- Options for manual page output -------------------------------------------
152
153 # One entry per manual page. List of tuples
154 # (source start file, name, description, authors, manual section).
155 man_pages = [
156 ('index', 'cryptography', 'Cryptography Documentation',
157 ['Individual Contributors'], 1)
158 ]
159
160 # -- Options for Texinfo output -----------------------------------------------
161
162 # Grouping the document tree into Texinfo files. List of tuples
163 # (source start file, target name, title, author,
164 # dir menu entry, description, category)
165 texinfo_documents = [
166 ('index', 'Cryptography', 'Cryptography Documentation',
167 'Individual Contributors', 'Cryptography',
168 'One line description of project.',
169 'Miscellaneous'),
170 ]
171
172 # Example configuration for intersphinx: refer to the Python standard library.
173 intersphinx_mapping = {'https://docs.python.org/3': None}
174
175 epub_theme = 'epub'
176
177 # Retry requests in the linkcheck builder so that we're resilient against
178 # transient network errors.
179 linkcheck_retries = 5
180
181 linkcheck_ignore = [
182 # Certificate is issued by a Japanese CA that isn't publicly trusted
183 "https://www.cryptrec.go.jp",
184 ]
185
[end of docs/conf.py]
[start of src/cryptography/x509/extensions.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 import datetime
9 import hashlib
10 import ipaddress
11 from enum import Enum
12
13 from asn1crypto.keys import PublicKeyInfo
14
15 import six
16
17 from cryptography import utils
18 from cryptography.hazmat.primitives import constant_time, serialization
19 from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurvePublicKey
20 from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicKey
21 from cryptography.x509.certificate_transparency import (
22 SignedCertificateTimestamp
23 )
24 from cryptography.x509.general_name import GeneralName, IPAddress, OtherName
25 from cryptography.x509.name import RelativeDistinguishedName
26 from cryptography.x509.oid import (
27 CRLEntryExtensionOID, ExtensionOID, ObjectIdentifier
28 )
29
30
31 def _key_identifier_from_public_key(public_key):
32 if isinstance(public_key, RSAPublicKey):
33 data = public_key.public_bytes(
34 serialization.Encoding.DER,
35 serialization.PublicFormat.PKCS1,
36 )
37 elif isinstance(public_key, EllipticCurvePublicKey):
38 data = public_key.public_numbers().encode_point()
39 else:
40 # This is a very slow way to do this.
41 serialized = public_key.public_bytes(
42 serialization.Encoding.DER,
43 serialization.PublicFormat.SubjectPublicKeyInfo
44 )
45
46 data = six.binary_type(PublicKeyInfo.load(serialized)['public_key'])
47
48 return hashlib.sha1(data).digest()
49
50
51 class DuplicateExtension(Exception):
52 def __init__(self, msg, oid):
53 super(DuplicateExtension, self).__init__(msg)
54 self.oid = oid
55
56
57 class UnsupportedExtension(Exception):
58 pass
59
60
61 class ExtensionNotFound(Exception):
62 def __init__(self, msg, oid):
63 super(ExtensionNotFound, self).__init__(msg)
64 self.oid = oid
65
66
67 @six.add_metaclass(abc.ABCMeta)
68 class ExtensionType(object):
69 @abc.abstractproperty
70 def oid(self):
71 """
72 Returns the oid associated with the given extension type.
73 """
74
75
76 class Extensions(object):
77 def __init__(self, extensions):
78 self._extensions = extensions
79
80 def get_extension_for_oid(self, oid):
81 for ext in self:
82 if ext.oid == oid:
83 return ext
84
85 raise ExtensionNotFound("No {0} extension was found".format(oid), oid)
86
87 def get_extension_for_class(self, extclass):
88 if extclass is UnrecognizedExtension:
89 raise TypeError(
90 "UnrecognizedExtension can't be used with "
91 "get_extension_for_class because more than one instance of the"
92 " class may be present."
93 )
94
95 for ext in self:
96 if isinstance(ext.value, extclass):
97 return ext
98
99 raise ExtensionNotFound(
100 "No {0} extension was found".format(extclass), extclass.oid
101 )
102
103 def __iter__(self):
104 return iter(self._extensions)
105
106 def __len__(self):
107 return len(self._extensions)
108
109 def __getitem__(self, idx):
110 return self._extensions[idx]
111
112 def __repr__(self):
113 return (
114 "<Extensions({0})>".format(self._extensions)
115 )
116
117
118 @utils.register_interface(ExtensionType)
119 class CRLNumber(object):
120 oid = ExtensionOID.CRL_NUMBER
121
122 def __init__(self, crl_number):
123 if not isinstance(crl_number, six.integer_types):
124 raise TypeError("crl_number must be an integer")
125
126 self._crl_number = crl_number
127
128 def __eq__(self, other):
129 if not isinstance(other, CRLNumber):
130 return NotImplemented
131
132 return self.crl_number == other.crl_number
133
134 def __ne__(self, other):
135 return not self == other
136
137 def __hash__(self):
138 return hash(self.crl_number)
139
140 def __repr__(self):
141 return "<CRLNumber({0})>".format(self.crl_number)
142
143 crl_number = utils.read_only_property("_crl_number")
144
145
146 @utils.register_interface(ExtensionType)
147 class AuthorityKeyIdentifier(object):
148 oid = ExtensionOID.AUTHORITY_KEY_IDENTIFIER
149
150 def __init__(self, key_identifier, authority_cert_issuer,
151 authority_cert_serial_number):
152 if (authority_cert_issuer is None) != (
153 authority_cert_serial_number is None
154 ):
155 raise ValueError(
156 "authority_cert_issuer and authority_cert_serial_number "
157 "must both be present or both None"
158 )
159
160 if authority_cert_issuer is not None:
161 authority_cert_issuer = list(authority_cert_issuer)
162 if not all(
163 isinstance(x, GeneralName) for x in authority_cert_issuer
164 ):
165 raise TypeError(
166 "authority_cert_issuer must be a list of GeneralName "
167 "objects"
168 )
169
170 if authority_cert_serial_number is not None and not isinstance(
171 authority_cert_serial_number, six.integer_types
172 ):
173 raise TypeError(
174 "authority_cert_serial_number must be an integer"
175 )
176
177 self._key_identifier = key_identifier
178 self._authority_cert_issuer = authority_cert_issuer
179 self._authority_cert_serial_number = authority_cert_serial_number
180
181 @classmethod
182 def from_issuer_public_key(cls, public_key):
183 digest = _key_identifier_from_public_key(public_key)
184 return cls(
185 key_identifier=digest,
186 authority_cert_issuer=None,
187 authority_cert_serial_number=None
188 )
189
190 @classmethod
191 def from_issuer_subject_key_identifier(cls, ski):
192 return cls(
193 key_identifier=ski.value.digest,
194 authority_cert_issuer=None,
195 authority_cert_serial_number=None
196 )
197
198 def __repr__(self):
199 return (
200 "<AuthorityKeyIdentifier(key_identifier={0.key_identifier!r}, "
201 "authority_cert_issuer={0.authority_cert_issuer}, "
202 "authority_cert_serial_number={0.authority_cert_serial_number}"
203 ")>".format(self)
204 )
205
206 def __eq__(self, other):
207 if not isinstance(other, AuthorityKeyIdentifier):
208 return NotImplemented
209
210 return (
211 self.key_identifier == other.key_identifier and
212 self.authority_cert_issuer == other.authority_cert_issuer and
213 self.authority_cert_serial_number ==
214 other.authority_cert_serial_number
215 )
216
217 def __ne__(self, other):
218 return not self == other
219
220 key_identifier = utils.read_only_property("_key_identifier")
221 authority_cert_issuer = utils.read_only_property("_authority_cert_issuer")
222 authority_cert_serial_number = utils.read_only_property(
223 "_authority_cert_serial_number"
224 )
225
226
227 @utils.register_interface(ExtensionType)
228 class SubjectKeyIdentifier(object):
229 oid = ExtensionOID.SUBJECT_KEY_IDENTIFIER
230
231 def __init__(self, digest):
232 self._digest = digest
233
234 @classmethod
235 def from_public_key(cls, public_key):
236 return cls(_key_identifier_from_public_key(public_key))
237
238 digest = utils.read_only_property("_digest")
239
240 def __repr__(self):
241 return "<SubjectKeyIdentifier(digest={0!r})>".format(self.digest)
242
243 def __eq__(self, other):
244 if not isinstance(other, SubjectKeyIdentifier):
245 return NotImplemented
246
247 return constant_time.bytes_eq(self.digest, other.digest)
248
249 def __ne__(self, other):
250 return not self == other
251
252 def __hash__(self):
253 return hash(self.digest)
254
255
256 @utils.register_interface(ExtensionType)
257 class AuthorityInformationAccess(object):
258 oid = ExtensionOID.AUTHORITY_INFORMATION_ACCESS
259
260 def __init__(self, descriptions):
261 descriptions = list(descriptions)
262 if not all(isinstance(x, AccessDescription) for x in descriptions):
263 raise TypeError(
264 "Every item in the descriptions list must be an "
265 "AccessDescription"
266 )
267
268 self._descriptions = descriptions
269
270 def __iter__(self):
271 return iter(self._descriptions)
272
273 def __len__(self):
274 return len(self._descriptions)
275
276 def __repr__(self):
277 return "<AuthorityInformationAccess({0})>".format(self._descriptions)
278
279 def __eq__(self, other):
280 if not isinstance(other, AuthorityInformationAccess):
281 return NotImplemented
282
283 return self._descriptions == other._descriptions
284
285 def __ne__(self, other):
286 return not self == other
287
288 def __getitem__(self, idx):
289 return self._descriptions[idx]
290
291
292 class AccessDescription(object):
293 def __init__(self, access_method, access_location):
294 if not isinstance(access_method, ObjectIdentifier):
295 raise TypeError("access_method must be an ObjectIdentifier")
296
297 if not isinstance(access_location, GeneralName):
298 raise TypeError("access_location must be a GeneralName")
299
300 self._access_method = access_method
301 self._access_location = access_location
302
303 def __repr__(self):
304 return (
305 "<AccessDescription(access_method={0.access_method}, access_locati"
306 "on={0.access_location})>".format(self)
307 )
308
309 def __eq__(self, other):
310 if not isinstance(other, AccessDescription):
311 return NotImplemented
312
313 return (
314 self.access_method == other.access_method and
315 self.access_location == other.access_location
316 )
317
318 def __ne__(self, other):
319 return not self == other
320
321 def __hash__(self):
322 return hash((self.access_method, self.access_location))
323
324 access_method = utils.read_only_property("_access_method")
325 access_location = utils.read_only_property("_access_location")
326
327
328 @utils.register_interface(ExtensionType)
329 class BasicConstraints(object):
330 oid = ExtensionOID.BASIC_CONSTRAINTS
331
332 def __init__(self, ca, path_length):
333 if not isinstance(ca, bool):
334 raise TypeError("ca must be a boolean value")
335
336 if path_length is not None and not ca:
337 raise ValueError("path_length must be None when ca is False")
338
339 if (
340 path_length is not None and
341 (not isinstance(path_length, six.integer_types) or path_length < 0)
342 ):
343 raise TypeError(
344 "path_length must be a non-negative integer or None"
345 )
346
347 self._ca = ca
348 self._path_length = path_length
349
350 ca = utils.read_only_property("_ca")
351 path_length = utils.read_only_property("_path_length")
352
353 def __repr__(self):
354 return ("<BasicConstraints(ca={0.ca}, "
355 "path_length={0.path_length})>").format(self)
356
357 def __eq__(self, other):
358 if not isinstance(other, BasicConstraints):
359 return NotImplemented
360
361 return self.ca == other.ca and self.path_length == other.path_length
362
363 def __ne__(self, other):
364 return not self == other
365
366 def __hash__(self):
367 return hash((self.ca, self.path_length))
368
369
370 @utils.register_interface(ExtensionType)
371 class CRLDistributionPoints(object):
372 oid = ExtensionOID.CRL_DISTRIBUTION_POINTS
373
374 def __init__(self, distribution_points):
375 distribution_points = list(distribution_points)
376 if not all(
377 isinstance(x, DistributionPoint) for x in distribution_points
378 ):
379 raise TypeError(
380 "distribution_points must be a list of DistributionPoint "
381 "objects"
382 )
383
384 self._distribution_points = distribution_points
385
386 def __iter__(self):
387 return iter(self._distribution_points)
388
389 def __len__(self):
390 return len(self._distribution_points)
391
392 def __repr__(self):
393 return "<CRLDistributionPoints({0})>".format(self._distribution_points)
394
395 def __eq__(self, other):
396 if not isinstance(other, CRLDistributionPoints):
397 return NotImplemented
398
399 return self._distribution_points == other._distribution_points
400
401 def __ne__(self, other):
402 return not self == other
403
404 def __getitem__(self, idx):
405 return self._distribution_points[idx]
406
407
408 class DistributionPoint(object):
409 def __init__(self, full_name, relative_name, reasons, crl_issuer):
410 if full_name and relative_name:
411 raise ValueError(
412 "You cannot provide both full_name and relative_name, at "
413 "least one must be None."
414 )
415
416 if full_name:
417 full_name = list(full_name)
418 if not all(isinstance(x, GeneralName) for x in full_name):
419 raise TypeError(
420 "full_name must be a list of GeneralName objects"
421 )
422
423 if relative_name:
424 if not isinstance(relative_name, RelativeDistinguishedName):
425 raise TypeError(
426 "relative_name must be a RelativeDistinguishedName"
427 )
428
429 if crl_issuer:
430 crl_issuer = list(crl_issuer)
431 if not all(isinstance(x, GeneralName) for x in crl_issuer):
432 raise TypeError(
433 "crl_issuer must be None or a list of general names"
434 )
435
436 if reasons and (not isinstance(reasons, frozenset) or not all(
437 isinstance(x, ReasonFlags) for x in reasons
438 )):
439 raise TypeError("reasons must be None or frozenset of ReasonFlags")
440
441 if reasons and (
442 ReasonFlags.unspecified in reasons or
443 ReasonFlags.remove_from_crl in reasons
444 ):
445 raise ValueError(
446 "unspecified and remove_from_crl are not valid reasons in a "
447 "DistributionPoint"
448 )
449
450 if reasons and not crl_issuer and not (full_name or relative_name):
451 raise ValueError(
452 "You must supply crl_issuer, full_name, or relative_name when "
453 "reasons is not None"
454 )
455
456 self._full_name = full_name
457 self._relative_name = relative_name
458 self._reasons = reasons
459 self._crl_issuer = crl_issuer
460
461 def __repr__(self):
462 return (
463 "<DistributionPoint(full_name={0.full_name}, relative_name={0.rela"
464 "tive_name}, reasons={0.reasons}, crl_issuer={0.crl_is"
465 "suer})>".format(self)
466 )
467
468 def __eq__(self, other):
469 if not isinstance(other, DistributionPoint):
470 return NotImplemented
471
472 return (
473 self.full_name == other.full_name and
474 self.relative_name == other.relative_name and
475 self.reasons == other.reasons and
476 self.crl_issuer == other.crl_issuer
477 )
478
479 def __ne__(self, other):
480 return not self == other
481
482 full_name = utils.read_only_property("_full_name")
483 relative_name = utils.read_only_property("_relative_name")
484 reasons = utils.read_only_property("_reasons")
485 crl_issuer = utils.read_only_property("_crl_issuer")
486
487
488 class ReasonFlags(Enum):
489 unspecified = "unspecified"
490 key_compromise = "keyCompromise"
491 ca_compromise = "cACompromise"
492 affiliation_changed = "affiliationChanged"
493 superseded = "superseded"
494 cessation_of_operation = "cessationOfOperation"
495 certificate_hold = "certificateHold"
496 privilege_withdrawn = "privilegeWithdrawn"
497 aa_compromise = "aACompromise"
498 remove_from_crl = "removeFromCRL"
499
500
501 @utils.register_interface(ExtensionType)
502 class PolicyConstraints(object):
503 oid = ExtensionOID.POLICY_CONSTRAINTS
504
505 def __init__(self, require_explicit_policy, inhibit_policy_mapping):
506 if require_explicit_policy is not None and not isinstance(
507 require_explicit_policy, six.integer_types
508 ):
509 raise TypeError(
510 "require_explicit_policy must be a non-negative integer or "
511 "None"
512 )
513
514 if inhibit_policy_mapping is not None and not isinstance(
515 inhibit_policy_mapping, six.integer_types
516 ):
517 raise TypeError(
518 "inhibit_policy_mapping must be a non-negative integer or None"
519 )
520
521 if inhibit_policy_mapping is None and require_explicit_policy is None:
522 raise ValueError(
523 "At least one of require_explicit_policy and "
524 "inhibit_policy_mapping must not be None"
525 )
526
527 self._require_explicit_policy = require_explicit_policy
528 self._inhibit_policy_mapping = inhibit_policy_mapping
529
530 def __repr__(self):
531 return (
532 u"<PolicyConstraints(require_explicit_policy={0.require_explicit"
533 u"_policy}, inhibit_policy_mapping={0.inhibit_policy_"
534 u"mapping})>".format(self)
535 )
536
537 def __eq__(self, other):
538 if not isinstance(other, PolicyConstraints):
539 return NotImplemented
540
541 return (
542 self.require_explicit_policy == other.require_explicit_policy and
543 self.inhibit_policy_mapping == other.inhibit_policy_mapping
544 )
545
546 def __ne__(self, other):
547 return not self == other
548
549 require_explicit_policy = utils.read_only_property(
550 "_require_explicit_policy"
551 )
552 inhibit_policy_mapping = utils.read_only_property(
553 "_inhibit_policy_mapping"
554 )
555
556
557 @utils.register_interface(ExtensionType)
558 class CertificatePolicies(object):
559 oid = ExtensionOID.CERTIFICATE_POLICIES
560
561 def __init__(self, policies):
562 policies = list(policies)
563 if not all(isinstance(x, PolicyInformation) for x in policies):
564 raise TypeError(
565 "Every item in the policies list must be a "
566 "PolicyInformation"
567 )
568
569 self._policies = policies
570
571 def __iter__(self):
572 return iter(self._policies)
573
574 def __len__(self):
575 return len(self._policies)
576
577 def __repr__(self):
578 return "<CertificatePolicies({0})>".format(self._policies)
579
580 def __eq__(self, other):
581 if not isinstance(other, CertificatePolicies):
582 return NotImplemented
583
584 return self._policies == other._policies
585
586 def __ne__(self, other):
587 return not self == other
588
589 def __getitem__(self, idx):
590 return self._policies[idx]
591
592
593 class PolicyInformation(object):
594 def __init__(self, policy_identifier, policy_qualifiers):
595 if not isinstance(policy_identifier, ObjectIdentifier):
596 raise TypeError("policy_identifier must be an ObjectIdentifier")
597
598 self._policy_identifier = policy_identifier
599
600 if policy_qualifiers:
601 policy_qualifiers = list(policy_qualifiers)
602 if not all(
603 isinstance(x, (six.text_type, UserNotice))
604 for x in policy_qualifiers
605 ):
606 raise TypeError(
607 "policy_qualifiers must be a list of strings and/or "
608 "UserNotice objects or None"
609 )
610
611 self._policy_qualifiers = policy_qualifiers
612
613 def __repr__(self):
614 return (
615 "<PolicyInformation(policy_identifier={0.policy_identifier}, polic"
616 "y_qualifiers={0.policy_qualifiers})>".format(self)
617 )
618
619 def __eq__(self, other):
620 if not isinstance(other, PolicyInformation):
621 return NotImplemented
622
623 return (
624 self.policy_identifier == other.policy_identifier and
625 self.policy_qualifiers == other.policy_qualifiers
626 )
627
628 def __ne__(self, other):
629 return not self == other
630
631 policy_identifier = utils.read_only_property("_policy_identifier")
632 policy_qualifiers = utils.read_only_property("_policy_qualifiers")
633
634
635 class UserNotice(object):
636 def __init__(self, notice_reference, explicit_text):
637 if notice_reference and not isinstance(
638 notice_reference, NoticeReference
639 ):
640 raise TypeError(
641 "notice_reference must be None or a NoticeReference"
642 )
643
644 self._notice_reference = notice_reference
645 self._explicit_text = explicit_text
646
647 def __repr__(self):
648 return (
649 "<UserNotice(notice_reference={0.notice_reference}, explicit_text="
650 "{0.explicit_text!r})>".format(self)
651 )
652
653 def __eq__(self, other):
654 if not isinstance(other, UserNotice):
655 return NotImplemented
656
657 return (
658 self.notice_reference == other.notice_reference and
659 self.explicit_text == other.explicit_text
660 )
661
662 def __ne__(self, other):
663 return not self == other
664
665 notice_reference = utils.read_only_property("_notice_reference")
666 explicit_text = utils.read_only_property("_explicit_text")
667
668
669 class NoticeReference(object):
670 def __init__(self, organization, notice_numbers):
671 self._organization = organization
672 notice_numbers = list(notice_numbers)
673 if not all(isinstance(x, int) for x in notice_numbers):
674 raise TypeError(
675 "notice_numbers must be a list of integers"
676 )
677
678 self._notice_numbers = notice_numbers
679
680 def __repr__(self):
681 return (
682 "<NoticeReference(organization={0.organization!r}, notice_numbers="
683 "{0.notice_numbers})>".format(self)
684 )
685
686 def __eq__(self, other):
687 if not isinstance(other, NoticeReference):
688 return NotImplemented
689
690 return (
691 self.organization == other.organization and
692 self.notice_numbers == other.notice_numbers
693 )
694
695 def __ne__(self, other):
696 return not self == other
697
698 organization = utils.read_only_property("_organization")
699 notice_numbers = utils.read_only_property("_notice_numbers")
700
701
702 @utils.register_interface(ExtensionType)
703 class ExtendedKeyUsage(object):
704 oid = ExtensionOID.EXTENDED_KEY_USAGE
705
706 def __init__(self, usages):
707 usages = list(usages)
708 if not all(isinstance(x, ObjectIdentifier) for x in usages):
709 raise TypeError(
710 "Every item in the usages list must be an ObjectIdentifier"
711 )
712
713 self._usages = usages
714
715 def __iter__(self):
716 return iter(self._usages)
717
718 def __len__(self):
719 return len(self._usages)
720
721 def __repr__(self):
722 return "<ExtendedKeyUsage({0})>".format(self._usages)
723
724 def __eq__(self, other):
725 if not isinstance(other, ExtendedKeyUsage):
726 return NotImplemented
727
728 return self._usages == other._usages
729
730 def __ne__(self, other):
731 return not self == other
732
733
734 @utils.register_interface(ExtensionType)
735 class OCSPNoCheck(object):
736 oid = ExtensionOID.OCSP_NO_CHECK
737
738
739 @utils.register_interface(ExtensionType)
740 class InhibitAnyPolicy(object):
741 oid = ExtensionOID.INHIBIT_ANY_POLICY
742
743 def __init__(self, skip_certs):
744 if not isinstance(skip_certs, six.integer_types):
745 raise TypeError("skip_certs must be an integer")
746
747 if skip_certs < 0:
748 raise ValueError("skip_certs must be a non-negative integer")
749
750 self._skip_certs = skip_certs
751
752 def __repr__(self):
753 return "<InhibitAnyPolicy(skip_certs={0.skip_certs})>".format(self)
754
755 def __eq__(self, other):
756 if not isinstance(other, InhibitAnyPolicy):
757 return NotImplemented
758
759 return self.skip_certs == other.skip_certs
760
761 def __ne__(self, other):
762 return not self == other
763
764 def __hash__(self):
765 return hash(self.skip_certs)
766
767 skip_certs = utils.read_only_property("_skip_certs")
768
769
770 @utils.register_interface(ExtensionType)
771 class KeyUsage(object):
772 oid = ExtensionOID.KEY_USAGE
773
774 def __init__(self, digital_signature, content_commitment, key_encipherment,
775 data_encipherment, key_agreement, key_cert_sign, crl_sign,
776 encipher_only, decipher_only):
777 if not key_agreement and (encipher_only or decipher_only):
778 raise ValueError(
779 "encipher_only and decipher_only can only be true when "
780 "key_agreement is true"
781 )
782
783 self._digital_signature = digital_signature
784 self._content_commitment = content_commitment
785 self._key_encipherment = key_encipherment
786 self._data_encipherment = data_encipherment
787 self._key_agreement = key_agreement
788 self._key_cert_sign = key_cert_sign
789 self._crl_sign = crl_sign
790 self._encipher_only = encipher_only
791 self._decipher_only = decipher_only
792
793 digital_signature = utils.read_only_property("_digital_signature")
794 content_commitment = utils.read_only_property("_content_commitment")
795 key_encipherment = utils.read_only_property("_key_encipherment")
796 data_encipherment = utils.read_only_property("_data_encipherment")
797 key_agreement = utils.read_only_property("_key_agreement")
798 key_cert_sign = utils.read_only_property("_key_cert_sign")
799 crl_sign = utils.read_only_property("_crl_sign")
800
801 @property
802 def encipher_only(self):
803 if not self.key_agreement:
804 raise ValueError(
805 "encipher_only is undefined unless key_agreement is true"
806 )
807 else:
808 return self._encipher_only
809
810 @property
811 def decipher_only(self):
812 if not self.key_agreement:
813 raise ValueError(
814 "decipher_only is undefined unless key_agreement is true"
815 )
816 else:
817 return self._decipher_only
818
819 def __repr__(self):
820 try:
821 encipher_only = self.encipher_only
822 decipher_only = self.decipher_only
823 except ValueError:
824 encipher_only = None
825 decipher_only = None
826
827 return ("<KeyUsage(digital_signature={0.digital_signature}, "
828 "content_commitment={0.content_commitment}, "
829 "key_encipherment={0.key_encipherment}, "
830 "data_encipherment={0.data_encipherment}, "
831 "key_agreement={0.key_agreement}, "
832 "key_cert_sign={0.key_cert_sign}, crl_sign={0.crl_sign}, "
833 "encipher_only={1}, decipher_only={2})>").format(
834 self, encipher_only, decipher_only)
835
836 def __eq__(self, other):
837 if not isinstance(other, KeyUsage):
838 return NotImplemented
839
840 return (
841 self.digital_signature == other.digital_signature and
842 self.content_commitment == other.content_commitment and
843 self.key_encipherment == other.key_encipherment and
844 self.data_encipherment == other.data_encipherment and
845 self.key_agreement == other.key_agreement and
846 self.key_cert_sign == other.key_cert_sign and
847 self.crl_sign == other.crl_sign and
848 self._encipher_only == other._encipher_only and
849 self._decipher_only == other._decipher_only
850 )
851
852 def __ne__(self, other):
853 return not self == other
854
855
856 @utils.register_interface(ExtensionType)
857 class NameConstraints(object):
858 oid = ExtensionOID.NAME_CONSTRAINTS
859
860 def __init__(self, permitted_subtrees, excluded_subtrees):
861 if permitted_subtrees is not None:
862 permitted_subtrees = list(permitted_subtrees)
863 if not all(
864 isinstance(x, GeneralName) for x in permitted_subtrees
865 ):
866 raise TypeError(
867 "permitted_subtrees must be a list of GeneralName objects "
868 "or None"
869 )
870
871 self._validate_ip_name(permitted_subtrees)
872
873 if excluded_subtrees is not None:
874 excluded_subtrees = list(excluded_subtrees)
875 if not all(
876 isinstance(x, GeneralName) for x in excluded_subtrees
877 ):
878 raise TypeError(
879 "excluded_subtrees must be a list of GeneralName objects "
880 "or None"
881 )
882
883 self._validate_ip_name(excluded_subtrees)
884
885 if permitted_subtrees is None and excluded_subtrees is None:
886 raise ValueError(
887 "At least one of permitted_subtrees and excluded_subtrees "
888 "must not be None"
889 )
890
891 self._permitted_subtrees = permitted_subtrees
892 self._excluded_subtrees = excluded_subtrees
893
894 def __eq__(self, other):
895 if not isinstance(other, NameConstraints):
896 return NotImplemented
897
898 return (
899 self.excluded_subtrees == other.excluded_subtrees and
900 self.permitted_subtrees == other.permitted_subtrees
901 )
902
903 def __ne__(self, other):
904 return not self == other
905
906 def _validate_ip_name(self, tree):
907 if any(isinstance(name, IPAddress) and not isinstance(
908 name.value, (ipaddress.IPv4Network, ipaddress.IPv6Network)
909 ) for name in tree):
910 raise TypeError(
911 "IPAddress name constraints must be an IPv4Network or"
912 " IPv6Network object"
913 )
914
915 def __repr__(self):
916 return (
917 u"<NameConstraints(permitted_subtrees={0.permitted_subtrees}, "
918 u"excluded_subtrees={0.excluded_subtrees})>".format(self)
919 )
920
921 permitted_subtrees = utils.read_only_property("_permitted_subtrees")
922 excluded_subtrees = utils.read_only_property("_excluded_subtrees")
923
924
925 class Extension(object):
926 def __init__(self, oid, critical, value):
927 if not isinstance(oid, ObjectIdentifier):
928 raise TypeError(
929 "oid argument must be an ObjectIdentifier instance."
930 )
931
932 if not isinstance(critical, bool):
933 raise TypeError("critical must be a boolean value")
934
935 self._oid = oid
936 self._critical = critical
937 self._value = value
938
939 oid = utils.read_only_property("_oid")
940 critical = utils.read_only_property("_critical")
941 value = utils.read_only_property("_value")
942
943 def __repr__(self):
944 return ("<Extension(oid={0.oid}, critical={0.critical}, "
945 "value={0.value})>").format(self)
946
947 def __eq__(self, other):
948 if not isinstance(other, Extension):
949 return NotImplemented
950
951 return (
952 self.oid == other.oid and
953 self.critical == other.critical and
954 self.value == other.value
955 )
956
957 def __ne__(self, other):
958 return not self == other
959
960
961 class GeneralNames(object):
962 def __init__(self, general_names):
963 general_names = list(general_names)
964 if not all(isinstance(x, GeneralName) for x in general_names):
965 raise TypeError(
966 "Every item in the general_names list must be an "
967 "object conforming to the GeneralName interface"
968 )
969
970 self._general_names = general_names
971
972 def __iter__(self):
973 return iter(self._general_names)
974
975 def __len__(self):
976 return len(self._general_names)
977
978 def get_values_for_type(self, type):
979 # Return the value of each GeneralName, except for OtherName instances
980 # which we return directly because it has two important properties not
981 # just one value.
982 objs = (i for i in self if isinstance(i, type))
983 if type != OtherName:
984 objs = (i.value for i in objs)
985 return list(objs)
986
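# --- Editorial sketch (illustrative, not part of extensions.py) ---
# get_values_for_type unwraps .value for every GeneralName subtype except
# OtherName, which is returned whole because it carries both a type_id and
# a value. For example (DNSName is the GeneralName subtype for dNSName):
#   names = GeneralNames([DNSName(u"example.com"), DNSName(u"www.example.com")])
#   names.get_values_for_type(DNSName)  ->  [u"example.com", u"www.example.com"]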
987 def __repr__(self):
988 return "<GeneralNames({0})>".format(self._general_names)
989
990 def __eq__(self, other):
991 if not isinstance(other, GeneralNames):
992 return NotImplemented
993
994 return self._general_names == other._general_names
995
996 def __ne__(self, other):
997 return not self == other
998
999 def __getitem__(self, idx):
1000 return self._general_names[idx]
1001
1002
1003 @utils.register_interface(ExtensionType)
1004 class SubjectAlternativeName(object):
1005 oid = ExtensionOID.SUBJECT_ALTERNATIVE_NAME
1006
1007 def __init__(self, general_names):
1008 self._general_names = GeneralNames(general_names)
1009
1010 def __iter__(self):
1011 return iter(self._general_names)
1012
1013 def __len__(self):
1014 return len(self._general_names)
1015
1016 def get_values_for_type(self, type):
1017 return self._general_names.get_values_for_type(type)
1018
1019 def __repr__(self):
1020 return "<SubjectAlternativeName({0})>".format(self._general_names)
1021
1022 def __eq__(self, other):
1023 if not isinstance(other, SubjectAlternativeName):
1024 return NotImplemented
1025
1026 return self._general_names == other._general_names
1027
1028 def __getitem__(self, idx):
1029 return self._general_names[idx]
1030
1031 def __ne__(self, other):
1032 return not self == other
1033
1034
1035 @utils.register_interface(ExtensionType)
1036 class IssuerAlternativeName(object):
1037 oid = ExtensionOID.ISSUER_ALTERNATIVE_NAME
1038
1039 def __init__(self, general_names):
1040 self._general_names = GeneralNames(general_names)
1041
1042 def __iter__(self):
1043 return iter(self._general_names)
1044
1045 def __len__(self):
1046 return len(self._general_names)
1047
1048 def get_values_for_type(self, type):
1049 return self._general_names.get_values_for_type(type)
1050
1051 def __repr__(self):
1052 return "<IssuerAlternativeName({0})>".format(self._general_names)
1053
1054 def __eq__(self, other):
1055 if not isinstance(other, IssuerAlternativeName):
1056 return NotImplemented
1057
1058 return self._general_names == other._general_names
1059
1060 def __ne__(self, other):
1061 return not self == other
1062
1063 def __getitem__(self, idx):
1064 return self._general_names[idx]
1065
1066
1067 @utils.register_interface(ExtensionType)
1068 class CertificateIssuer(object):
1069 oid = CRLEntryExtensionOID.CERTIFICATE_ISSUER
1070
1071 def __init__(self, general_names):
1072 self._general_names = GeneralNames(general_names)
1073
1074 def __iter__(self):
1075 return iter(self._general_names)
1076
1077 def __len__(self):
1078 return len(self._general_names)
1079
1080 def get_values_for_type(self, type):
1081 return self._general_names.get_values_for_type(type)
1082
1083 def __repr__(self):
1084 return "<CertificateIssuer({0})>".format(self._general_names)
1085
1086 def __eq__(self, other):
1087 if not isinstance(other, CertificateIssuer):
1088 return NotImplemented
1089
1090 return self._general_names == other._general_names
1091
1092 def __ne__(self, other):
1093 return not self == other
1094
1095 def __getitem__(self, idx):
1096 return self._general_names[idx]
1097
1098
1099 @utils.register_interface(ExtensionType)
1100 class CRLReason(object):
1101 oid = CRLEntryExtensionOID.CRL_REASON
1102
1103 def __init__(self, reason):
1104 if not isinstance(reason, ReasonFlags):
1105 raise TypeError("reason must be an element from ReasonFlags")
1106
1107 self._reason = reason
1108
1109 def __repr__(self):
1110 return "<CRLReason(reason={0})>".format(self._reason)
1111
1112 def __eq__(self, other):
1113 if not isinstance(other, CRLReason):
1114 return NotImplemented
1115
1116 return self.reason == other.reason
1117
1118 def __ne__(self, other):
1119 return not self == other
1120
1121 def __hash__(self):
1122 return hash(self.reason)
1123
1124 reason = utils.read_only_property("_reason")
1125
1126
1127 @utils.register_interface(ExtensionType)
1128 class InvalidityDate(object):
1129 oid = CRLEntryExtensionOID.INVALIDITY_DATE
1130
1131 def __init__(self, invalidity_date):
1132 if not isinstance(invalidity_date, datetime.datetime):
1133 raise TypeError("invalidity_date must be a datetime.datetime")
1134
1135 self._invalidity_date = invalidity_date
1136
1137 def __repr__(self):
1138 return "<InvalidityDate(invalidity_date={0})>".format(
1139 self._invalidity_date
1140 )
1141
1142 def __eq__(self, other):
1143 if not isinstance(other, InvalidityDate):
1144 return NotImplemented
1145
1146 return self.invalidity_date == other.invalidity_date
1147
1148 def __ne__(self, other):
1149 return not self == other
1150
1151 def __hash__(self):
1152 return hash(self.invalidity_date)
1153
1154 invalidity_date = utils.read_only_property("_invalidity_date")
1155
1156
1157 @utils.register_interface(ExtensionType)
1158 class PrecertificateSignedCertificateTimestamps(object):
1159 oid = ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
1160
1161 def __init__(self, signed_certificate_timestamps):
1162 signed_certificate_timestamps = list(signed_certificate_timestamps)
1163 if not all(
1164 isinstance(sct, SignedCertificateTimestamp)
1165 for sct in signed_certificate_timestamps
1166 ):
1167 raise TypeError(
1168 "Every item in the signed_certificate_timestamps list must be "
1169 "a SignedCertificateTimestamp"
1170 )
1171 self._signed_certificate_timestamps = signed_certificate_timestamps
1172
1173 def __iter__(self):
1174 return iter(self._signed_certificate_timestamps)
1175
1176 def __len__(self):
1177 return len(self._signed_certificate_timestamps)
1178
1179 def __getitem__(self, idx):
1180 return self._signed_certificate_timestamps[idx]
1181
1182 def __repr__(self):
1183 return (
1184 "<PrecertificateSignedCertificateTimestamps({0})>".format(
1185 list(self)
1186 )
1187 )
1188
1189
1190 @utils.register_interface(ExtensionType)
1191 class UnrecognizedExtension(object):
1192 def __init__(self, oid, value):
1193 if not isinstance(oid, ObjectIdentifier):
1194 raise TypeError("oid must be an ObjectIdentifier")
1195 self._oid = oid
1196 self._value = value
1197
1198 oid = utils.read_only_property("_oid")
1199 value = utils.read_only_property("_value")
1200
1201 def __repr__(self):
1202 return (
1203 "<UnrecognizedExtension(oid={0.oid}, value={0.value!r})>".format(
1204 self
1205 )
1206 )
1207
1208 def __eq__(self, other):
1209 if not isinstance(other, UnrecognizedExtension):
1210 return NotImplemented
1211
1212 return self.oid == other.oid and self.value == other.value
1213
1214 def __ne__(self, other):
1215 return not self == other
1216
1217 def __hash__(self):
1218 return hash((self.oid, self.value))
1219
[end of src/cryptography/x509/extensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>

Repository: pyca/cryptography
Base commit: bb31501f41464c4af069565ff59167ddcb4c84c7

Problem statement:
Add the ANY extended key usage OID to the ExtendedKeyUsageOID class
`2.5.29.37.0`

Created at: 2017-07-03T01:14:01Z

Patch:
<patch>
diff --git a/src/cryptography/x509/oid.py b/src/cryptography/x509/oid.py
--- a/src/cryptography/x509/oid.py
+++ b/src/cryptography/x509/oid.py
@@ -171,6 +171,7 @@ class ExtendedKeyUsageOID(object):
     EMAIL_PROTECTION = ObjectIdentifier("1.3.6.1.5.5.7.3.4")
     TIME_STAMPING = ObjectIdentifier("1.3.6.1.5.5.7.3.8")
     OCSP_SIGNING = ObjectIdentifier("1.3.6.1.5.5.7.3.9")
+    ANY_EXTENDED_KEY_USAGE = ObjectIdentifier("2.5.29.37.0")
 
 
 class AuthorityInformationAccessOID(object):
</patch>
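
For context, a minimal usage sketch (assuming the patch above is applied and the standard `cryptography.x509` public API): the `ExtendedKeyUsage` extension class defined earlier in `extensions.py` accepts any list of `ObjectIdentifier`s, so the new constant can be passed alongside the existing ones.

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

# anyExtendedKeyUsage (2.5.29.37.0) next to an ordinary usage; the
# ExtendedKeyUsage value object is iterable and sized, as defined above.
eku = x509.ExtendedKeyUsage([
    ExtendedKeyUsageOID.SERVER_AUTH,
    ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE,  # added by the patch above
])

assert len(eku) == 2
assert ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE in list(eku)
```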

FAIL_TO_PASS: []
PASS_TO_PASS: []

Instance: mesonbuild__meson-3209

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
custom_target's build_always shouldn't imply build_by_default
If you set `build_always` to `true` on a `custom_target`, the target is built by default rather than only when it is explicitly requested.
</issue>
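
For context, the coupling described above comes from how the backend computes its set of default targets; below is a minimal stand-alone sketch of the same condition used by `Backend.get_build_by_default_targets` in `mesonbuild/backend/backends.py` (shown later in this listing). The target class here is a hypothetical stand-in, not Meson's real implementation.

```python
class FakeTarget:
    """Hypothetical stand-in for a custom_target with the three relevant flags."""

    def __init__(self, build_by_default=False, install=False, build_always=False):
        self.build_by_default = build_by_default
        self.install = install
        self.build_always = build_always


def built_by_default(t):
    # Same test as in Backend.get_build_by_default_targets below:
    # build_always currently pulls a target into the default target set.
    return t.build_by_default or t.install or t.build_always


print(built_by_default(FakeTarget(build_always=True)))  # True, even without build_by_default
print(built_by_default(FakeTarget()))                   # False
```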
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://ci.appveyor.com/project/mesonbuild/meson)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13
14 #### Dependencies
15
16 - [Python](http://python.org) (version 3.5 or newer)
17 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
18
19 #### Installing from source
20
21 You can run Meson directly from a revision control checkout or an
22 extracted tarball. If you wish you can install it locally with the
23 standard Python distutils command `python3 setup.py install <your
24 options here>`.
25
26 Meson is also available from
27 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
28 with `pip3 install meson` (this does not require a source checkout,
29 pip will download the package automatically). The exact command to
30 type to install with pip can vary between systems, be sure to use the
31 Python 3 version of pip.
32
33 #### Running
34
35 Meson requires that you have a source directory and a build directory
36 and that these two are different. In your source root must exist a file
37 called 'meson.build'. To generate the build system run this command:
38
39 `meson <source directory> <build directory>`
40
41 Depending on how you obtained Meson the command might also be called
42 `meson.py` instead of plain `meson`. In the rest of this document we
43 are going to use the latter form.
44
45 You can omit either of the two directories, and Meson will substitute
46 the current directory and autodetect what you mean. This allows you to
47 do things like this:
48
49 `cd source_root; mkdir builddir; cd builddir; meson ..`
50
51 or
52
53 `cd source_root; mkdir builddir; meson builddir`
54
55 To compile, cd into your build directory and type `ninja`. To run unit
56 tests, type `ninja test`.
57
58 Install is the same but it can take an extra argument:
59
60 `DESTDIR=/destdir/path ninja install`
61
62 `DESTDIR` can be omitted. If you are installing to system directories,
63 you may need to run this command with sudo.
64
65
66 #### Contributing
67
68 We love code contributions. See the contributing.txt file for
69 details.
70
71
72 #### IRC
73
74 The irc channel for Meson is `#mesonbuild` over at Freenode.
75
76
77 #### Further info
78
79 More information about the Meson build system can be found at the
80 [project's home page](http://mesonbuild.com).
81
82 Meson is a registered trademark of Jussi Pakkanen
83
[end of README.md]
[start of mesonbuild/backend/backends.py]
1 # Copyright 2012-2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os, pickle, re
16 from .. import build
17 from .. import dependencies
18 from .. import mesonlib
19 from .. import mlog
20 from .. import compilers
21 import json
22 import subprocess
23 from ..mesonlib import MesonException, OrderedSet
24 from ..mesonlib import classify_unity_sources
25 from ..mesonlib import File
26 from ..compilers import CompilerArgs
27 from collections import OrderedDict
28 import shlex
29
30 class CleanTrees:
31 '''
32 Directories outputted by custom targets that have to be manually cleaned
33 because on Linux `ninja clean` only deletes empty directories.
34 '''
35 def __init__(self, build_dir, trees):
36 self.build_dir = build_dir
37 self.trees = trees
38
39 class InstallData:
40 def __init__(self, source_dir, build_dir, prefix, strip_bin,
41 install_umask, mesonintrospect):
42 self.source_dir = source_dir
43 self.build_dir = build_dir
44 self.prefix = prefix
45 self.strip_bin = strip_bin
46 self.install_umask = install_umask
47 self.targets = []
48 self.headers = []
49 self.man = []
50 self.data = []
51 self.po_package_name = ''
52 self.po = []
53 self.install_scripts = []
54 self.install_subdirs = []
55 self.mesonintrospect = mesonintrospect
56
57 class TargetInstallData:
58 def __init__(self, fname, outdir, aliases, strip, install_name_mappings, install_rpath, install_mode):
59 self.fname = fname
60 self.outdir = outdir
61 self.aliases = aliases
62 self.strip = strip
63 self.install_name_mappings = install_name_mappings
64 self.install_rpath = install_rpath
65 self.install_mode = install_mode
66
67 class ExecutableSerialisation:
68 def __init__(self, name, fname, cmd_args, env, is_cross, exe_wrapper,
69 workdir, extra_paths, capture):
70 self.name = name
71 self.fname = fname
72 self.cmd_args = cmd_args
73 self.env = env
74 self.is_cross = is_cross
75 self.exe_runner = exe_wrapper
76 self.workdir = workdir
77 self.extra_paths = extra_paths
78 self.capture = capture
79
80 class TestSerialisation:
81 def __init__(self, name, project, suite, fname, is_cross_built, exe_wrapper, is_parallel,
82 cmd_args, env, should_fail, timeout, workdir, extra_paths):
83 self.name = name
84 self.project_name = project
85 self.suite = suite
86 self.fname = fname
87 self.is_cross_built = is_cross_built
88 self.exe_runner = exe_wrapper
89 self.is_parallel = is_parallel
90 self.cmd_args = cmd_args
91 self.env = env
92 self.should_fail = should_fail
93 self.timeout = timeout
94 self.workdir = workdir
95 self.extra_paths = extra_paths
96
97 class OptionProxy:
98 def __init__(self, name, value):
99 self.name = name
100 self.value = value
101
102 class OptionOverrideProxy:
103 '''Mimic an option list but transparently override
104 selected option values.'''
105 def __init__(self, overrides, *options):
106 self.overrides = overrides
107 self.options = options
108
109 def __getitem__(self, option_name):
110 for opts in self.options:
111 if option_name in opts:
112 return self._get_override(option_name, opts[option_name])
113 raise KeyError('Option not found', option_name)
114
115 def _get_override(self, option_name, base_opt):
116 if option_name in self.overrides:
117 return OptionProxy(base_opt.name, base_opt.validate_value(self.overrides[option_name]))
118 return base_opt
119
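# --- Editorial sketch (illustrative, not part of backends.py) ---
# OptionOverrideProxy resolves per-target option overrides lazily. Assuming a
# hypothetical coredata with a 'warning_level' builtin:
#   proxy = OptionOverrideProxy({'warning_level': '3'}, coredata.builtins)
#   proxy['warning_level']  -> OptionProxy wrapping the validated override '3'
#   proxy['buildtype']      -> the unmodified option object from coredata.builtins
# A name found in none of the supplied option dicts raises KeyError, as in
# __getitem__ above.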
120 # This class contains the basic functionality that is needed by all backends.
121 # Feel free to move stuff in and out of it as you see fit.
122 class Backend:
123 def __init__(self, build):
124 self.build = build
125 self.environment = build.environment
126 self.processed_targets = {}
127 self.build_to_src = os.path.relpath(self.environment.get_source_dir(),
128 self.environment.get_build_dir())
129
130 def get_target_filename(self, t):
131 if isinstance(t, build.CustomTarget):
132 if len(t.get_outputs()) != 1:
133 mlog.warning('custom_target {!r} has more than one output! '
134 'Using the first one.'.format(t.name))
135 filename = t.get_outputs()[0]
136 else:
137 assert(isinstance(t, build.BuildTarget))
138 filename = t.get_filename()
139 return os.path.join(self.get_target_dir(t), filename)
140
141 def get_target_filename_abs(self, target):
142 return os.path.join(self.environment.get_build_dir(), self.get_target_filename(target))
143
144 def get_builtin_options_for_target(self, target):
145 return OptionOverrideProxy(target.option_overrides,
146 self.environment.coredata.builtins)
147
148 def get_base_options_for_target(self, target):
149 return OptionOverrideProxy(target.option_overrides,
150 self.environment.coredata.builtins,
151 self.environment.coredata.base_options)
152
153 def get_compiler_options_for_target(self, target):
154 return OptionOverrideProxy(target.option_overrides,
155 # no code depends on builtins for now
156 self.environment.coredata.compiler_options)
157
158 def get_option_for_target(self, option_name, target):
159 if option_name in target.option_overrides:
160 override = target.option_overrides[option_name]
161 return self.environment.coredata.validate_option_value(option_name, override)
162 return self.environment.coredata.get_builtin_option(option_name)
163
164 def get_target_filename_for_linking(self, target):
165 # On some platforms (msvc for instance), the file that is used for
166 # dynamic linking is not the same as the dynamic library itself. This
167 # file is called an import library, and we want to link against that.
168 # On all other platforms, we link to the library directly.
169 if isinstance(target, build.SharedLibrary):
170 link_lib = target.get_import_filename() or target.get_filename()
171 return os.path.join(self.get_target_dir(target), link_lib)
172 elif isinstance(target, build.StaticLibrary):
173 return os.path.join(self.get_target_dir(target), target.get_filename())
174 elif isinstance(target, build.Executable):
175 if target.import_filename:
176 return os.path.join(self.get_target_dir(target), target.get_import_filename())
177 else:
178 return None
179 raise AssertionError('BUG: Tried to link to {!r} which is not linkable'.format(target))
180
181 def get_target_dir(self, target):
182 if self.environment.coredata.get_builtin_option('layout') == 'mirror':
183 dirname = target.get_subdir()
184 else:
185 dirname = 'meson-out'
186 return dirname
187
188 def get_target_dir_relative_to(self, t, o):
189 '''Get a target dir relative to another target's directory'''
190 target_dir = os.path.join(self.environment.get_build_dir(), self.get_target_dir(t))
191 othert_dir = os.path.join(self.environment.get_build_dir(), self.get_target_dir(o))
192 return os.path.relpath(target_dir, othert_dir)
193
194 def get_target_source_dir(self, target):
195 # if target dir is empty, avoid extraneous trailing / from os.path.join()
196 target_dir = self.get_target_dir(target)
197 if target_dir:
198 return os.path.join(self.build_to_src, target_dir)
199 return self.build_to_src
200
201 def get_target_private_dir(self, target):
202 return os.path.join(self.get_target_dir(target), target.get_id())
203
204 def get_target_private_dir_abs(self, target):
205 return os.path.join(self.environment.get_build_dir(), self.get_target_private_dir(target))
206
207 def get_target_generated_dir(self, target, gensrc, src):
208 """
209 Takes a BuildTarget, a generator source (CustomTarget or GeneratedList),
210 and a generated source filename.
211 Returns the full path of the generated source relative to the build root
212 """
213 # CustomTarget generators output to the build dir of the CustomTarget
214 if isinstance(gensrc, (build.CustomTarget, build.CustomTargetIndex)):
215 return os.path.join(self.get_target_dir(gensrc), src)
216 # GeneratedList generators output to the private build directory of the
217 # target that the GeneratedList is used in
218 return os.path.join(self.get_target_private_dir(target), src)
219
220 def get_unity_source_file(self, target, suffix):
221 # There is a potential conflict here, but it is unlikely that
222 # anyone both enables unity builds and has a file called foo-unity.cpp.
223 osrc = target.name + '-unity.' + suffix
224 return mesonlib.File.from_built_file(self.get_target_private_dir(target), osrc)
225
226 def generate_unity_files(self, target, unity_src):
227 abs_files = []
228 result = []
229 compsrcs = classify_unity_sources(target.compilers.values(), unity_src)
230
231 def init_language_file(suffix):
232 unity_src = self.get_unity_source_file(target, suffix)
233 outfileabs = unity_src.absolute_path(self.environment.get_source_dir(),
234 self.environment.get_build_dir())
235 outfileabs_tmp = outfileabs + '.tmp'
236 abs_files.append(outfileabs)
237 outfileabs_tmp_dir = os.path.dirname(outfileabs_tmp)
238 if not os.path.exists(outfileabs_tmp_dir):
239 os.makedirs(outfileabs_tmp_dir)
240 result.append(unity_src)
241 return open(outfileabs_tmp, 'w')
242
243 # For each language, generate a unity source file and return the list
244 for comp, srcs in compsrcs.items():
245 with init_language_file(comp.get_default_suffix()) as ofile:
246 for src in srcs:
247 ofile.write('#include<%s>\n' % src)
248 [mesonlib.replace_if_different(x, x + '.tmp') for x in abs_files]
249 return result
250
251 def relpath(self, todir, fromdir):
252 return os.path.relpath(os.path.join('dummyprefixdir', todir),
253 os.path.join('dummyprefixdir', fromdir))
254
255 def flatten_object_list(self, target, proj_dir_to_build_root=''):
256 return self._flatten_object_list(target, target.get_objects(), proj_dir_to_build_root)
257
258 def _flatten_object_list(self, target, objects, proj_dir_to_build_root):
259 obj_list = []
260 for obj in objects:
261 if isinstance(obj, str):
262 o = os.path.join(proj_dir_to_build_root,
263 self.build_to_src, target.get_subdir(), obj)
264 obj_list.append(o)
265 elif isinstance(obj, mesonlib.File):
266 obj_list.append(obj.rel_to_builddir(self.build_to_src))
267 elif isinstance(obj, build.ExtractedObjects):
268 if obj.recursive:
269 obj_list += self._flatten_object_list(obj.target, obj.objlist, proj_dir_to_build_root)
270 obj_list += self.determine_ext_objs(obj, proj_dir_to_build_root)
271 else:
272 raise MesonException('Unknown data type in object list.')
273 return obj_list
274
275 def serialize_executable(self, exe, cmd_args, workdir, env={},
276 extra_paths=None, capture=None):
277 import hashlib
278 if extra_paths is None:
279 # The callee didn't check if we needed extra paths, so check it here
280 if mesonlib.is_windows() or mesonlib.is_cygwin():
281 extra_paths = self.determine_windows_extra_paths(exe, [])
282 else:
283 extra_paths = []
284 # Can't just use exe.name here; it will likely be run more than once
285 if isinstance(exe, (dependencies.ExternalProgram,
286 build.BuildTarget, build.CustomTarget)):
287 basename = exe.name
288 else:
289 basename = os.path.basename(exe)
290 # Take a digest of the cmd args, env, workdir, and capture. This avoids
291 # collisions and also makes the name deterministic over regenerations
292 # which avoids a rebuild by Ninja because the cmdline stays the same.
293 data = bytes(str(sorted(env.items())) + str(cmd_args) + str(workdir) + str(capture),
294 encoding='utf-8')
295 digest = hashlib.sha1(data).hexdigest()
296 scratch_file = 'meson_exe_{0}_{1}.dat'.format(basename, digest)
297 exe_data = os.path.join(self.environment.get_scratch_dir(), scratch_file)
298 with open(exe_data, 'wb') as f:
299 if isinstance(exe, dependencies.ExternalProgram):
300 exe_cmd = exe.get_command()
301 exe_needs_wrapper = False
302 elif isinstance(exe, (build.BuildTarget, build.CustomTarget)):
303 exe_cmd = [self.get_target_filename_abs(exe)]
304 exe_needs_wrapper = exe.is_cross
305 else:
306 exe_cmd = [exe]
307 exe_needs_wrapper = False
308 is_cross_built = exe_needs_wrapper and \
309 self.environment.is_cross_build() and \
310 self.environment.cross_info.need_cross_compiler() and \
311 self.environment.cross_info.need_exe_wrapper()
312 if is_cross_built:
313 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
314 else:
315 exe_wrapper = None
316 es = ExecutableSerialisation(basename, exe_cmd, cmd_args, env,
317 is_cross_built, exe_wrapper, workdir,
318 extra_paths, capture)
319 pickle.dump(es, f)
320 return exe_data
321
322 def serialize_tests(self):
323 test_data = os.path.join(self.environment.get_scratch_dir(), 'meson_test_setup.dat')
324 with open(test_data, 'wb') as datafile:
325 self.write_test_file(datafile)
326 benchmark_data = os.path.join(self.environment.get_scratch_dir(), 'meson_benchmark_setup.dat')
327 with open(benchmark_data, 'wb') as datafile:
328 self.write_benchmark_file(datafile)
329 return test_data, benchmark_data
330
331 def determine_linker_and_stdlib_args(self, target):
332 '''
333 If we're building a static library, there is only one static linker.
334 Otherwise, we query the target for the dynamic linker.
335 '''
336 if isinstance(target, build.StaticLibrary):
337 if target.is_cross:
338 return self.build.static_cross_linker, []
339 else:
340 return self.build.static_linker, []
341 l, stdlib_args = target.get_clike_dynamic_linker_and_stdlibs()
342 return l, stdlib_args
343
344 @staticmethod
345 def _libdir_is_system(libdir, compilers):
346 for cc in compilers.values():
347 if libdir in cc.get_library_dirs():
348 return True
349 return False
350
351 def rpaths_for_bundled_shared_libraries(self, target, exclude_system=True):
352 paths = []
353 for dep in target.external_deps:
354 if not isinstance(dep, (dependencies.ExternalLibrary, dependencies.PkgConfigDependency)):
355 continue
356 la = dep.link_args
357 if len(la) != 1 or not os.path.isabs(la[0]):
358 continue
359 # The only link argument is an absolute path to a library file.
360 libpath = la[0]
361 libdir = os.path.dirname(libpath)
362 if exclude_system and self._libdir_is_system(libdir, target.compilers):
363 # No point in adding system paths.
364 continue
365 # Windows doesn't support rpaths, but we use this function to
366 # emulate rpaths by setting PATH, so also accept DLLs here
367 if os.path.splitext(libpath)[1] not in ['.dll', '.lib', '.so', '.dylib']:
368 continue
369 if libdir.startswith(self.environment.get_source_dir()):
370 rel_to_src = libdir[len(self.environment.get_source_dir()) + 1:]
371 assert not os.path.isabs(rel_to_src), 'rel_to_src: {} is absolute'.format(rel_to_src)
372 paths.append(os.path.join(self.build_to_src, rel_to_src))
373 else:
374 paths.append(libdir)
375 return paths
376
377 def determine_rpath_dirs(self, target):
378 link_deps = target.get_all_link_deps()
379 result = set()
380 for ld in link_deps:
381 if ld is target:
382 continue
383 result.add(self.get_target_dir(ld))
384 result.update(self.rpaths_for_bundled_shared_libraries(target))
385 return list(result)
386
387 def object_filename_from_source(self, target, source):
388 assert isinstance(source, mesonlib.File)
389 build_dir = self.environment.get_build_dir()
390 rel_src = source.rel_to_builddir(self.build_to_src)
391
392 # foo.vala files compile down to foo.c and then foo.c.o, not foo.vala.o
393 if rel_src.endswith(('.vala', '.gs')):
394 # See description in generate_vala_compile for this logic.
395 if source.is_built:
396 if os.path.isabs(rel_src):
397 rel_src = rel_src[len(build_dir) + 1:]
398 rel_src = os.path.relpath(rel_src, self.get_target_private_dir(target))
399 else:
400 rel_src = os.path.basename(rel_src)
401 # A meson- prefixed directory is reserved; hopefully no-one creates a file name with such a weird prefix.
402 source = 'meson-generated_' + rel_src[:-5] + '.c'
403 elif source.is_built:
404 if os.path.isabs(rel_src):
405 rel_src = rel_src[len(build_dir) + 1:]
406 targetdir = self.get_target_private_dir(target)
407 # A meson- prefixed directory is reserved; hopefully no-one creates a file name with such a weird prefix.
408 source = 'meson-generated_' + os.path.relpath(rel_src, targetdir)
409 else:
410 if os.path.isabs(rel_src):
411 # Not from the source directory; hopefully this doesn't conflict with user's source files.
412 source = os.path.basename(rel_src)
413 else:
414 source = os.path.relpath(os.path.join(build_dir, rel_src),
415 os.path.join(self.environment.get_source_dir(), target.get_subdir()))
416 return source.replace('/', '_').replace('\\', '_') + '.' + self.environment.get_object_suffix()
417
418 def determine_ext_objs(self, extobj, proj_dir_to_build_root):
419 result = []
420
421 # Merge sources and generated sources
422 sources = list(extobj.srclist)
423 for gensrc in extobj.genlist:
424 for s in gensrc.get_outputs():
425 path = self.get_target_generated_dir(extobj.target, gensrc, s)
426 dirpart, fnamepart = os.path.split(path)
427 sources.append(File(True, dirpart, fnamepart))
428
429 # Filter out headers and all non-source files
430 sources = [s for s in sources if self.environment.is_source(s) and not self.environment.is_header(s)]
431
432 # extobj could contain only objects and no sources
433 if not sources:
434 return result
435
436 targetdir = self.get_target_private_dir(extobj.target)
437
438 # With unity builds, there's just one object that contains all the
439 # sources, and we only support extracting all the objects in this mode,
440 # so just return that.
441 if self.is_unity(extobj.target):
442 compsrcs = classify_unity_sources(extobj.target.compilers.values(), sources)
443 sources = []
444 for comp in compsrcs.keys():
445 osrc = self.get_unity_source_file(extobj.target,
446 comp.get_default_suffix())
447 sources.append(osrc)
448
449 for osrc in sources:
450 objname = self.object_filename_from_source(extobj.target, osrc)
451 objpath = os.path.join(proj_dir_to_build_root, targetdir, objname)
452 result.append(objpath)
453
454 return result
455
456 def get_pch_include_args(self, compiler, target):
457 args = []
458 pchpath = self.get_target_private_dir(target)
459 includeargs = compiler.get_include_args(pchpath, False)
460 p = target.get_pch(compiler.get_language())
461 if p:
462 args += compiler.get_pch_use_args(pchpath, p[0])
463 return includeargs + args
464
465 @staticmethod
466 def escape_extra_args(compiler, args):
467 # No extra escaping/quoting needed when not running on Windows
468 if not mesonlib.is_windows():
469 return args
470 extra_args = []
471 # Compiler-specific escaping is needed for -D args but not for any others
472 if compiler.get_id() == 'msvc':
473 # MSVC needs escaping when a -D argument ends in \ or \"
474 for arg in args:
475 if arg.startswith('-D') or arg.startswith('/D'):
476 # Without extra escaping for these two, the next character
477 # gets eaten
478 if arg.endswith('\\'):
479 arg += '\\'
480 elif arg.endswith('\\"'):
481 arg = arg[:-2] + '\\\\"'
482 extra_args.append(arg)
483 else:
484 # MinGW GCC needs all backslashes in defines to be doubly-escaped
485 # FIXME: Not sure about Cygwin or Clang
486 for arg in args:
487 if arg.startswith('-D') or arg.startswith('/D'):
488 arg = arg.replace('\\', '\\\\')
489 extra_args.append(arg)
490 return extra_args
491
492 def generate_basic_compiler_args(self, target, compiler, no_warn_args=False):
493 # Create an empty commands list, and start adding arguments from
494 # various sources in the order in which they must override each other
495 # starting from hard-coded defaults followed by build options and so on.
496 commands = CompilerArgs(compiler)
497
498 copt_proxy = self.get_compiler_options_for_target(target)
499 # First, the trivial ones that are impossible to override.
500 #
501 # Add -nostdinc/-nostdinc++ if needed; can't be overridden
502 commands += self.get_cross_stdlib_args(target, compiler)
503 # Add things like /NOLOGO or -pipe; usually can't be overridden
504 commands += compiler.get_always_args()
505 # Only add warning-flags by default if the buildtype enables it, and if
506 # we weren't explicitly asked to not emit warnings (for Vala, f.ex)
507 if no_warn_args:
508 commands += compiler.get_no_warn_args()
509 elif self.get_option_for_target('buildtype', target) != 'plain':
510 commands += compiler.get_warn_args(self.get_option_for_target('warning_level', target))
511 # Add -Werror if werror=true is set in the build options set on the
512 # command-line or default_options inside project(). This only sets the
513 # action to be done for warnings if/when they are emitted, so it's ok
514 # to set it after get_no_warn_args() or get_warn_args().
515 if self.get_option_for_target('werror', target):
516 commands += compiler.get_werror_args()
517 # Add compile args for c_* or cpp_* build options set on the
518 # command-line or default_options inside project().
519 commands += compiler.get_option_compile_args(copt_proxy)
520 # Add buildtype args: optimization level, debugging, etc.
521 commands += compiler.get_buildtype_args(self.get_option_for_target('buildtype', target))
522 # Add compile args added using add_project_arguments()
523 commands += self.build.get_project_args(compiler, target.subproject)
524 # Add compile args added using add_global_arguments()
525 # These override per-project arguments
526 commands += self.build.get_global_args(compiler)
527 if not target.is_cross:
528 # Compile args added from the env: CFLAGS/CXXFLAGS, etc. We want these
529 # to override all the defaults, but not the per-target compile args.
530 commands += self.environment.coredata.get_external_args(compiler.get_language())
531 # Always set -fPIC for shared libraries
532 if isinstance(target, build.SharedLibrary):
533 commands += compiler.get_pic_args()
534 # Set -fPIC for static libraries by default unless explicitly disabled
535 if isinstance(target, build.StaticLibrary) and target.pic:
536 commands += compiler.get_pic_args()
537 # Add compile args needed to find external dependencies. Link args are
538 # added while generating the link command.
539 # NOTE: We must preserve the order in which external deps are
540 # specified, so we reverse the list before iterating over it.
541 for dep in reversed(target.get_external_deps()):
542 if not dep.found():
543 continue
544
545 if compiler.language == 'vala':
546 if isinstance(dep, dependencies.PkgConfigDependency):
547 if dep.name == 'glib-2.0' and dep.version_reqs is not None:
548 for req in dep.version_reqs:
549 if req.startswith(('>=', '==')):
550 commands += ['--target-glib', req[2:]]
551 break
552 commands += ['--pkg', dep.name]
553 elif isinstance(dep, dependencies.ExternalLibrary):
554 commands += dep.get_link_args('vala')
555 else:
556 commands += dep.get_compile_args()
557 # Qt needs -fPIC for executables
558 # XXX: We should move to -fPIC for all executables
559 if isinstance(target, build.Executable):
560 commands += dep.get_exe_args(compiler)
561 # For 'automagic' deps: Boost and GTest. Also dependency('threads').
562 # pkg-config puts the thread flags itself via `Cflags:`
563 if dep.need_threads():
564 commands += compiler.thread_flags(self.environment)
565 elif dep.need_openmp():
566 commands += compiler.openmp_flags()
567 # Fortran requires extra include directives.
568 if compiler.language == 'fortran':
569 for lt in target.link_targets:
570 priv_dir = self.get_target_private_dir(lt)
571 commands += compiler.get_include_args(priv_dir, False)
572 return commands
573
574 def build_target_link_arguments(self, compiler, deps):
575 args = []
576 for d in deps:
577 if not (d.is_linkable_target()):
578 raise RuntimeError('Tried to link with a non-library target "%s".' % d.get_basename())
579 d_arg = self.get_target_filename_for_linking(d)
580 if not d_arg:
581 continue
582 if isinstance(compiler, (compilers.LLVMDCompiler, compilers.DmdDCompiler)):
583 d_arg = '-L' + d_arg
584 args.append(d_arg)
585 return args
586
587 def get_mingw_extra_paths(self, target):
588 paths = OrderedSet()
589 # The cross bindir
590 root = self.environment.cross_info.get_root()
591 if root:
592 paths.add(os.path.join(root, 'bin'))
593 # The toolchain bindir
594 sys_root = self.environment.cross_info.get_sys_root()
595 if sys_root:
596 paths.add(os.path.join(sys_root, 'bin'))
597 # Get program and library dirs from all target compilers
598 if isinstance(target, build.BuildTarget):
599 for cc in target.compilers.values():
600 paths.update(cc.get_program_dirs())
601 paths.update(cc.get_library_dirs())
602 return list(paths)
603
604 def determine_windows_extra_paths(self, target, extra_bdeps, is_cross=False):
605 '''On Windows there is no such thing as an rpath.
606 We must determine all locations of DLLs that this exe
607 links to and return them so they can be used in unit
608 tests.'''
609 result = set()
610 prospectives = set()
611 if isinstance(target, build.BuildTarget):
612 prospectives.update(target.get_transitive_link_deps())
613 # External deps
614 for deppath in self.rpaths_for_bundled_shared_libraries(target, exclude_system=False):
615 result.add(os.path.normpath(os.path.join(self.environment.get_build_dir(), deppath)))
616 for bdep in extra_bdeps:
617 prospectives.update(bdep.get_transitive_link_deps())
618 # Internal deps
619 for ld in prospectives:
620 if ld == '' or ld == '.':
621 continue
622 dirseg = os.path.join(self.environment.get_build_dir(), self.get_target_dir(ld))
623 result.add(dirseg)
624 if is_cross:
625 result.update(self.get_mingw_extra_paths(target))
626 return list(result)
627
628 def write_benchmark_file(self, datafile):
629 self.write_test_serialisation(self.build.get_benchmarks(), datafile)
630
631 def write_test_file(self, datafile):
632 self.write_test_serialisation(self.build.get_tests(), datafile)
633
634 def write_test_serialisation(self, tests, datafile):
635 arr = []
636 for t in tests:
637 exe = t.get_exe()
638 if isinstance(exe, dependencies.ExternalProgram):
639 cmd = exe.get_command()
640 else:
641 cmd = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(t.get_exe()))]
642 is_cross = self.environment.is_cross_build() and \
643 self.environment.cross_info.need_cross_compiler() and \
644 self.environment.cross_info.need_exe_wrapper()
645 if isinstance(exe, build.BuildTarget):
646 is_cross = is_cross and exe.is_cross
647 if isinstance(exe, dependencies.ExternalProgram):
648 # E.g. an external verifier or simulator program run on a generated executable.
649 # Can always be run.
650 is_cross = False
651 if is_cross:
652 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
653 else:
654 exe_wrapper = None
655 if mesonlib.for_windows(is_cross, self.environment) or \
656 mesonlib.for_cygwin(is_cross, self.environment):
657 extra_bdeps = []
658 if isinstance(exe, build.CustomTarget):
659 extra_bdeps = exe.get_transitive_build_target_deps()
660 extra_paths = self.determine_windows_extra_paths(exe, extra_bdeps, is_cross)
661 else:
662 extra_paths = []
663 cmd_args = []
664 for a in t.cmd_args:
665 if hasattr(a, 'held_object'):
666 a = a.held_object
667 if isinstance(a, build.BuildTarget):
668 extra_paths += self.determine_windows_extra_paths(a, [])
669 if isinstance(a, mesonlib.File):
670 a = os.path.join(self.environment.get_build_dir(), a.rel_to_builddir(self.build_to_src))
671 cmd_args.append(a)
672 elif isinstance(a, str):
673 cmd_args.append(a)
674 elif isinstance(a, build.Target):
675 cmd_args.append(self.get_target_filename(a))
676 else:
677 raise MesonException('Bad object in test command.')
678 ts = TestSerialisation(t.get_name(), t.project_name, t.suite, cmd, is_cross,
679 exe_wrapper, t.is_parallel, cmd_args, t.env,
680 t.should_fail, t.timeout, t.workdir, extra_paths)
681 arr.append(ts)
682 pickle.dump(arr, datafile)
683
684 def generate_depmf_install(self, d):
685 if self.build.dep_manifest_name is None:
686 return
687 ifilename = os.path.join(self.environment.get_build_dir(), 'depmf.json')
688 ofilename = os.path.join(self.environment.get_prefix(), self.build.dep_manifest_name)
689 mfobj = {'type': 'dependency manifest', 'version': '1.0', 'projects': self.build.dep_manifest}
690 with open(ifilename, 'w') as f:
691 f.write(json.dumps(mfobj))
692 # Copy file from, to, and with mode unchanged
693 d.data.append([ifilename, ofilename, None])
694
695 def get_regen_filelist(self):
696 '''List of all files whose alteration means that the build
697 definition needs to be regenerated.'''
698 deps = [os.path.join(self.build_to_src, df)
699 for df in self.interpreter.get_build_def_files()]
700 if self.environment.is_cross_build():
701 deps.append(os.path.join(self.build_to_src,
702 self.environment.coredata.cross_file))
703 deps.append('meson-private/coredata.dat')
704 if os.path.exists(os.path.join(self.environment.get_source_dir(), 'meson_options.txt')):
705 deps.append(os.path.join(self.build_to_src, 'meson_options.txt'))
706 for sp in self.build.subprojects.keys():
707 fname = os.path.join(self.environment.get_source_dir(), sp, 'meson_options.txt')
708 if os.path.isfile(fname):
709 deps.append(os.path.join(self.build_to_src, sp, 'meson_options.txt'))
710 return deps
711
712 def exe_object_to_cmd_array(self, exe):
713 if self.environment.is_cross_build() and \
714 self.environment.cross_info.need_exe_wrapper() and \
715 isinstance(exe, build.BuildTarget) and exe.is_cross:
716 if 'exe_wrapper' not in self.environment.cross_info.config['binaries']:
717 s = 'Can not use target %s as a generator because it is cross-built\n'
718 s += 'and no exe wrapper is defined. You might want to set it to native instead.'
719 s = s % exe.name
720 raise MesonException(s)
721 if isinstance(exe, build.BuildTarget):
722 exe_arr = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(exe))]
723 else:
724 exe_arr = exe.get_command()
725 return exe_arr
726
727 def replace_extra_args(self, args, genlist):
728 final_args = []
729 for a in args:
730 if a == '@EXTRA_ARGS@':
731 final_args += genlist.get_extra_args()
732 else:
733 final_args.append(a)
734 return final_args
735
736 def replace_outputs(self, args, private_dir, output_list):
737 newargs = []
738 regex = re.compile('@OUTPUT(\d+)@')
739 for arg in args:
740 m = regex.search(arg)
741 while m is not None:
742 index = int(m.group(1))
743 src = '@OUTPUT%d@' % index
744 arg = arg.replace(src, os.path.join(private_dir, output_list[index]))
745 m = regex.search(arg)
746 newargs.append(arg)
747 return newargs
748
749 def get_build_by_default_targets(self):
750 result = OrderedDict()
751 # Get all build and custom targets that must be built by default
752 for name, t in self.build.get_targets().items():
753 if t.build_by_default or t.install or t.build_always:
754 result[name] = t
755 # Get all targets used as test executables and arguments. These must
756 # also be built by default. XXX: Sometime in the future these should be
757 # built only before running tests.
758 for t in self.build.get_tests():
759 exe = t.exe
760 if hasattr(exe, 'held_object'):
761 exe = exe.held_object
762 if isinstance(exe, (build.CustomTarget, build.BuildTarget)):
763 result[exe.get_id()] = exe
764 for arg in t.cmd_args:
765 if hasattr(arg, 'held_object'):
766 arg = arg.held_object
767 if not isinstance(arg, (build.CustomTarget, build.BuildTarget)):
768 continue
769 result[arg.get_id()] = arg
770 for dep in t.depends:
771 assert isinstance(dep, (build.CustomTarget, build.BuildTarget))
772 result[dep.get_id()] = dep
773 return result
774
775 def get_custom_target_provided_libraries(self, target):
776 libs = []
777 for t in target.get_generated_sources():
778 if not isinstance(t, build.CustomTarget):
779 continue
780 for f in t.get_outputs():
781 if self.environment.is_library(f):
782 libs.append(os.path.join(self.get_target_dir(t), f))
783 return libs
784
785 def is_unity(self, target):
786 optval = self.get_option_for_target('unity', target)
787 if optval == 'on' or (optval == 'subprojects' and target.subproject != ''):
788 return True
789 return False
790
791 def get_custom_target_sources(self, target):
792 '''
793 Custom target sources can be of various object types; strings, File,
794 BuildTarget, even other CustomTargets.
795 Returns the path to them relative to the build root directory.
796 '''
797 srcs = []
798 for i in target.get_sources():
799 if hasattr(i, 'held_object'):
800 i = i.held_object
801 if isinstance(i, str):
802 fname = [os.path.join(self.build_to_src, target.subdir, i)]
803 elif isinstance(i, build.BuildTarget):
804 fname = [self.get_target_filename(i)]
805 elif isinstance(i, (build.CustomTarget, build.CustomTargetIndex)):
806 fname = [os.path.join(self.get_target_dir(i), p) for p in i.get_outputs()]
807 elif isinstance(i, build.GeneratedList):
808 fname = [os.path.join(self.get_target_private_dir(target), p) for p in i.get_outputs()]
809 else:
810 fname = [i.rel_to_builddir(self.build_to_src)]
811 if target.absolute_paths:
812 fname = [os.path.join(self.environment.get_build_dir(), f) for f in fname]
813 srcs += fname
814 return srcs
815
816 def get_custom_target_depend_files(self, target, absolute_paths=False):
817 deps = []
818 for i in target.depend_files:
819 if isinstance(i, mesonlib.File):
820 if absolute_paths:
821 deps.append(i.absolute_path(self.environment.get_source_dir(),
822 self.environment.get_build_dir()))
823 else:
824 deps.append(i.rel_to_builddir(self.build_to_src))
825 else:
826 if absolute_paths:
827 deps.append(os.path.join(self.environment.get_source_dir(), target.subdir, i))
828 else:
829 deps.append(os.path.join(self.build_to_src, target.subdir, i))
830 return deps
831
832 def eval_custom_target_command(self, target, absolute_outputs=False):
833 # We want the outputs to be absolute only when using the VS backend
834 # XXX: Maybe allow the vs backend to use relative paths too?
835 source_root = self.build_to_src
836 build_root = '.'
837 outdir = self.get_target_dir(target)
838 if absolute_outputs:
839 source_root = self.environment.get_source_dir()
840 build_root = self.environment.get_source_dir()
841 outdir = os.path.join(self.environment.get_build_dir(), outdir)
842 outputs = []
843 for i in target.get_outputs():
844 outputs.append(os.path.join(outdir, i))
845 inputs = self.get_custom_target_sources(target)
846 # Evaluate the command list
847 cmd = []
848 for i in target.command:
849 if isinstance(i, build.Executable):
850 cmd += self.exe_object_to_cmd_array(i)
851 continue
852 elif isinstance(i, build.CustomTarget):
853 # GIR scanner will attempt to execute this binary but
854 # it assumes that it is in path, so always give it a full path.
855 tmp = i.get_outputs()[0]
856 i = os.path.join(self.get_target_dir(i), tmp)
857 elif isinstance(i, mesonlib.File):
858 i = i.rel_to_builddir(self.build_to_src)
859 if target.absolute_paths:
860 i = os.path.join(self.environment.get_build_dir(), i)
861 # FIXME: str types are blindly added ignoring 'target.absolute_paths'
862 # because we can't know if they refer to a file or just a string
863 elif not isinstance(i, str):
864 err_msg = 'Argument {0} is of unknown type {1}'
865 raise RuntimeError(err_msg.format(str(i), str(type(i))))
866 elif '@SOURCE_ROOT@' in i:
867 i = i.replace('@SOURCE_ROOT@', source_root)
868 elif '@BUILD_ROOT@' in i:
869 i = i.replace('@BUILD_ROOT@', build_root)
870 elif '@DEPFILE@' in i:
871 if target.depfile is None:
872 msg = 'Custom target {!r} has @DEPFILE@ but no depfile ' \
873 'keyword argument.'.format(target.name)
874 raise MesonException(msg)
875 dfilename = os.path.join(outdir, target.depfile)
876 i = i.replace('@DEPFILE@', dfilename)
877 elif '@PRIVATE_OUTDIR_' in i:
878 match = re.search('@PRIVATE_OUTDIR_(ABS_)?([^/\s*]*)@', i)
879 if not match:
880 msg = 'Custom target {!r} has an invalid argument {!r}' \
881 ''.format(target.name, i)
882 raise MesonException(msg)
883 source = match.group(0)
884 if match.group(1) is None and not target.absolute_paths:
885 lead_dir = ''
886 else:
887 lead_dir = self.environment.get_build_dir()
888 i = i.replace(source, os.path.join(lead_dir, outdir))
889 cmd.append(i)
890 # Substitute the rest of the template strings
891 values = mesonlib.get_filenames_templates_dict(inputs, outputs)
892 cmd = mesonlib.substitute_values(cmd, values)
893 # This should not be necessary but removing it breaks
894 # building GStreamer on Windows. The underlying issue
895 # is problems with quoting backslashes on Windows
896 # which is the seventh circle of hell. The downside is
897 # that this breaks custom targets whose command lines
898 # have backslashes. If you try to fix this be sure to
899 # check that it does not break GST.
900 #
901 # The bug causes file paths such as c:\foo to get escaped
902 # into c:\\foo.
903 #
904 # Unfortunately we have not been able to come up with an
905 # isolated test case for this so unless you manage to come up
906 # with one, the only way is to test the building with Gst's
907 # setup. Note this in your MR or ping us and we will get it
908 # fixed.
909 #
910 # https://github.com/mesonbuild/meson/pull/737
911 cmd = [i.replace('\\', '/') for i in cmd]
912 return inputs, outputs, cmd
913
914 def run_postconf_scripts(self):
915 env = {'MESON_SOURCE_ROOT': self.environment.get_source_dir(),
916 'MESON_BUILD_ROOT': self.environment.get_build_dir(),
917 'MESONINTROSPECT': ' '.join([shlex.quote(x) for x in self.environment.get_build_command() + ['introspect']]),
918 }
919 child_env = os.environ.copy()
920 child_env.update(env)
921
922 for s in self.build.postconf_scripts:
923 cmd = s['exe'] + s['args']
924 subprocess.check_call(cmd, env=child_env)
925
[end of mesonbuild/backend/backends.py]
[start of mesonbuild/wrap/__init__.py]
1 from enum import Enum
2
3 # Used for the --wrap-mode command-line argument
4 #
5 # Special wrap modes:
6 # nofallback: Don't download wraps for dependency() fallbacks
7 # nodownload: Don't download wraps for all subproject() calls
8 #
9 # subprojects are used for two purposes:
10 # 1. To download and build dependencies by using .wrap
11 # files if they are not provided by the system. This is
12 # usually expressed via dependency(..., fallback: ...).
13 # 2. To download and build 'copylibs' which are meant to be
14 # used by copying into your project. This is always done
15 # with an explicit subproject() call.
16 #
17 # --wrap-mode=nofallback will never do (1)
18 # --wrap-mode=nodownload will do neither (1) nor (2)
19 #
20 # If you are building from a release tarball, you should be
21 # able to safely use 'nodownload' since upstream is
22 # expected to ship all required sources with the tarball.
23 #
24 # If you are building from a git repository, you will want
25 # to use 'nofallback' so that any 'copylib' wraps will be
26 # downloaded as subprojects.
27 #
28 # --wrap-mode=forcefallback will ignore external dependencies,
29 # even if they match the version requirements, and automatically
30 # use the fallback if one was provided. This is useful for example
31 # to make sure a project builds when using the fallbacks.
32 #
33 # Note that these options do not affect subprojects that
34 # are git submodules since those are only usable in git
35 # repositories, and you almost always want to download them.
36 WrapMode = Enum('WrapMode', 'default nofallback nodownload forcefallback')
37
[end of mesonbuild/wrap/__init__.py]
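As a rough illustration of how a `--wrap-mode` value maps onto this enum, here is a small sketch; `parse_wrap_mode` is a hypothetical helper written for this document and is not Meson's actual option handling:

```python
# Hypothetical helper, not part of Meson: look up a --wrap-mode string by enum name.
from enum import Enum

# Mirrors the definition above so the snippet runs on its own.
WrapMode = Enum('WrapMode', 'default nofallback nodownload forcefallback')

def parse_wrap_mode(value):
    try:
        return WrapMode[value]
    except KeyError:
        raise ValueError('invalid --wrap-mode value: %s' % value)

mode = parse_wrap_mode('nofallback')
print(mode is WrapMode.nofallback)  # True: dependency() fallbacks are skipped,
                                    # explicit subproject() calls still download
```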
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: mesonbuild/meson
base_commit: d7466066e468f9297bcad480003af882920c1159

problem_statement:
custom_target's build_always shouldn't imply build_by_default
If you set `build_always` to `true` on a `custom_target`, it is built by default rather than only when explicitly requested.

created_at: 2018-03-09T17:58:47Z
<patch>
diff --git a/mesonbuild/backend/backends.py b/mesonbuild/backend/backends.py
--- a/mesonbuild/backend/backends.py
+++ b/mesonbuild/backend/backends.py
@@ -750,7 +750,7 @@ def get_build_by_default_targets(self):
result = OrderedDict()
# Get all build and custom targets that must be built by default
for name, t in self.build.get_targets().items():
- if t.build_by_default or t.install or t.build_always:
+ if t.build_by_default or t.install:
result[name] = t
# Get all targets used as test executables and arguments. These must
# also be built by default. XXX: Sometime in the future these should be
diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py
--- a/mesonbuild/backend/ninjabackend.py
+++ b/mesonbuild/backend/ninjabackend.py
@@ -504,7 +504,7 @@ def generate_custom_target(self, target, outfile):
deps = self.unwrap_dep_list(target)
deps += self.get_custom_target_depend_files(target)
desc = 'Generating {0} with a {1} command.'
- if target.build_always:
+ if target.build_always_stale:
deps.append('PHONY')
if target.depfile is None:
rulename = 'CUSTOM_COMMAND'
diff --git a/mesonbuild/build.py b/mesonbuild/build.py
--- a/mesonbuild/build.py
+++ b/mesonbuild/build.py
@@ -309,7 +309,7 @@ def __init__(self, name, subdir, subproject, build_by_default):
self.subproject = subproject
self.build_by_default = build_by_default
self.install = False
- self.build_always = False
+ self.build_always_stale = False
self.option_overrides = {}
def get_basename(self):
@@ -1636,6 +1636,7 @@ class CustomTarget(Target):
'install_dir',
'install_mode',
'build_always',
+ 'build_always_stale',
'depends',
'depend_files',
'depfile',
@@ -1788,9 +1789,16 @@ def process_kwargs(self, kwargs):
self.install = False
self.install_dir = [None]
self.install_mode = None
- self.build_always = kwargs.get('build_always', False)
- if not isinstance(self.build_always, bool):
- raise InvalidArguments('Argument build_always must be a boolean.')
+ if 'build_always' in kwargs and 'build_always_stale' in kwargs:
+ raise InvalidArguments('build_always and build_always_stale are mutually exclusive. Combine build_by_default and build_always_stale.')
+ elif 'build_always' in kwargs:
+ mlog.warning('build_always is deprecated. Combine build_by_default and build_always_stale instead.')
+ self.build_by_default = kwargs['build_always']
+ self.build_always_stale = kwargs['build_always']
+ elif 'build_always_stale' in kwargs:
+ self.build_always_stale = kwargs['build_always_stale']
+ if not isinstance(self.build_always_stale, bool):
+ raise InvalidArguments('Argument build_always_stale must be a boolean.')
extra_deps, depend_files = extract_as_list(kwargs, 'depends', 'depend_files', pop = False)
for ed in extra_deps:
while hasattr(ed, 'held_object'):
diff --git a/mesonbuild/interpreter.py b/mesonbuild/interpreter.py
--- a/mesonbuild/interpreter.py
+++ b/mesonbuild/interpreter.py
@@ -1783,7 +1783,7 @@ def get_cross_property_method(self, args, kwargs):
'benchmark': {'args', 'env', 'should_fail', 'timeout', 'workdir', 'suite'},
'build_target': known_build_target_kwargs,
'configure_file': {'input', 'output', 'configuration', 'command', 'copy', 'install_dir', 'install_mode', 'capture', 'install', 'format', 'output_format', 'encoding'},
- 'custom_target': {'input', 'output', 'command', 'install', 'install_dir', 'install_mode', 'build_always', 'capture', 'depends', 'depend_files', 'depfile', 'build_by_default'},
+ 'custom_target': {'input', 'output', 'command', 'install', 'install_dir', 'install_mode', 'build_always', 'capture', 'depends', 'depend_files', 'depfile', 'build_by_default', 'build_always_stale'},
'dependency': {'default_options', 'fallback', 'language', 'main', 'method', 'modules', 'optional_modules', 'native', 'required', 'static', 'version', 'private_headers'},
'declare_dependency': {'include_directories', 'link_with', 'sources', 'dependencies', 'compile_args', 'link_args', 'link_whole', 'version'},
'executable': build.known_exe_kwargs,
@@ -3012,7 +3012,8 @@ def func_vcs_tag(self, node, args, kwargs):
source_dir,
replace_string,
regex_selector] + vcs_cmd
- kwargs.setdefault('build_always', True)
+ kwargs.setdefault('build_by_default', True)
+ kwargs.setdefault('build_always_stale', True)
return self.func_custom_target(node, [kwargs['output']], kwargs)
@FeatureNew('subdir_done', '0.46.0')
@@ -3025,7 +3026,7 @@ def func_subdir_done(self, node, args, kwargs):
raise SubdirDoneRequest()
@stringArgs
- @FeatureNewKwargs('custom_target', '0.47.0', ['install_mode'])
+ @FeatureNewKwargs('custom_target', '0.47.0', ['install_mode', 'build_always_stale'])
@FeatureNewKwargs('custom_target', '0.40.0', ['build_by_default'])
@permittedKwargs(permitted_kwargs['custom_target'])
def func_custom_target(self, node, args, kwargs):
</patch>
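In plain Python terms, the keyword-argument handling this patch introduces behaves roughly as sketched below; `resolve_build_flags` is a hypothetical helper written only to summarize the semantics and is not part of Meson's code:

```python
# Hypothetical summary of the patched semantics: 'build_always' survives only as a
# deprecated alias that turns on both build_by_default and build_always_stale.
def resolve_build_flags(kwargs):
    if 'build_always' in kwargs and 'build_always_stale' in kwargs:
        raise ValueError('build_always and build_always_stale are mutually exclusive')
    if 'build_always' in kwargs:  # deprecated spelling
        return kwargs['build_always'], kwargs['build_always']
    return kwargs.get('build_by_default', False), kwargs.get('build_always_stale', False)

# A stale-only target is re-run whenever it is requested, but it is no longer pulled
# into the default target list (installation still forces that, per the backend change).
print(resolve_build_flags({'build_always_stale': True}))  # (False, True)
print(resolve_build_flags({'build_always': True}))        # (True, True)
```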
FAIL_TO_PASS: []
PASS_TO_PASS: []

instance_id: ipython__ipython-415

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
crash of ipython when alias is used with %s and echo
This bug is present in alias of magic commands
In [1]: alias parts echo first %s second %s
In [2]: parts A
After executing the first line of command when we give command as parts A
instead of printing
"Incorrect number of arguments: 2 expected." ipython automatically crashes.
</issue>
<code>
[start of README.rst]
1 ==============
2 IPython README
3 ==============
4
5 Overview
6 ========
7
8 Welcome to IPython. Our full documentation can be found in the ``docs/dist``
9 subdirectory in ``.html`` and ``.pdf`` formats, also available online at our
10 `docs repo <http://ipython.github.com/ipython-doc>`_. The ``docs/source`` directory
11 contains the plaintext version of these manuals.
12
13
14 Dependencies and supported Python versions
15 ==========================================
16
17 For full details, see the installation section of the manual. The basic parts
18 of IPython only need the Python standard library, but much of its more advanced
19 functionality requires extra packages.
20
21 Officially, IPython requires Python version 2.6 or 2.7. An experimental port
22 of IPython to Python 3.x is being developed at
23 http://github.com/ipython/ipython-py3k.
24
25
26 Instant running
27 ===============
28
29 You can run IPython from this directory without even installing it system-wide
30 by typing at the terminal::
31
32 $ python ipython.py
33
34
[end of README.rst]
[start of IPython/core/alias.py]
1 #!/usr/bin/env python
2 # encoding: utf-8
3 """
4 System command aliases.
5
6 Authors:
7
8 * Fernando Perez
9 * Brian Granger
10 """
11
12 #-----------------------------------------------------------------------------
13 # Copyright (C) 2008-2010 The IPython Development Team
14 #
15 # Distributed under the terms of the BSD License.
16 #
17 # The full license is in the file COPYING.txt, distributed with this software.
18 #-----------------------------------------------------------------------------
19
20 #-----------------------------------------------------------------------------
21 # Imports
22 #-----------------------------------------------------------------------------
23
24 import __builtin__
25 import keyword
26 import os
27 import re
28 import sys
29
30 from IPython.config.configurable import Configurable
31 from IPython.core.splitinput import split_user_input
32
33 from IPython.utils.traitlets import List, Instance
34 from IPython.utils.autoattr import auto_attr
35 from IPython.utils.warn import warn, error
36
37 #-----------------------------------------------------------------------------
38 # Utilities
39 #-----------------------------------------------------------------------------
40
41 # This is used as the pattern for calls to split_user_input.
42 shell_line_split = re.compile(r'^(\s*)(\S*\s*)(.*$)')
43
44 def default_aliases():
45 """Return list of shell aliases to auto-define.
46 """
47 # Note: the aliases defined here should be safe to use on a kernel
48 # regardless of what frontend it is attached to. Frontends that use a
49 # kernel in-process can define additional aliases that will only work in
50 # their case. For example, things like 'less' or 'clear' that manipulate
51 # the terminal should NOT be declared here, as they will only work if the
52 # kernel is running inside a true terminal, and not over the network.
53
54 if os.name == 'posix':
55 default_aliases = [('mkdir', 'mkdir'), ('rmdir', 'rmdir'),
56 ('mv', 'mv -i'), ('rm', 'rm -i'), ('cp', 'cp -i'),
57 ('cat', 'cat'),
58 ]
59 # Useful set of ls aliases. The GNU and BSD options are a little
60 # different, so we make aliases that provide as similar as possible
61 # behavior in ipython, by passing the right flags for each platform
62 if sys.platform.startswith('linux'):
63 ls_aliases = [('ls', 'ls -F --color'),
64 # long ls
65 ('ll', 'ls -F -o --color'),
66 # ls normal files only
67 ('lf', 'ls -F -o --color %l | grep ^-'),
68 # ls symbolic links
69 ('lk', 'ls -F -o --color %l | grep ^l'),
70 # directories or links to directories,
71 ('ldir', 'ls -F -o --color %l | grep /$'),
72 # things which are executable
73 ('lx', 'ls -F -o --color %l | grep ^-..x'),
74 ]
75 else:
76 # BSD, OSX, etc.
77 ls_aliases = [('ls', 'ls -F'),
78 # long ls
79 ('ll', 'ls -F -l'),
80 # ls normal files only
81 ('lf', 'ls -F -l %l | grep ^-'),
82 # ls symbolic links
83 ('lk', 'ls -F -l %l | grep ^l'),
84 # directories or links to directories,
85 ('ldir', 'ls -F -l %l | grep /$'),
86 # things which are executable
87 ('lx', 'ls -F -l %l | grep ^-..x'),
88 ]
89 default_aliases = default_aliases + ls_aliases
90 elif os.name in ['nt', 'dos']:
91 default_aliases = [('ls', 'dir /on'),
92 ('ddir', 'dir /ad /on'), ('ldir', 'dir /ad /on'),
93 ('mkdir', 'mkdir'), ('rmdir', 'rmdir'),
94 ('echo', 'echo'), ('ren', 'ren'), ('copy', 'copy'),
95 ]
96 else:
97 default_aliases = []
98
99 return default_aliases
100
101
102 class AliasError(Exception):
103 pass
104
105
106 class InvalidAliasError(AliasError):
107 pass
108
109 #-----------------------------------------------------------------------------
110 # Main AliasManager class
111 #-----------------------------------------------------------------------------
112
113 class AliasManager(Configurable):
114
115 default_aliases = List(default_aliases(), config=True)
116 user_aliases = List(default_value=[], config=True)
117 shell = Instance('IPython.core.interactiveshell.InteractiveShellABC')
118
119 def __init__(self, shell=None, config=None):
120 super(AliasManager, self).__init__(shell=shell, config=config)
121 self.alias_table = {}
122 self.exclude_aliases()
123 self.init_aliases()
124
125 def __contains__(self, name):
126 return name in self.alias_table
127
128 @property
129 def aliases(self):
130 return [(item[0], item[1][1]) for item in self.alias_table.iteritems()]
131
132 def exclude_aliases(self):
133 # set of things NOT to alias (keywords, builtins and some magics)
134 no_alias = set(['cd','popd','pushd','dhist','alias','unalias'])
135 no_alias.update(set(keyword.kwlist))
136 no_alias.update(set(__builtin__.__dict__.keys()))
137 self.no_alias = no_alias
138
139 def init_aliases(self):
140 # Load default aliases
141 for name, cmd in self.default_aliases:
142 self.soft_define_alias(name, cmd)
143
144 # Load user aliases
145 for name, cmd in self.user_aliases:
146 self.soft_define_alias(name, cmd)
147
148 def clear_aliases(self):
149 self.alias_table.clear()
150
151 def soft_define_alias(self, name, cmd):
152 """Define an alias, but don't raise on an AliasError."""
153 try:
154 self.define_alias(name, cmd)
155 except AliasError, e:
156 error("Invalid alias: %s" % e)
157
158 def define_alias(self, name, cmd):
159 """Define a new alias after validating it.
160
161 This will raise an :exc:`AliasError` if there are validation
162 problems.
163 """
164 nargs = self.validate_alias(name, cmd)
165 self.alias_table[name] = (nargs, cmd)
166
167 def undefine_alias(self, name):
168 if self.alias_table.has_key(name):
169 del self.alias_table[name]
170
171 def validate_alias(self, name, cmd):
172 """Validate an alias and return its number of arguments."""
173 if name in self.no_alias:
174 raise InvalidAliasError("The name %s can't be aliased "
175 "because it is a keyword or builtin." % name)
176 if not (isinstance(cmd, basestring)):
177 raise InvalidAliasError("An alias command must be a string, "
178 "got: %r" % name)
179 nargs = cmd.count('%s')
180 if nargs>0 and cmd.find('%l')>=0:
181 raise InvalidAliasError('The %s and %l specifiers are mutually '
182 'exclusive in alias definitions.')
183 return nargs
184
185 def call_alias(self, alias, rest=''):
186 """Call an alias given its name and the rest of the line."""
187 cmd = self.transform_alias(alias, rest)
188 try:
189 self.shell.system(cmd)
190 except:
191 self.shell.showtraceback()
192
193 def transform_alias(self, alias,rest=''):
194 """Transform alias to system command string."""
195 nargs, cmd = self.alias_table[alias]
196
197 if ' ' in cmd and os.path.isfile(cmd):
198 cmd = '"%s"' % cmd
199
200 # Expand the %l special to be the user's input line
201 if cmd.find('%l') >= 0:
202 cmd = cmd.replace('%l', rest)
203 rest = ''
204 if nargs==0:
205 # Simple, argument-less aliases
206 cmd = '%s %s' % (cmd, rest)
207 else:
208 # Handle aliases with positional arguments
209 args = rest.split(None, nargs)
210 if len(args) < nargs:
211 raise AliasError('Alias <%s> requires %s arguments, %s given.' %
212 (alias, nargs, len(args)))
213 cmd = '%s %s' % (cmd % tuple(args[:nargs]),' '.join(args[nargs:]))
214 return cmd
215
216 def expand_alias(self, line):
217 """ Expand an alias in the command line
218
219 Returns the provided command line, possibly with the first word
220 (command) translated according to alias expansion rules.
221
222 [ipython]|16> _ip.expand_aliases("np myfile.txt")
223 <16> 'q:/opt/np/notepad++.exe myfile.txt'
224 """
225
226 pre,fn,rest = split_user_input(line)
227 res = pre + self.expand_aliases(fn, rest)
228 return res
229
230 def expand_aliases(self, fn, rest):
231 """Expand multiple levels of aliases:
232
233 if:
234
235 alias foo bar /tmp
236 alias baz foo
237
238 then:
239
240 baz huhhahhei -> bar /tmp huhhahhei
241 """
242 line = fn + " " + rest
243
244 done = set()
245 while 1:
246 pre,fn,rest = split_user_input(line, shell_line_split)
247 if fn in self.alias_table:
248 if fn in done:
249 warn("Cyclic alias definition, repeated '%s'" % fn)
250 return ""
251 done.add(fn)
252
253 l2 = self.transform_alias(fn, rest)
254 if l2 == line:
255 break
256 # ls -> ls -F should not recurse forever
257 if l2.split(None,1)[0] == line.split(None,1)[0]:
258 line = l2
259 break
260 line=l2
261 else:
262 break
263
264 return line
265
[end of IPython/core/alias.py]
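To make the failure mode from the issue concrete, here is a short example meant to be run inside an IPython session of this vintage (the exact error text may differ between versions):

```python
# Illustrative only; assumes a live IPython session so get_ipython() exists.
ip = get_ipython()
am = ip.alias_manager

am.define_alias('parts', 'echo first %s second %s')  # validate_alias() counts nargs == 2
print(am.transform_alias('parts', 'A B tail'))
# -> 'echo first A second B tail'

am.transform_alias('parts', 'A')
# raises AliasError: the alias needs 2 arguments but only 1 was given; without extra
# handling in the prefilter (see the patch at the end of this entry), that exception
# propagates out of alias expansion and crashes the session.
```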
[start of IPython/core/usage.py]
1 # -*- coding: utf-8 -*-
2 """Usage information for the main IPython applications.
3 """
4 #-----------------------------------------------------------------------------
5 # Copyright (C) 2008-2010 The IPython Development Team
6 # Copyright (C) 2001-2007 Fernando Perez. <[email protected]>
7 #
8 # Distributed under the terms of the BSD License. The full license is in
9 # the file COPYING, distributed as part of this software.
10 #-----------------------------------------------------------------------------
11
12 import sys
13 from IPython.core import release
14
15 cl_usage = """\
16 ipython [options] [files]
17
18 IPython: an enhanced interactive Python shell.
19
20 A Python shell with automatic history (input and output), dynamic object
21 introspection, easier configuration, command completion, access to the
22 system shell and more. IPython can also be embedded in running programs.
23
24 If invoked with no options, it executes all the files listed in sequence
25 and exits, use -i to enter interactive mode after running the files. Files
26 ending in .py will be treated as normal Python, but files ending in .ipy
27 can contain special IPython syntax (magic commands, shell expansions, etc.)
28
29 Please note that some of the configuration options are not available at the
30 command line, simply because they are not practical here. Look into your
31 ipython_config.py configuration file for details on those.
32
33 This file is typically installed in the IPYTHON_DIR directory. For Linux
34 users, this will be $HOME/.config/ipython, and for other users it will be
35 $HOME/.ipython. For Windows users, $HOME resolves to C:\\Documents and
36 Settings\\YourUserName in most instances.
37
38 In IPython's documentation, we will refer to this directory as IPYTHON_DIR,
39 you can change its default location by setting any path you want in this
40 environment variable.
41
42 For more information, see the manual available in HTML and PDF in your
43 installation, or online at http://ipython.scipy.org.
44 """
45
46 interactive_usage = """
47 IPython -- An enhanced Interactive Python
48 =========================================
49
50 IPython offers a combination of convenient shell features, special commands
51 and a history mechanism for both input (command history) and output (results
52 caching, similar to Mathematica). It is intended to be a fully compatible
53 replacement for the standard Python interpreter, while offering vastly
54 improved functionality and flexibility.
55
56 At your system command line, type 'ipython -help' to see the command line
57 options available. This document only describes interactive features.
58
59 Warning: IPython relies on the existence of a global variable called __IP which
60 controls the shell itself. If you redefine __IP to anything, bizarre behavior
61 will quickly occur.
62
63 MAIN FEATURES
64
65 * Access to the standard Python help. As of Python 2.1, a help system is
66 available with access to object docstrings and the Python manuals. Simply
67 type 'help' (no quotes) to access it.
68
69 * Magic commands: type %magic for information on the magic subsystem.
70
71 * System command aliases, via the %alias command or the ipythonrc config file.
72
73 * Dynamic object information:
74
75 Typing ?word or word? prints detailed information about an object. If
76 certain strings in the object are too long (docstrings, code, etc.) they get
77 snipped in the center for brevity.
78
79 Typing ??word or word?? gives access to the full information without
80 snipping long strings. Long strings are sent to the screen through the less
81 pager if longer than the screen, printed otherwise.
82
83 The ?/?? system gives access to the full source code for any object (if
84 available), shows function prototypes and other useful information.
85
86 If you just want to see an object's docstring, type '%pdoc object' (without
87 quotes, and without % if you have automagic on).
88
89 Both %pdoc and ?/?? give you access to documentation even on things which are
90 not explicitly defined. Try for example typing {}.get? or after import os,
91 type os.path.abspath??. The magic functions %pdef, %source and %file operate
92 similarly.
93
94 * Completion in the local namespace, by typing TAB at the prompt.
95
96 At any time, hitting tab will complete any available python commands or
97 variable names, and show you a list of the possible completions if there's
98 no unambiguous one. It will also complete filenames in the current directory.
99
100 This feature requires the readline and rlcomplete modules, so it won't work
101 if your Python lacks readline support (such as under Windows).
102
103 * Search previous command history in two ways (also requires readline):
104
105 - Start typing, and then use Ctrl-p (previous,up) and Ctrl-n (next,down) to
106 search through only the history items that match what you've typed so
107 far. If you use Ctrl-p/Ctrl-n at a blank prompt, they just behave like
108 normal arrow keys.
109
110 - Hit Ctrl-r: opens a search prompt. Begin typing and the system searches
111 your history for lines that match what you've typed so far, completing as
112 much as it can.
113
114 * Persistent command history across sessions (readline required).
115
116 * Logging of input with the ability to save and restore a working session.
117
118 * System escape with !. Typing !ls will run 'ls' in the current directory.
119
120 * The reload command does a 'deep' reload of a module: changes made to the
121 module since you imported will actually be available without having to exit.
122
123 * Verbose and colored exception traceback printouts. See the magic xmode and
124 xcolor functions for details (just type %magic).
125
126 * Input caching system:
127
128 IPython offers numbered prompts (In/Out) with input and output caching. All
129 input is saved and can be retrieved as variables (besides the usual arrow
130 key recall).
131
132 The following GLOBAL variables always exist (so don't overwrite them!):
133 _i: stores previous input.
134 _ii: next previous.
135 _iii: next-next previous.
136 _ih : a list of all input _ih[n] is the input from line n.
137
138 Additionally, global variables named _i<n> are dynamically created (<n>
139 being the prompt counter), such that _i<n> == _ih[<n>]
140
141 For example, what you typed at prompt 14 is available as _i14 and _ih[14].
142
143 You can create macros which contain multiple input lines from this history,
144 for later re-execution, with the %macro function.
145
146 The history function %hist allows you to see any part of your input history
147 by printing a range of the _i variables. Note that inputs which contain
148 magic functions (%) appear in the history with a prepended comment. This is
149 because they aren't really valid Python code, so you can't exec them.
150
151 * Output caching system:
152
153 For output that is returned from actions, a system similar to the input
154 cache exists but using _ instead of _i. Only actions that produce a result
155 (NOT assignments, for example) are cached. If you are familiar with
156 Mathematica, IPython's _ variables behave exactly like Mathematica's %
157 variables.
158
159 The following GLOBAL variables always exist (so don't overwrite them!):
160 _ (one underscore): previous output.
161 __ (two underscores): next previous.
162 ___ (three underscores): next-next previous.
163
164 Global variables named _<n> are dynamically created (<n> being the prompt
165 counter), such that the result of output <n> is always available as _<n>.
166
167 Finally, a global dictionary named _oh exists with entries for all lines
168 which generated output.
169
170 * Directory history:
171
172 Your history of visited directories is kept in the global list _dh, and the
173 magic %cd command can be used to go to any entry in that list.
174
175 * Auto-parentheses and auto-quotes (adapted from Nathan Gray's LazyPython)
176
177 1. Auto-parentheses
178 Callable objects (i.e. functions, methods, etc) can be invoked like
179 this (notice the commas between the arguments):
180 >>> callable_ob arg1, arg2, arg3
181 and the input will be translated to this:
182 --> callable_ob(arg1, arg2, arg3)
183 You can force auto-parentheses by using '/' as the first character
184 of a line. For example:
185 >>> /globals # becomes 'globals()'
186 Note that the '/' MUST be the first character on the line! This
187 won't work:
188 >>> print /globals # syntax error
189
190 In most cases the automatic algorithm should work, so you should
191 rarely need to explicitly invoke /. One notable exception is if you
192 are trying to call a function with a list of tuples as arguments (the
193 parenthesis will confuse IPython):
194 In [1]: zip (1,2,3),(4,5,6) # won't work
195 but this will work:
196 In [2]: /zip (1,2,3),(4,5,6)
197 ------> zip ((1,2,3),(4,5,6))
198 Out[2]= [(1, 4), (2, 5), (3, 6)]
199
200 IPython tells you that it has altered your command line by
201 displaying the new command line preceded by -->. e.g.:
202 In [18]: callable list
203 -------> callable (list)
204
205 2. Auto-Quoting
206 You can force auto-quoting of a function's arguments by using ',' as
207 the first character of a line. For example:
208 >>> ,my_function /home/me # becomes my_function("/home/me")
209
210 If you use ';' instead, the whole argument is quoted as a single
211 string (while ',' splits on whitespace):
212 >>> ,my_function a b c # becomes my_function("a","b","c")
213 >>> ;my_function a b c # becomes my_function("a b c")
214
215 Note that the ',' MUST be the first character on the line! This
216 won't work:
217 >>> x = ,my_function /home/me # syntax error
218 """
219
220 interactive_usage_min = """\
221 An enhanced console for Python.
222 Some of its features are:
223 - Readline support if the readline library is present.
224 - Tab completion in the local namespace.
225 - Logging of input, see command-line options.
226 - System shell escape via ! , eg !ls.
227 - Magic commands, starting with a % (like %ls, %pwd, %cd, etc.)
228 - Keeps track of locally defined variables via %who, %whos.
229 - Show object information with a ? eg ?x or x? (use ?? for more info).
230 """
231
232 quick_reference = r"""
233 IPython -- An enhanced Interactive Python - Quick Reference Card
234 ================================================================
235
236 obj?, obj?? : Get help, or more help for object (also works as
237 ?obj, ??obj).
238 ?foo.*abc* : List names in 'foo' containing 'abc' in them.
239 %magic : Information about IPython's 'magic' % functions.
240
241 Magic functions are prefixed by %, and typically take their arguments without
242 parentheses, quotes or even commas for convenience.
243
244 Example magic function calls:
245
246 %alias d ls -F : 'd' is now an alias for 'ls -F'
247 alias d ls -F : Works if 'alias' not a python name
248 alist = %alias : Get list of aliases to 'alist'
249 cd /usr/share : Obvious. cd -<tab> to choose from visited dirs.
250 %cd?? : See help AND source for magic %cd
251
252 System commands:
253
254 !cp a.txt b/ : System command escape, calls os.system()
255 cp a.txt b/ : after %rehashx, most system commands work without !
256 cp ${f}.txt $bar : Variable expansion in magics and system commands
257 files = !ls /usr : Capture system command output
258 files.s, files.l, files.n: "a b c", ['a','b','c'], 'a\nb\nc'
259
260 History:
261
262 _i, _ii, _iii : Previous, next previous, next next previous input
263 _i4, _ih[2:5] : Input history line 4, lines 2-4
264 exec _i81 : Execute input history line #81 again
265 %rep 81 : Edit input history line #81
266 _, __, ___ : previous, next previous, next next previous output
267 _dh : Directory history
268 _oh : Output history
269 %hist : Command history. '%hist -g foo' search history for 'foo'
270
271 Autocall:
272
273 f 1,2 : f(1,2)
274 /f 1,2 : f(1,2) (forced autoparen)
275 ,f 1 2 : f("1","2")
276 ;f 1 2 : f("1 2")
277
278 Remember: TAB completion works in many contexts, not just file names
279 or python names.
280
281 The following magic functions are currently available:
282
283 """
284
285 gui_reference = """\
286 ===============================
287 The graphical IPython console
288 ===============================
289
290 This console is designed to emulate the look, feel and workflow of a terminal
291 environment, while adding a number of enhancements that are simply not possible
292 in a real terminal, such as inline syntax highlighting, true multiline editing,
293 inline graphics and much more.
294
295 This quick reference document contains the basic information you'll need to
296 know to make the most efficient use of it. For the various command line
297 options available at startup, type ``--help`` at the command line.
298
299
300 Multiline editing
301 =================
302
303 The graphical console is capable of true multiline editing, but it also tries
304 to behave intuitively like a terminal when possible. If you are used to
305 IPython's old terminal behavior, you should find the transition painless, and
306 once you learn a few basic keybindings it will be a much more efficient
307 environment.
308
309 For single expressions or indented blocks, the console behaves almost like the
310 terminal IPython: single expressions are immediately evaluated, and indented
311 blocks are evaluated once a single blank line is entered::
312
313 In [1]: print "Hello IPython!" # Enter was pressed at the end of the line
314 Hello IPython!
315
316 In [2]: for i in range(10):
317 ...: print i,
318 ...:
319 0 1 2 3 4 5 6 7 8 9
320
321 If you want to enter more than one expression in a single input block
322 (something not possible in the terminal), you can use ``Control-Enter`` at the
323 end of your first line instead of ``Enter``. At that point the console goes
324 into 'cell mode' and even if your inputs are not indented, it will continue
325 accepting arbitrarily many lines until either you enter an extra blank line or
326 you hit ``Shift-Enter`` (the key binding that forces execution). When a
327 multiline cell is entered, IPython analyzes it and executes its code producing
328 an ``Out[n]`` prompt only for the last expression in it, while the rest of the
329 cell is executed as if it was a script. An example should clarify this::
330
331 In [3]: x=1 # Hit C-Enter here
332 ...: y=2 # from now on, regular Enter is sufficient
333 ...: z=3
334 ...: x**2 # This does *not* produce an Out[] value
335 ...: x+y+z # Only the last expression does
336 ...:
337 Out[3]: 6
338
339 The behavior where an extra blank line forces execution is only active if you
340 are actually typing at the keyboard each line, and is meant to make it mimic
341 the IPython terminal behavior. If you paste a long chunk of input (for example
342 a long script copied from an editor or web browser), it can contain arbitrarily
343 many intermediate blank lines and they won't cause any problems. As always,
344 you can then make it execute by appending a blank line *at the end* or hitting
345 ``Shift-Enter`` anywhere within the cell.
346
347 With the up arrow key, you can retrieve previous blocks of input that contain
348 multiple lines. You can move inside of a multiline cell like you would in any
349 text editor. When you want it executed, the simplest thing to do is to hit the
350 force execution key, ``Shift-Enter`` (though you can also navigate to the end
351 and append a blank line by using ``Enter`` twice).
352
353 If you've edited a multiline cell and accidentally navigate out of it with the
354 up or down arrow keys, IPython will clear the cell and replace it with the
355 contents of the one above or below that you navigated to. If this was an
356 accident and you want to retrieve the cell you were editing, use the Undo
357 keybinding, ``Control-z``.
358
359
360 Key bindings
361 ============
362
363 The IPython console supports most of the basic Emacs line-oriented keybindings,
364 in addition to some of its own.
365
366 The keybinding prefixes mean:
367
368 - ``C``: Control
369 - ``S``: Shift
370 - ``M``: Meta (typically the Alt key)
371
372 The keybindings themselves are:
373
374 - ``Enter``: insert new line (may cause execution, see above).
375 - ``C-Enter``: force new line, *never* causes execution.
376 - ``S-Enter``: *force* execution regardless of where cursor is, no newline added.
377 - ``C-c``: copy highlighted text to clipboard (prompts are automatically stripped).
378 - ``C-S-c``: copy highlighted text to clipboard (prompts are not stripped).
379 - ``C-v``: paste text from clipboard.
380 - ``C-z``: undo (retrieves lost text if you move out of a cell with the arrows).
381 - ``C-S-z``: redo.
382 - ``C-o``: move to 'other' area, between pager and terminal.
383 - ``C-l``: clear terminal.
384 - ``C-a``: go to beginning of line.
385 - ``C-e``: go to end of line.
386 - ``C-k``: kill from cursor to the end of the line.
387 - ``C-y``: yank (paste)
388 - ``C-p``: previous line (like up arrow)
389 - ``C-n``: next line (like down arrow)
390 - ``C-f``: forward (like right arrow)
391 - ``C-b``: back (like left arrow)
392 - ``C-d``: delete next character.
393 - ``M-<``: move to the beginning of the input region.
394 - ``M->``: move to the end of the input region.
395 - ``M-d``: delete next word.
396 - ``M-Backspace``: delete previous word.
397 - ``C-.``: force a kernel restart (a confirmation dialog appears).
398 - ``C-+``: increase font size.
399 - ``C--``: decrease font size.
400
401 The IPython pager
402 =================
403
404 IPython will show long blocks of text from many sources using a builtin pager.
405 You can control where this pager appears with the ``--paging`` command-line
406 flag:
407
408 - ``inside`` [default]: the pager is overlaid on top of the main terminal. You
409 must quit the pager to get back to the terminal (similar to how a pager such
410 as ``less`` or ``more`` works).
411
412 - ``vsplit``: the console is made double-tall, and the pager appears on the
413 bottom area when needed. You can view its contents while using the terminal.
414
415 - ``hsplit``: the console is made double-wide, and the pager appears on the
416 right area when needed. You can view its contents while using the terminal.
417
418 - ``none``: the console never pages output.
419
420 If you use the vertical or horizontal paging modes, you can navigate between
421 terminal and pager as follows:
422
423 - Tab key: goes from pager to terminal (but not the other way around).
424 - Control-o: goes from one to another always.
425 - Mouse: click on either.
426
427 In all cases, the ``q`` or ``Escape`` keys quit the pager (when used with the
428 focus on the pager area).
429
430 Running subprocesses
431 ====================
432
433 The graphical IPython console uses the ``pexpect`` module to run subprocesses
434 when you type ``!command``. This has a number of advantages (true asynchronous
435 output from subprocesses as well as very robust termination of rogue
436 subprocesses with ``Control-C``), as well as some limitations. The main
437 limitation is that you can *not* interact back with the subprocess, so anything
438 that invokes a pager or expects you to type input into it will block and hang
439 (you can kill it with ``Control-C``).
440
441 We have provided as magics ``%less`` to page files (aliased to ``%more``),
442 ``%clear`` to clear the terminal, and ``%man`` on Linux/OSX. These cover the
443 most common commands you'd want to call in your subshell and that would cause
444 problems if invoked via ``!cmd``, but you need to be aware of this limitation.
445
446 Display
447 =======
448
449 The IPython console can now display objects in a variety of formats, including
450 HTML, PNG and SVG. This is accomplished using the display functions in
451 ``IPython.core.display``::
452
453 In [4]: from IPython.core.display import display, display_html
454
455 In [5]: from IPython.core.display import display_png, display_svg
456
457 Python objects can simply be passed to these functions and the appropriate
458 representations will be displayed in the console as long as the objects know
459 how to compute those representations. The easiest way of teaching objects how
460 to format themselves in various representations is to define special methods
461 such as: ``__html``, ``__svg__`` and ``__png__``. IPython's display formatters
462 can also be given custom formatter functions for various types::
463
464 In [6]: ip = get_ipython()
465
466 In [7]: html_formatter = ip.display_formatter.formatters['text/html']
467
468 In [8]: html_formatter.for_type(Foo, foo_to_html)
469
470 For further details, see ``IPython.core.formatters``.
471
472 Inline matplotlib graphics
473 ==========================
474
475 The IPython console is capable of displaying matplotlib figures inline, in SVG
476 format. If started with the ``--pylab inline`` flag, then all figures are
477 rendered inline automatically. If started with ``--pylab`` or ``--pylab <your
478 backend>``, then a GUI backend will be used, but IPython's ``display()`` and
479 ``getfigs()`` functions can be used to view plots inline::
480
481 In [9]: display(*getfigs()) # display all figures inline
482
483 In[10]: display(*getfigs(1,2)) # display figures 1 and 2 inline
484 """
485
486
487 quick_guide = """\
488 ? -> Introduction and overview of IPython's features.
489 %quickref -> Quick reference.
490 help -> Python's own help system.
491 object? -> Details about 'object', use 'object??' for extra details.
492 """
493
494 gui_note = """\
495 %guiref -> A brief reference about the graphical user interface.
496 """
497
498 default_banner_parts = [
499 'Python %s\n' % (sys.version.split('\n')[0],),
500 'Type "copyright", "credits" or "license" for more information.\n\n',
501 'IPython %s -- An enhanced Interactive Python.\n' % (release.version,),
502 quick_guide
503 ]
504
505 default_gui_banner_parts = default_banner_parts + [gui_note]
506
507 default_banner = ''.join(default_banner_parts)
508
509 default_gui_banner = ''.join(default_gui_banner_parts)
510
[end of IPython/core/usage.py]
[start of IPython/quarantine/InterpreterExec.py]
1 # -*- coding: utf-8 -*-
2 """Modified input prompt for executing files.
3
4 We define a special input line filter to allow typing lines which begin with
5 '~', '/' or '.'. If one of those strings is encountered, it is automatically
6 executed.
7 """
8
9 #*****************************************************************************
10 # Copyright (C) 2004 W.J. van der Laan <[email protected]>
11 # Copyright (C) 2004-2006 Fernando Perez <[email protected]>
12 #
13 # Distributed under the terms of the BSD License. The full license is in
14 # the file COPYING, distributed as part of this software.
15 #*****************************************************************************
16
17
18 def prefilter_shell(self,line,continuation):
19 """Alternate prefilter, modified for shell-like functionality.
20
21 - Execute all lines beginning with '~', '/' or '.'
22 - $var=cmd <=> %sc var=cmd
23 - $$var=cmd <=> %sc -l var=cmd
24 """
25
26 if line:
27 l0 = line[0]
28 if l0 in '~/.':
29 return self._prefilter("!%s"%line,continuation)
30 elif l0=='$':
31 lrest = line[1:]
32 if lrest.startswith('$'):
33 # $$var=cmd <=> %sc -l var=cmd
34 return self._prefilter("%ssc -l %s" % (self.ESC_MAGIC,lrest[1:]),
35 continuation)
36 else:
37 # $var=cmd <=> %sc var=cmd
38 return self._prefilter("%ssc %s" % (self.ESC_MAGIC,lrest),
39 continuation)
40 else:
41 return self._prefilter(line,continuation)
42 else:
43 return self._prefilter(line,continuation)
44
45 # Rebind this to be the new IPython prefilter:
46 from IPython.core.iplib import InteractiveShell
47 InteractiveShell.prefilter = prefilter_shell
48 # Clean up the namespace.
49 del InteractiveShell,prefilter_shell
50
51 # Provide pysh and further shell-oriented services
52 import os,sys,shutil
53 from IPython.utils.process import system,shell,getoutput,getoutputerror
54
55 # Short aliases for getting shell output as a string and a list
56 sout = getoutput
57 lout = lambda cmd: getoutput(cmd,split=1)
58
59 # Empty function, meant as a docstring holder so help(pysh) works.
60 def pysh():
61 """Pysh is a set of modules and extensions to IPython which make shell-like
62 usage with Python syntax more convenient. Keep in mind that pysh is NOT a
63 full-blown shell, so don't try to make it your /etc/passwd entry!
64
65 In particular, it has no job control, so if you type Ctrl-Z (under Unix),
66 you'll suspend pysh itself, not the process you just started.
67
68 Since pysh is really nothing but a customized IPython, you should
69 familiarize yourself with IPython's features. This brief help mainly
70 documents areas in which pysh differs from the normal IPython.
71
72 ALIASES
73 -------
74 All of your $PATH has been loaded as IPython aliases, so you should be
75 able to type any normal system command and have it executed. See %alias?
76 and %unalias? for details on the alias facilities.
77
78 SPECIAL SYNTAX
79 --------------
80 Any lines which begin with '~', '/' and '.' will be executed as shell
81 commands instead of as Python code. The special escapes below are also
82 recognized. !cmd is valid in single or multi-line input, all others are
83 only valid in single-line input:
84
85 !cmd - pass 'cmd' directly to the shell
86 !!cmd - execute 'cmd' and return output as a list (split on '\\n')
87 $var=cmd - capture output of cmd into var, as a string
88 $$var=cmd - capture output of cmd into var, as a list (split on '\\n')
89
90 The $/$$ syntaxes make Python variables from system output, which you can
91 later use for further scripting. The converse is also possible: when
92 executing an alias or calling to the system via !/!!, you can expand any
93 python variable or expression by prepending it with $. Full details of
94 the allowed syntax can be found in Python's PEP 215.
95
96 A few brief examples will illustrate these:
97
98 fperez[~/test]|3> !ls *s.py
99 scopes.py strings.py
100
101 ls is an internal alias, so there's no need to use !:
102 fperez[~/test]|4> ls *s.py
103 scopes.py* strings.py
104
105 !!ls will return the output into a Python variable:
106 fperez[~/test]|5> !!ls *s.py
107 <5> ['scopes.py', 'strings.py']
108 fperez[~/test]|6> print _5
109 ['scopes.py', 'strings.py']
110
111 $ and $$ allow direct capture to named variables:
112 fperez[~/test]|7> $astr = ls *s.py
113 fperez[~/test]|8> astr
114 <8> 'scopes.py\\nstrings.py'
115
116 fperez[~/test]|9> $$alist = ls *s.py
117 fperez[~/test]|10> alist
118 <10> ['scopes.py', 'strings.py']
119
120 alist is now a normal python list you can loop over. Using $ will expand
121 back the python values when alias calls are made:
122 fperez[~/test]|11> for f in alist:
123 |..> print 'file',f,
124 |..> wc -l $f
125 |..>
126 file scopes.py 13 scopes.py
127 file strings.py 4 strings.py
128
129 Note that you may need to protect your variables with braces if you want
130 to append strings to their names. To copy all files in alist to .bak
131 extensions, you must use:
132 fperez[~/test]|12> for f in alist:
133 |..> cp $f ${f}.bak
134
135 If you try using $f.bak, you'll get an AttributeError exception saying
136 that your string object doesn't have a .bak attribute. This is because
137 the $ expansion mechanism allows you to expand full Python expressions:
138 fperez[~/test]|13> echo "sys.platform is: $sys.platform"
139 sys.platform is: linux2
140
141 IPython's input history handling is still active, which allows you to
142 rerun a single block of multi-line input by simply using exec:
143 fperez[~/test]|14> $$alist = ls *.eps
144 fperez[~/test]|15> exec _i11
145 file image2.eps 921 image2.eps
146 file image.eps 921 image.eps
147
148 While these are new special-case syntaxes, they are designed to allow very
149 efficient use of the shell with minimal typing. At an interactive shell
150 prompt, conciseness of expression wins over readability.
151
152 USEFUL FUNCTIONS AND MODULES
153 ----------------------------
154 The os, sys and shutil modules from the Python standard library are
155 automatically loaded. Some additional functions, useful for shell usage,
156 are listed below. You can request more help about them with '?'.
157
158 shell - execute a command in the underlying system shell
159 system - like shell(), but return the exit status of the command
160 sout - capture the output of a command as a string
161 lout - capture the output of a command as a list (split on '\\n')
162 getoutputerror - capture (output,error) of a shell command
163
164 sout/lout are the functional equivalents of $/$$. They are provided to
165 allow you to capture system output in the middle of true python code,
166 function definitions, etc (where $ and $$ are invalid).
167
168 DIRECTORY MANAGEMENT
169 --------------------
170 Since each command passed by pysh to the underlying system is executed in
171 a subshell which exits immediately, you can NOT use !cd to navigate the
172 filesystem.
173
174 Pysh provides its own builtin '%cd' magic command to move in the
175 filesystem (the % is not required with automagic on). It also maintains a
176 list of visited directories (use %dhist to see it) and allows direct
177 switching to any of them. Type 'cd?' for more details.
178
179 %pushd, %popd and %dirs are provided for directory stack handling.
180
181 PROMPT CUSTOMIZATION
182 --------------------
183
184 The supplied ipythonrc-pysh profile comes with an example of a very
185 colored and detailed prompt, mainly to serve as an illustration. The
186 valid escape sequences, besides color names, are:
187
188 \\# - Prompt number.
189 \\D - Dots, as many as there are digits in \\# (so they align).
190 \\w - Current working directory (cwd).
191 \\W - Basename of current working directory.
192 \\XN - Where N=0..5. N terms of the cwd, with $HOME written as ~.
193 \\YN - Where N=0..5. Like XN, but if ~ is term N+1 it's also shown.
194 \\u - Username.
195 \\H - Full hostname.
196 \\h - Hostname up to first '.'
197 \\$ - Root symbol ($ or #).
198 \\t - Current time, in H:M:S format.
199 \\v - IPython release version.
200 \\n - Newline.
201 \\r - Carriage return.
202 \\\\ - An explicitly escaped '\\'.
203
204 You can configure your prompt colors using any ANSI color escape. Each
205 color escape sets the color for any subsequent text, until another escape
206 comes in and changes things. The valid color escapes are:
207
208 \\C_Black
209 \\C_Blue
210 \\C_Brown
211 \\C_Cyan
212 \\C_DarkGray
213 \\C_Green
214 \\C_LightBlue
215 \\C_LightCyan
216 \\C_LightGray
217 \\C_LightGreen
218 \\C_LightPurple
219 \\C_LightRed
220 \\C_Purple
221 \\C_Red
222 \\C_White
223 \\C_Yellow
224 \\C_Normal - Stop coloring, defaults to your terminal settings.
225 """
226 pass
227
228 # Configure a few things. Much of this is fairly hackish, since IPython
229 # doesn't really expose a clean API for it. Be careful if you start making
230 # many modifications here.
231
232
233 # Set the 'cd' command to quiet mode, a more shell-like behavior
234 __IPYTHON__.default_option('cd','-q')
235
236 # This is redundant, ipy_user_conf.py will determine this
237 # Load all of $PATH as aliases
238 __IPYTHON__.magic_rehashx()
239
240 # Remove %sc,%sx if present as aliases
241 __IPYTHON__.magic_unalias('sc')
242 __IPYTHON__.magic_unalias('sx')
243
244 # We need different criteria for line-splitting, so that aliases such as
245 # 'gnome-terminal' are interpreted as a single alias instead of variable
246 # 'gnome' minus variable 'terminal'.
247 import re
248 __IPYTHON__.line_split = re.compile(r'^([\s*,;/])'
249 r'([\?\w\.\-\+]+\w*\s*)'
250 r'(\(?.*$)')
251
252 # Namespace cleanup
253 del re
254
[end of IPython/quarantine/InterpreterExec.py]
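As a quick check of why this pattern keeps `gnome-terminal` together as a single alias token, it can be exercised on its own; the sample command line below is invented, and note that the first capture group expects exactly one leading whitespace or escape character:

```python
import re

# Same pattern as the __IPYTHON__.line_split assignment above.
line_split = re.compile(r'^([\s*,;/])'
                        r'([\?\w\.\-\+]+\w*\s*)'
                        r'(\(?.*$)')

m = line_split.match(' gnome-terminal --geometry=80x24')
print(m.groups())
# (' ', 'gnome-terminal ', '--geometry=80x24') -- the '-' stays inside the command
# token instead of splitting it into variable 'gnome' minus variable 'terminal'.
```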
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
e61c64c0c2f92e3ac668a3a2dad43fe87e5c2b25
|
crash of ipython when alias is used with %s and echo
This bug is present in aliases of magic commands:
In [1]: alias parts echo first %s second %s
In [2]: parts A
After executing the first line, when we give the command `parts A`,
instead of printing
"Incorrect number of arguments: 2 expected.", IPython crashes.
|
Replicated in trunk
|
2011-05-04T21:48:38Z
|
<patch>
diff --git a/IPython/core/interactiveshell.py b/IPython/core/interactiveshell.py
--- a/IPython/core/interactiveshell.py
+++ b/IPython/core/interactiveshell.py
@@ -38,7 +38,7 @@
from IPython.core import prefilter
from IPython.core import shadowns
from IPython.core import ultratb
-from IPython.core.alias import AliasManager
+from IPython.core.alias import AliasManager, AliasError
from IPython.core.autocall import ExitAutocall
from IPython.core.builtin_trap import BuiltinTrap
from IPython.core.compilerop import CachingCompiler
@@ -2129,9 +2129,18 @@ def run_cell(self, raw_cell, store_history=True):
cell = self.input_splitter.source_reset()
with self.builtin_trap:
+ prefilter_failed = False
if len(cell.splitlines()) == 1:
- cell = self.prefilter_manager.prefilter_lines(cell)
-
+ try:
+ cell = self.prefilter_manager.prefilter_lines(cell)
+ except AliasError as e:
+ error(e)
+ prefilter_failed=True
+ except Exception:
+ # don't allow prefilter errors to crash IPython
+ self.showtraceback()
+ prefilter_failed = True
+
# Store raw and processed history
if store_history:
self.history_manager.store_inputs(self.execution_count,
@@ -2139,30 +2148,32 @@ def run_cell(self, raw_cell, store_history=True):
self.logger.log(cell, raw_cell)
- cell_name = self.compile.cache(cell, self.execution_count)
+ if not prefilter_failed:
+ # don't run if prefilter failed
+ cell_name = self.compile.cache(cell, self.execution_count)
- with self.display_trap:
- try:
- code_ast = ast.parse(cell, filename=cell_name)
- except (OverflowError, SyntaxError, ValueError, TypeError,
- MemoryError):
- self.showsyntaxerror()
- self.execution_count += 1
- return None
-
- self.run_ast_nodes(code_ast.body, cell_name,
- interactivity="last_expr")
-
- # Execute any registered post-execution functions.
- for func, status in self._post_execute.iteritems():
- if not status:
- continue
+ with self.display_trap:
try:
- func()
- except:
- self.showtraceback()
- # Deactivate failing function
- self._post_execute[func] = False
+ code_ast = ast.parse(cell, filename=cell_name)
+ except (OverflowError, SyntaxError, ValueError, TypeError,
+ MemoryError):
+ self.showsyntaxerror()
+ self.execution_count += 1
+ return None
+
+ self.run_ast_nodes(code_ast.body, cell_name,
+ interactivity="last_expr")
+
+ # Execute any registered post-execution functions.
+ for func, status in self._post_execute.iteritems():
+ if not status:
+ continue
+ try:
+ func()
+ except:
+ self.showtraceback()
+ # Deactivate failing function
+ self._post_execute[func] = False
if store_history:
# Write output to the database. Does nothing unless
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-377
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
make test fails locally
<!--- Provide a general summary of the issue in the Title above -->
`make test` fails locally, possibly due to the configuration of hubs in the Qconfig.
This has not been an issue with travis and submitted PRs.
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
`python -m unittest -v test.python.test_api_ibmq.TestApiHub`
</issue>
<code>
[start of README.md]
1 # Quantum Information Software Kit (QISKit)
2
3 [](https://pypi.python.org/pypi/qiskit)
4 [](https://travis-ci.org/QISKit/qiskit-sdk-py)
5
6 The Quantum Information Software Kit (**QISKit** for short) is a software development kit (SDK) for
7 working with [OpenQASM](https://github.com/QISKit/qiskit-openqasm) and the
8 [IBM Q Experience (QX)](https://quantumexperience.ng.bluemix.net/).
9
10 Use **QISKit** to create quantum computing programs, compile them, and execute them on one of
11 several backends (online Real quantum processors, online simulators, and local simulators). For
12 the online backends, QISKit uses our [python API client](https://github.com/QISKit/qiskit-api-py)
13 to connect to the IBM Q Experience.
14
15 **We use GitHub issues for tracking requests and bugs. Please see the**
16 [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) **for
17 questions and discussion.**
18
19 **If you'd like to contribute to QISKit, please take a look at our**
20 [contribution guidelines](CONTRIBUTING.rst).
21
22 Links to Sections:
23
24 * [Installation](#installation)
25 * [Creating your first Quantum Program](#creating-your-first-quantum-program)
26 * [More Information](#more-information)
27 * [Authors](#authors-alphabetical)
28 * [License](#license)
29
30 ## Installation
31
32 ### Dependencies
33
34 [Python 3.5 or later](https://www.python.org/downloads/) is needed for using QISKit. In
35 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended
36 for interacting with the tutorials.
37 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads)
38 python distribution, as it comes with all of these dependencies pre-installed.
39
40 In addition, a basic understanding of quantum information is very helpful when interacting with
41 QISKit. If you're new to quantum, start with our
42 [User Guides](https://github.com/QISKit/ibmqx-user-guides)!
43
44 ### Installation
45
46 We encourage installing QISKit via the PIP tool (a Python package manager):
47
48 ```
49 pip install qiskit
50 ```
51
52 PIP will handle all dependencies automatically, and you will always install the latest (and well-tested) version.
53
54 The PIP package comes with prebuilt binaries for these platforms:
55
56 * Linux x86_64
57 * Darwin
58 * Win64
59
60 If your platform is not in the list, PIP will try to build from the sources at installation time. This requires CMake 3.5 or higher to be pre-installed, along with at least one of the [build environments supported by CMake](https://cmake.org/cmake/help/v3.5/manual/cmake-generators.7.html).
61
62 If PIP doesn't succeed in building during the installation, don't worry: you will still have QISKit installed at the end, but you probably won't be able to take advantage of some of the high-performance components. In any case, we always provide a pure-Python, not-so-fast alternative as a fallback.
63
64
65 #### Setup your environment
66
67 We recommend using python virtual environments to improve your experience. Refer to our
68 [Environment Setup documentation](doc/install.rst#3.1-Setup-the-environment) for more information.
69
70 ## Creating your first Quantum Program
71
72 Now that the SDK is installed, it's time to begin working with QISKit.
73
74 We are ready to try out a quantum circuit example, which runs via the local simulator.
75
76 This is a simple example that makes an entangled state.
77
78 ```python
79 # Import the QISKit SDK
80 import qiskit
81
82 # Create a Quantum Register called "qr" with 2 qubits
83 qr = qiskit.QuantumRegister("qr", 2)
84 # Create a Classical Register called "cr" with 2 bits
85 cr = qiskit.ClassicalRegister("cr", 2)
86 # Create a Quantum Circuit called involving "qr" and "cr"
87 qc = qiskit.QuantumCircuit(qr, cr)
88
89 # Add a H gate on the 0th qubit in "qr", putting this qubit in superposition.
90 qc.h(qr[0])
91 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
92 # the qubits in a Bell state.
93 qc.cx(qr[0], qr[1])
94 # Add a Measure gate to see the state.
95 # (Omitting the index applies an operation on all qubits of the register(s))
96 qc.measure(qr, cr)
97
98 # Create a Quantum Program for execution
99 qp = qiskit.QuantumProgram()
100 # Add the circuit you created to it, and call it the "bell" circuit.
101 # (You can add multiple circuits to the same program, for batch execution)
102 qp.add_circuit("bell", qc)
103
104 # See a list of available local simulators
105 print("Local backends: ", qiskit.backends.discover_local_backends())
106
107 # Compile and run the Quantum Program on a simulator backend
108 sim_result = qp.execute("bell", backend='local_qasm_simulator', shots=1024, seed=1)
109
110 # Show the results
111 print("simulation: ", sim_result)
112 print(sim_result.get_counts("bell"))
113 ```
114
115 In this case, the output will be:
116
117 ```
118 COMPLETED
119 {'counts': {'00': 512, '11': 512}}
120 ```
121
122 This script is available [here](examples/python/hello_quantum.py), where we also show how to
123 run the same program on a real quantum computer.
124
125 ### Executing your code on a real Quantum chip
126
127 You can also use QISKit to execute your code on a
128 [real quantum chip](https://github.com/QISKit/ibmqx-backend-information).
129 In order to do so, you need to configure the SDK for using the credentials in
130 your IBM Q Experience account:
131
132
133 #### Configure your API token and QX credentials
134
135
136 1. Create an _[IBM Q Experience](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
137 2. Get an API token from the IBM Q Experience website under _My Account > Advanced > API Token_. This API token allows you to execute your programs with the IBM Q Experience backends. See: [Example](doc/example_real_backend.rst).
138 3. We are going to create a new file called `Qconfig.py` and insert the API token into it. This file must have these contents:
139
140 ```python
141 APItoken = 'MY_API_TOKEN'
142
143 config = {
144 'url': 'https://quantumexperience.ng.bluemix.net/api',
145 # The following should only be needed for IBM Q Network users.
146 'hub': 'MY_HUB',
147 'group': 'MY_GROUP',
148 'project': 'MY_PROJECT'
149 }
150 ```
151
152 4. Substitute `MY_API_TOKEN` with your real API Token extracted in step 2.
153
154 5. If you have access to the IBM Q Network features, you also need to set up the
155 values for your hub, group, and project. You can do so by filling the
156 `config` variable with the values you can find on your IBM Q account
157 page.
158
159 Once the `Qconfig.py` file is set up, you have to move it under the same directory/folder where your program/tutorial resides, so it can be imported and be used to authenticate with the `set_api()` function. For example:
160
161 ```python
162 from qiskit import QuantumProgram
163 import Qconfig
164
165 # Creating Programs create your first QuantumProgram object instance.
166 qp = QuantumProgram()
167 qp.set_api(Qconfig.APItoken, Qconfig.config["url"],
168 hub=Qconfig.config["hub"],
169 group=Qconfig.config["group"],
170 project=Qconfig.config["project"])
171 ```
172
173 For more details on this and more information see
174 [our QISKit documentation](https://www.qiskit.org/documentation/).
175
176
177 ### Next Steps
178
179 Now you're set up and ready to check out some of the other examples from our
180 [Tutorial](https://github.com/QISKit/qiskit-tutorial) repository. Start with the
181 [index tutorial](https://github.com/QISKit/qiskit-tutorial/blob/master/index.ipynb) and then go to
182 the [‘Getting Started’ example](https://github.com/QISKit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb).
183 If you already have [Jupyter Notebooks installed](https://jupyter.readthedocs.io/en/latest/install.html),
184 you can copy and modify the notebooks to create your own experiments.
185
186 To install the tutorials as part of the QISKit SDK, see the following
187 [installation details](doc/install.rst#Install-Jupyter-based-tutorials). Complete SDK
188 documentation can be found in the [*doc* directory](doc/qiskit.rst) and in
189 [the official QISKit site](https://www.qiskit.org/documentation).
190
191 ## More Information
192
193 For more information on how to use QISKit, tutorial examples, and other helpful links, take a look
194 at these resources:
195
196 * **[User Guides](https://github.com/QISKit/ibmqx-user-guides)**,
197 a good starting place for learning about quantum information and computing
198 * **[Tutorials](https://github.com/QISKit/qiskit-tutorial)**,
199 for example notebooks, start with the [index](https://github.com/QISKit/qiskit-tutorial/blob/master/index.ipynb) and [‘Getting Started’ Jupyter notebook](https://github.com/QISKit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb)
200 * **[OpenQASM](https://github.com/QISKit/openqasm)**,
201 for additional information and examples of QASM code
202 * **[IBM Quantum Experience Composer](https://quantumexperience.ng.bluemix.net/qx/editor)**,
203 a GUI for interacting with real and simulated quantum computers
204 * **[QISkit Python API](https://github.com/QISKit/qiskit-api-py)**, an API to use the IBM Quantum
205 Experience in Python
206
207 QISKit was originally developed by researchers and developers on the
208 [IBM-Q](http://www.research.ibm.com/ibm-q/) Team at [IBM Research](http://www.research.ibm.com/),
209 with the aim of offering a high level development kit to work with quantum computers.
210
211 Visit the [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) for
212 questions and discussions on QISKit and quantum computing more broadly. If you'd like to
213 contribute to QISKit, please take a look at our [contribution guidelines](CONTRIBUTING.rst).
214
215 ## Multilanguage guide
216
217 * **[Korean Translation](doc/ko/README.md)** - basic guideline written in Korean.
218 * **[Chinese Translation](doc/zh/README.md)** - basic guideline written in Chinese.
219
220 ## Authors (alphabetical)
221
222 QISKit was originally authored by
223 Luciano Bello, Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Juan Gomez,
224 Ali Javadi-Abhari, Paco Martin, Diego Moreda, Jesus Perez, Erick Winston and Chris Wood.
225
226 And continues to grow with the help and work of [many people](CONTRIBUTORS.md) who contribute
227 to the project at different levels.
228
229 ## License
230
231 This project uses the [Apache License Version 2.0 software license](https://www.apache.org/licenses/LICENSE-2.0).
232
[end of README.md]
[start of doc/conf.py]
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # QISKit documentation build configuration file, created by
5 # sphinx-quickstart on Tue Jul 25 18:13:28 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 import os
21 import sys
22 from qiskit import __version__
23 sys.path.insert(0, os.path.abspath('.'))
24
25 # Imported manually, as otherwise it will not be fully imported.
26 import qiskit.extensions.qiskit_simulator
27
28 # -- General configuration ------------------------------------------------
29
30 # If your documentation needs a minimal Sphinx version, state it here.
31 #
32 # needs_sphinx = '1.0'
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = ['sphinx.ext.autodoc',
38 'sphinx.ext.autosummary',
39 'sphinx.ext.napoleon',
40 'sphinx.ext.doctest',
41 'sphinx.ext.coverage',
42 'sphinx.ext.mathjax',
43 'sphinx.ext.viewcode',
44 'sphinx.ext.githubpages',
45 'sphinxcontrib.fulltoc']
46
47 autodoc_default_flags = ['members', 'undoc-members', 'show-inheritance',
48 'inherited-members']
49
50 # Napoleon settings
51 napoleon_google_docstring = True
52 napoleon_numpy_docstring = False
53 napoleon_include_init_with_doc = True
54 napoleon_include_private_with_doc = False
55 napoleon_include_special_with_doc = False
56 napoleon_use_admonition_for_examples = False
57 napoleon_use_admonition_for_notes = False
58 napoleon_use_admonition_for_references = False
59 napoleon_use_ivar = False
60 napoleon_use_param = True
61 napoleon_use_rtype = True
62
63 # Add any paths that contain templates here, relative to this directory.
64 templates_path = ['_templates']
65
66 # The suffix(es) of source filenames.
67 # You can specify multiple suffix as a list of string:
68 #
69 # source_suffix = ['.rst', '.md']
70 source_suffix = '.rst'
71
72 # The master toctree document.
73 master_doc = 'index'
74
75 # General information about the project.
76 project = 'QISKit SDK'
77 copyright = '2017-2018 IBM Research'
78 author = 'IBM Research'
79
80 # Add description
81 html_context = {
82 'description': 'Quantum Information Software Kit'
83 }
84
85 # The version info for the project you're documenting, acts as replacement for
86 # |version| and |release|, also used in various other places throughout the
87 # built documents.
88 #
89 # The short X.Y version.
90 version = __version__
91 # The full version, including alpha/beta/rc tags.
92 release = version
93
94 # The language for content autogenerated by Sphinx. Refer to documentation
95 # for a list of supported languages.
96 #
97 # This is also used if you do content translation via gettext catalogs.
98 # Usually you set "language" from the command line for these cases.
99 language = None
100
101 # List of patterns, relative to source directory, that match files and
102 # directories to ignore when looking for source files.
103 # This patterns also effect to html_static_path and html_extra_path
104 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '_autodoc/modules.rst', 'ja']
105
106 # The name of the Pygments (syntax highlighting) style to use.
107 pygments_style = 'sphinx'
108
109 # If true, `todo` and `todoList` produce output, else they produce nothing.
110 todo_include_todos = False
111
112
113 # -- Options for HTML output ----------------------------------------------
114
115 # The theme to use for HTML and HTML Help pages. See the documentation for
116 # a list of builtin themes.
117 #
118 # html_theme = 'alabaster'
119 # html_theme = 'bizstyle'
120 # html_theme = agogo
121
122 html_theme = 'theme' # use the theme in subdir 'theme'
123 html_theme_path = ['./'] # make sphinx search for themes in current dir
124
125
126 # Theme options are theme-specific and customize the look and feel of a theme
127 # further. For a list of options available for each theme, see the
128 # documentation.
129 #
130 html_theme_options = {}
131
132 # Add any paths that contain custom static files (such as style sheets) here,
133 # relative to this directory. They are copied after the builtin static files,
134 # so a file named "default.css" will overwrite the builtin "default.css".
135 html_static_path = []
136
137 # The name of an image file (relative to this directory) to place at the top
138 # of the sidebar.
139 html_logo = 'theme/static/qiskit-logo-white-no-margin.gif'
140
141 html_favicon = 'theme/static/favicon.ico'
142
143 html_last_updated_fmt = '%Y/%m/%d'
144
145 # -- Options for HTMLHelp output ------------------------------------------
146
147 # Output file base name for HTML help builder.
148 htmlhelp_basename = 'QISKitdoc'
149
150
151 # -- Options for LaTeX output ---------------------------------------------
152
153 latex_elements = {
154 # The paper size ('letterpaper' or 'a4paper').
155 #
156 # 'papersize': 'letterpaper',
157
158 # The font size ('10pt', '11pt' or '12pt').
159 #
160 # 'pointsize': '10pt',
161
162 # Additional stuff for the LaTeX preamble.
163 #
164 # 'preamble': '',
165
166 # Latex figure (float) alignment
167 #
168 # 'figure_align': 'htbp',
169 }
170
171 # Grouping the document tree into LaTeX files. List of tuples
172 # (source start file, target name, title,
173 # author, documentclass [howto, manual, or own class]).
174 latex_documents = [
175 (master_doc, 'QISKit.tex', 'QISKit Documentation',
176 '''Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Jesus Perez,
177 and John Smolin''', 'manual'),
178 ]
179
180
181 # -- Options for manual page output ---------------------------------------
182
183 # One entry per manual page. List of tuples
184 # (source start file, name, description, authors, manual section).
185 man_pages = [
186 (master_doc, 'qiskit', 'QISKit Documentation',
187 [author], 1)
188 ]
189
190
191 # -- Options for Texinfo output -------------------------------------------
192
193 # Grouping the document tree into Texinfo files. List of tuples
194 # (source start file, target name, title, author,
195 # dir menu entry, description, category)
196 texinfo_documents = [
197 (master_doc, 'QISKit', 'QISKit Documentation',
198 author, 'QISKit', 'One line description of project.',
199 'Miscellaneous'),
200 ]
201
202
203 # Avoid a warning and treat the docstrings of the QasmLexer tokens as verbatim,
204 # as PLY uses docstring as a way to define the patterns the token matches.
205 def remove_module_docstring(app, what, name, obj, options, lines):
206 if name.startswith('qiskit.qasm._qasmlexer.QasmLexer.t_') and lines:
207 lines[0] = u'Token matching: ``%s``' % lines[0]
208
209
210 def setup(app):
211 app.connect('autodoc-process-docstring', remove_module_docstring)
212
[end of doc/conf.py]
[start of examples/python/rippleadd-async.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # =============================================================================
17
18 """
19 Ripple adder example based on Cuccaro et al., quant-ph/0410184.
20
21 Note: if you have only cloned the QISKit repository but not
22 used `pip install`, the examples only work from the root directory.
23 """
24
25 import time
26 from qiskit import QuantumProgram
27 from qiskit import QuantumCircuit
28
29 import Qconfig
30
31 online_backend = "ibmqx_qasm_simulator"
32 local_backend = "local_qasm_simulator"
33
34 # Whether we have connection with API servers or not. If not, we only launch
35 # jobs to the local simulator
36 offline = False
37 NUM_JOBS = 2 # TODO Parameterize
38 n = 2
39 QPS_SPECS = {
40 "circuits": [
41 {
42 "name": "rippleadd",
43 "quantum_registers": [
44 {"name": "a",
45 "size": n},
46 {"name": "b",
47 "size": n},
48 {"name": "cin",
49 "size": 1},
50 {"name": "cout",
51 "size": 1}
52 ],
53 "classical_registers": [
54 {"name": "ans",
55 "size": n + 1},
56 ]
57 }
58 ]
59 }
60
61 qp = QuantumProgram(specs=QPS_SPECS)
62 qc = qp.get_circuit("rippleadd")
63 a = qp.get_quantum_register("a")
64 b = qp.get_quantum_register("b")
65 cin = qp.get_quantum_register("cin")
66 cout = qp.get_quantum_register("cout")
67 ans = qp.get_classical_register("ans")
68
69
70 def majority(p, a, b, c):
71 """Majority gate."""
72 p.cx(c, b)
73 p.cx(c, a)
74 p.ccx(a, b, c)
75
76
77 def unmajority(p, a, b, c):
78 """Unmajority gate."""
79 p.ccx(a, b, c)
80 p.cx(c, a)
81 p.cx(a, b)
82
83
84 # Build a temporary subcircuit that adds a to b,
85 # storing the result in b
86 adder_subcircuit = QuantumCircuit(cin, a, b, cout)
87 majority(adder_subcircuit, cin[0], b[0], a[0])
88 for j in range(n - 1):
89 majority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
90 adder_subcircuit.cx(a[n - 1], cout[0])
91 for j in reversed(range(n - 1)):
92 unmajority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
93 unmajority(adder_subcircuit, cin[0], b[0], a[0])
94
95 # Set the inputs to the adder
96 qc.x(a[0]) # Set input a = 0...0001
97 qc.x(b) # Set input b = 1...1111
98 # Apply the adder
99 qc += adder_subcircuit
100 # Measure the output register in the computational basis
101 for j in range(n):
102 qc.measure(b[j], ans[j])
103 qc.measure(cout[0], ans[n])
104
105 ###############################################################
106 # Set up the API and execute the program.
107 ###############################################################
108 try:
109 qp.set_api(Qconfig.APItoken, Qconfig.config["url"])
110 except:
111 offline = True
112 print("""WARNING: There's no connection with IBMQuantumExperience servers.
113 cannot test I/O intensive tasks, will only test CPU intensive tasks
114 running the jobs in the local simulator""")
115
116 qobjs = []
117 # Create online (so I/O bound) jobs if we have connection or local (so CPU bound)
118 # jobs otherwise
119 if not offline:
120 print("Creating %d online jobs..." % NUM_JOBS)
121 for _ in range(0, NUM_JOBS):
122 qobjs.append(qp.compile(["rippleadd"], backend=online_backend,
123 coupling_map=None, shots=1024))
124
125 print("Creating %d local jobs..." % NUM_JOBS)
126 # Create CPU intensive jobs
127 for _ in range(0, NUM_JOBS):
128 qobjs.append(qp.compile(["rippleadd"], backend=local_backend,
129 coupling_map=None, shots=1024))
130
131 end = False
132 def print_results_callback(results, error=None):
133 """This function will be called once all jobs have finished."""
134 if error != None:
135 print("There was an error executing the circuits!!: Error = {}".format(error))
136 return
137
138 for result in results:
139 print("result: {}".format(result))
140 try:
141 print(result.get_counts("rippleadd"))
142 except Exception as ex:
143 print("ERROR: {}".format(ex))
144
145 print("============")
146 global end
147 end = True
148
149 print("Running jobs asynchronously....")
150 # This call is asynchronous, it won't block!
151 qp.run_batch_async(qobjs, callback=print_results_callback)
152
153 # This will concurrently run while the jobs are being processed.
154 for i in range(0, 100):
155     print("Waiting for results...")
156 time.sleep(0.5)
157 if end:
158 break
159
160 print("Running jobs synchronously...")
161 results = qp.run_batch(qobjs)
162 for result in results:
163 print("result: {}".format(result))
164 try:
165 print(result.get_counts("rippleadd"))
166 except Exception as ex:
167 print("ERROR: {}".format(ex))
168
169 print("============")
170
171 print("Done")
172
[end of examples/python/rippleadd-async.py]
[start of qiskit/_quantumjob.py]
1 # -*- coding: utf-8 -*-
2 # pylint: disable=missing-param-doc,missing-type-doc
3 #
4 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 # =============================================================================
18
19 """Quantum Job class"""
20 import random
21 import string
22 import qiskit.backends as backends
23 from qiskit.unroll import Unroller, DagUnroller, JsonBackend
24 from qiskit.dagcircuit import DAGCircuit
25 from qiskit import QuantumCircuit
26 from qiskit.qasm import Qasm
27
28
29 class QuantumJob():
30 """Creates a quantum circuit job
31 """
32
33 # TODO We need to create more tests for checking all possible inputs.
34 # TODO Make this interface clearer -- circuits could be many things!
35 def __init__(self, circuits, backend='local_qasm_simulator',
36 circuit_config=None, seed=None,
37 resources=None,
38 shots=1024, names=None,
39 do_compile=False, preformatted=False):
40 """
41 Args:
42 circuits (QuantumCircuit|DagCircuit | list(QuantumCircuit|DagCircuit)):
43 QuantumCircuit|DagCircuit or list of QuantumCircuit|DagCircuit.
44 If preformatted=True, this is a raw qobj.
45 backend (str): The backend to run the circuit on.
46 circuit_config (dict): Circuit configuration.
47             seed (int): The initial seed the simulators use.
48 resources (dict): Resource requirements of job.
49 shots (int): the number of shots
50 names (str or list(str)): names/ids for circuits
51 do_compile (boolean): compile flag.
52 preformatted (bool): the objects in circuits are already compiled
53 and formatted (qasm for online, json for local). If true the
54 parameters "names" and "circuit_config" must also be defined
55                 and have the same length as "circuits".
56 """
57 resources = resources or {'max_credits': 10, 'wait': 5, 'timeout': 120}
58 if isinstance(circuits, list):
59 self.circuits = circuits
60 else:
61 self.circuits = [circuits]
62 if names is None:
63 self.names = []
64 for _ in range(len(self.circuits)):
65 self.names.append(self._generate_job_id(length=10))
66 elif isinstance(names, list):
67 self.names = names
68 else:
69 self.names = [names]
70
71 self.timeout = resources['timeout']
72 self.wait = resources['wait']
73 # check whether circuits have already been compiled
74 # and formatted for backend.
75 if preformatted:
76 # circuits is actually a qobj...validate (not ideal but convenient)
77 self.qobj = circuits
78 else:
79 self.qobj = self._create_qobj(circuits, circuit_config, backend,
80 seed, resources, shots, do_compile)
81 self.backend = self.qobj['config']['backend']
82 self.resources = resources
83 self.seed = seed
84 self.result = None
85
86 def _create_qobj(self, circuits, circuit_config, backend, seed,
87 resources, shots, do_compile):
88 # local and remote backends currently need different
89         # compiled circuit formats
90 formatted_circuits = []
91 if do_compile:
92 for circuit in circuits:
93 formatted_circuits.append(None)
94 else:
95 if backend in backends.local_backends():
96 for circuit in self.circuits:
97 basis = ['u1', 'u2', 'u3', 'cx', 'id']
98 unroller = Unroller
99 # TODO: No instanceof here! Refactor this class
100 if isinstance(circuit, DAGCircuit):
101 unroller = DagUnroller
102 elif isinstance(circuit, QuantumCircuit):
103 # TODO: We should remove this code path (it's redundant and slow)
104 circuit = Qasm(data=circuit.qasm()).parse()
105 unroller_instance = unroller(circuit, JsonBackend(basis))
106 compiled_circuit = unroller_instance.execute()
107 formatted_circuits.append(compiled_circuit)
108
109 else:
110 for circuit in self.circuits:
111 formatted_circuits.append(circuit.qasm(qeflag=True))
112
113 # create circuit component of qobj
114 circuit_records = []
115 if circuit_config is None:
116 config = {'coupling_map': None,
117 'basis_gates': 'u1,u2,u3,cx,id',
118 'layout': None,
119 'seed': seed}
120 circuit_config = [config] * len(self.circuits)
121
122 for circuit, fcircuit, name, config in zip(self.circuits,
123 formatted_circuits,
124 self.names,
125 circuit_config):
126 record = {
127 'name': name,
128 'compiled_circuit': None if do_compile else fcircuit,
129 'compiled_circuit_qasm': None if do_compile else fcircuit,
130 'circuit': circuit,
131 'config': config
132 }
133 circuit_records.append(record)
134
135 return {'id': self._generate_job_id(length=10),
136 'config': {
137 'max_credits': resources['max_credits'],
138 'shots': shots,
139 'backend': backend
140 },
141 'circuits': circuit_records}
142
143 def _generate_job_id(self, length=10):
144 return ''.join([random.choice(
145 string.ascii_letters + string.digits) for i in range(length)])
146
[end of qiskit/_quantumjob.py]
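For orientation, here is a brief, hypothetical usage sketch of the `QuantumJob` class listed above. The circuit construction mirrors the README example; the import path and the printed config are assumptions based on the code at this commit, not something taken from the repository files.

```python
# Hypothetical sketch (not part of the repository): build a small circuit and
# wrap it in a QuantumJob, which formats it into a qobj via _create_qobj.
import qiskit
from qiskit._quantumjob import QuantumJob  # assumed import path for the module above

qr = qiskit.QuantumRegister("qr", 2)
cr = qiskit.ClassicalRegister("cr", 2)
qc = qiskit.QuantumCircuit(qr, cr)
qc.h(qr[0])
qc.cx(qr[0], qr[1])
qc.measure(qr, cr)

# With do_compile=True the unrolling is deferred, so _create_qobj only records
# the circuit plus a per-circuit config (coupling_map, basis_gates, layout, seed).
job = QuantumJob(qc, backend="local_qasm_simulator", shots=1024, do_compile=True)
print(job.qobj["config"])  # expected: {'max_credits': 10, 'shots': 1024, 'backend': 'local_qasm_simulator'}
```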
[start of qiskit/extensions/quantum_initializer/_initializer.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017 IBM RESEARCH. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # =============================================================================
17 """
18 Initialize qubit registers to desired arbitrary state.
19 """
20
21 import math
22 import numpy
23 import scipy
24
25 from qiskit import CompositeGate
26 from qiskit import Gate
27 from qiskit import QISKitError
28 from qiskit import QuantumCircuit
29 from qiskit.extensions.standard.cx import CnotGate
30 from qiskit.extensions.standard.ry import RYGate
31 from qiskit.extensions.standard.rz import RZGate
32
33 _EPS = 1e-10 # global variable used to chop very small numbers to zero
34
35
36 class InitializeGate(CompositeGate):
37 """Complex amplitude initialization.
38
39 Class that implements the (complex amplitude) initialization of some
40 flexible collection of qubit registers (assuming the qubits are in the
41 zero state).
42
43 Implements a recursive initialization algorithm including optimizations
44 from "Synthesis of Quantum Logic Circuits" Shende, Bullock, Markov
45 https://arxiv.org/abs/quant-ph/0406176v5
46
47 Additionally implements some extra optimizations: remove zero rotations and
48     double cnots.
49
50 It inherits from CompositeGate in the same way that the Fredkin (cswap)
51 gate does. Therefore self.data is the list of gates (in order) that must
52 be applied to implement this meta-gate.
53
54 param = list of complex amplitudes
55 arg = list of qubits
56 circ = QuantumCircuit or CompositeGate containing this gate
57 """
58 def __init__(self, param, arg, circ=None):
59 """Create new initialize composite gate."""
60 num_qubits = math.log2(len(param))
61
62 # Check if param is a power of 2
63 if num_qubits == 0 or not num_qubits.is_integer():
64 raise QISKitError("Desired vector not a positive power of 2.")
65
66 self.num_qubits = int(num_qubits)
67
68 # Check if number of desired qubits agrees with available qubits
69 if len(arg) != self.num_qubits:
70             raise QISKitError("Number of complex amplitudes does not correspond "
71 "to the number of qubits.")
72
73 # Check if probabilities (amplitudes squared) sum to 1
74 if not math.isclose(sum(numpy.absolute(param) ** 2), 1.0,
75 abs_tol=_EPS):
76 raise QISKitError("Sum of amplitudes-squared does not equal one.")
77
78 super().__init__("init", param, arg, circ)
79
80 # call to generate the circuit that takes the desired vector to zero
81 self.gates_to_uncompute()
82 # remove zero rotations and double cnots
83 self.optimize_gates()
84 # invert the circuit to create the desired vector from zero (assuming
85 # the qubits are in the zero state)
86 self.inverse()
87
88 def nth_qubit_from_least_sig_qubit(self, nth):
89 """
90 Return the qubit that is nth away from the least significant qubit
91 (LSB), so n=0 corresponds to the LSB.
92 """
93 # if LSB is first (as is the case with the IBM QE) and significance is
94 # in order:
95 return self.arg[nth]
96 # if MSB is first: return self.arg[self.num_qubits - 1 - n]
97 # equivalent to self.arg[-(n+1)]
98 # to generalize any mapping could be placed here or even taken from
99 # the user
100
101 def reapply(self, circ):
102 """Reapply this gate to the corresponding qubits in circ."""
103 self._modifiers(circ.initialize(self.name, self.param, self.arg))
104
105 def gates_to_uncompute(self):
106 """
107 Call to populate the self.data list with gates that takes the
108 desired vector to zero.
109 """
110 # kick start the peeling loop
111 remaining_param = self.param
112
113 for i in range(self.num_qubits):
114 # work out which rotations must be done to disentangle the LSB
115 # qubit (we peel away one qubit at a time)
116 (remaining_param,
117 thetas,
118 phis) = InitializeGate._rotations_to_disentangle(remaining_param)
119
120 # perform the required rotations to decouple the LSB qubit (so that
121 # it can be "factored" out, leaving a
122 # shorter amplitude vector to peel away)
123 self._attach(self._multiplex(RZGate, i, phis))
124 self._attach(self._multiplex(RYGate, i, thetas))
125
126 @staticmethod
127 def _rotations_to_disentangle(local_param):
128 """
129 Static internal method to work out Ry and Rz rotation angles used
130 to disentangle the LSB qubit.
131 These rotations make up the block diagonal matrix U (i.e. multiplexor)
132 that disentangles the LSB.
133
134 [[Ry(theta_1).Rz(phi_1) 0 . . 0],
135 [0 Ry(theta_2).Rz(phi_2) . 0],
136 .
137 .
138 0 0 Ry(theta_2^n).Rz(phi_2^n)]]
139 """
140 remaining_vector = []
141 thetas = []
142 phis = []
143
144 param_len = len(local_param)
145
146 for i in range(param_len // 2):
147 # Ry and Rz rotations to move bloch vector from 0 to "imaginary"
148 # qubit
149 # (imagine a qubit state signified by the amplitudes at index 2*i
150             # (imagine a qubit state signified by the amplitudes at indices 2*i
151             # and 2*i + 1, corresponding to the select qubits of the
152 (remains,
153 add_theta,
154 add_phi) = InitializeGate._bloch_angles(
155 local_param[2*i: 2*(i + 1)])
156
157 remaining_vector.append(remains)
158
159 # rotations for all imaginary qubits of the full vector
160 # to move from where it is to zero, hence the negative sign
161 thetas.append(-add_theta)
162 phis.append(-add_phi)
163
164 return remaining_vector, thetas, phis
165
166 @staticmethod
167 def _bloch_angles(pair_of_complex):
168 """
169 Static internal method to work out rotation to create the passed in
170 qubit from the zero vector.
171 """
172 [a_complex, b_complex] = pair_of_complex
173 # Force a and b to be complex, as otherwise numpy.angle might fail.
174 a_complex = complex(a_complex)
175 b_complex = complex(b_complex)
176 mag_a = numpy.absolute(a_complex)
177 final_r = float(numpy.sqrt(mag_a ** 2 + numpy.absolute(b_complex) ** 2))
178 if final_r < _EPS:
179 theta = 0
180 phi = 0
181 final_r = 0
182 final_t = 0
183 else:
184 theta = float(2 * numpy.arccos(mag_a / final_r))
185 a_arg = numpy.angle(a_complex)
186 b_arg = numpy.angle(b_complex)
187 final_t = a_arg + b_arg
188 phi = b_arg - a_arg
189
190 return final_r * numpy.exp(1.J * final_t/2), theta, phi
191
192 def _multiplex(self, bottom_gate, bottom_qubit_index, list_of_angles):
193 """
194 Internal recursive method to create gates to perform rotations on the
195 imaginary qubits: works by rotating LSB (and hence ALL imaginary
196 qubits) by combo angle and then flipping sign (by flipping the bit,
197 hence moving the complex amplitudes) of half the imaginary qubits
198 (CNOT) followed by another combo angle on LSB, therefore executing
199 conditional (on MSB) rotations, thereby disentangling LSB.
200 """
201 list_len = len(list_of_angles)
202 target_qubit = self.nth_qubit_from_least_sig_qubit(bottom_qubit_index)
203
204 # Case of no multiplexing = base case for recursion
205 if list_len == 1:
206 return bottom_gate(list_of_angles[0], target_qubit)
207
208 local_num_qubits = int(math.log2(list_len)) + 1
209 control_qubit = self.nth_qubit_from_least_sig_qubit(
210 local_num_qubits - 1 + bottom_qubit_index)
211
212 # calc angle weights, assuming recursion (that is the lower-level
213         # requested angles have been correctly implemented by recursion)
214 angle_weight = scipy.kron([[0.5, 0.5], [0.5, -0.5]],
215 numpy.identity(2 ** (local_num_qubits - 2)))
216
217 # calc the combo angles
218 list_of_angles = (angle_weight * numpy.matrix(
219 list_of_angles).transpose()).reshape(-1).tolist()[0]
220
221 combine_composite_gates = CompositeGate(
222 "multiplex" + local_num_qubits.__str__(), [], self.arg)
223
224 # recursive step on half the angles fulfilling the above assumption
225 combine_composite_gates._attach(
226 self._multiplex(bottom_gate, bottom_qubit_index,
227 list_of_angles[0:(list_len // 2)]))
228
229 # combine_composite_gates.cx(control_qubit,target_qubit) -> does not
230         # work as expected because it checks the circuit
231 # so attach CNOT as follows, thereby flipping the LSB qubit
232 combine_composite_gates._attach(CnotGate(control_qubit, target_qubit))
233
234 # implement extra efficiency from the paper of cancelling adjacent
235 # CNOTs (by leaving out last CNOT and reversing (NOT inverting) the
236 # second lower-level multiplex)
237 sub_gate = self._multiplex(
238 bottom_gate, bottom_qubit_index, list_of_angles[(list_len // 2):])
239 if isinstance(sub_gate, CompositeGate):
240 combine_composite_gates._attach(sub_gate.reverse())
241 else:
242 combine_composite_gates._attach(sub_gate)
243
244 # outer multiplex keeps final CNOT, because no adjacent CNOT to cancel
245 # with
246 if self.num_qubits == local_num_qubits + bottom_qubit_index:
247 combine_composite_gates._attach(CnotGate(control_qubit,
248 target_qubit))
249
250 return combine_composite_gates
251
252 @staticmethod
253 def chop_num(numb):
254 """
255 Set very small numbers (as defined by global variable _EPS) to zero.
256 """
257 return 0 if abs(numb) < _EPS else numb
258
259
260 # ###############################################################
261 # Add needed functionality to other classes (it feels
262 # weird following the QISKit convention of adding functionality to other
263 # classes like this ;),
264 # TODO: multiple inheritance might be better?)
265
266
267 def reverse(self):
268 """
269 Reverse (recursively) the sub-gates of this CompositeGate. Note this does
270 not invert the gates!
271 """
272 new_data = []
273 for gate in reversed(self.data):
274 if isinstance(gate, CompositeGate):
275 new_data.append(gate.reverse())
276 else:
277 new_data.append(gate)
278 self.data = new_data
279
280 # not just a high-level reverse:
281 # self.data = [gate for gate in reversed(self.data)]
282
283 return self
284
285
286 QuantumCircuit.reverse = reverse
287 CompositeGate.reverse = reverse
288
289
290 def optimize_gates(self):
291 """Remove Zero rotations and Double CNOTS."""
292 self.remove_zero_rotations()
293 while self.remove_double_cnots_once():
294 pass
295
296
297 QuantumCircuit.optimize_gates = optimize_gates
298 CompositeGate.optimize_gates = optimize_gates
299
300
301 def remove_zero_rotations(self):
302 """
303 Remove Zero Rotations by looking (recursively) at rotation gates at the
304 leaf ends.
305 """
306 # Removed at least one zero rotation.
307 zero_rotation_removed = False
308 new_data = []
309 for gate in self.data:
310 if isinstance(gate, CompositeGate):
311 zero_rotation_removed |= gate.remove_zero_rotations()
312 if gate.data:
313 new_data.append(gate)
314 else:
315 if ((not isinstance(gate, Gate)) or
316 (not (gate.name == "rz" or gate.name == "ry" or
317 gate.name == "rx") or
318 (InitializeGate.chop_num(gate.param[0]) != 0))):
319 new_data.append(gate)
320 else:
321 zero_rotation_removed = True
322
323 self.data = new_data
324
325 return zero_rotation_removed
326
327
328 QuantumCircuit.remove_zero_rotations = remove_zero_rotations
329 CompositeGate.remove_zero_rotations = remove_zero_rotations
330
331
332 def number_atomic_gates(self):
333 """Count the number of leaf gates. """
334 num = 0
335 for gate in self.data:
336 if isinstance(gate, CompositeGate):
337 num += gate.number_atomic_gates()
338 else:
339 if isinstance(gate, Gate):
340 num += 1
341 return num
342
343
344 QuantumCircuit.number_atomic_gates = number_atomic_gates
345 CompositeGate.number_atomic_gates = number_atomic_gates
346
347
348 def remove_double_cnots_once(self):
349 """
350     Remove Double CNOTs, noting that gates may be neighbours across
351 Composite Gate boundaries.
352 """
353 num_high_level_gates = len(self.data)
354
355 if num_high_level_gates == 0:
356 return False
357 else:
358 if num_high_level_gates == 1 and isinstance(self.data[0],
359 CompositeGate):
360 return self.data[0].remove_double_cnots_once()
361
362 # Removed at least one double cnot.
363 double_cnot_removed = False
364
365 # last gate might be composite
366 if isinstance(self.data[num_high_level_gates - 1], CompositeGate):
367 double_cnot_removed = \
368 double_cnot_removed or\
369 self.data[num_high_level_gates - 1].remove_double_cnots_once()
370
371 # don't start with last gate, using reversed so that can del on the go
372 for i in reversed(range(num_high_level_gates - 1)):
373 if isinstance(self.data[i], CompositeGate):
374 double_cnot_removed =\
375 double_cnot_removed \
376 or self.data[i].remove_double_cnots_once()
377 left_gate_host = self.data[i].last_atomic_gate_host()
378 left_gate_index = -1
379 # TODO: consider adding if semantics needed:
380 # to remove empty composite gates
381 # if left_gate_host == None: del self.data[i]
382 else:
383 left_gate_host = self.data
384 left_gate_index = i
385
386 if ((left_gate_host is not None) and
387 left_gate_host[left_gate_index].name == "cx"):
388 if isinstance(self.data[i + 1], CompositeGate):
389 right_gate_host = self.data[i + 1].first_atomic_gate_host()
390 right_gate_index = 0
391 else:
392 right_gate_host = self.data
393 right_gate_index = i + 1
394
395 if (right_gate_host is not None) \
396 and right_gate_host[right_gate_index].name == "cx" \
397 and (left_gate_host[left_gate_index].arg ==
398 right_gate_host[right_gate_index].arg):
399 del right_gate_host[right_gate_index]
400 del left_gate_host[left_gate_index]
401 double_cnot_removed = True
402
403 return double_cnot_removed
404
405
406 QuantumCircuit.remove_double_cnots_once = remove_double_cnots_once
407 CompositeGate.remove_double_cnots_once = remove_double_cnots_once
408
409
410 def first_atomic_gate_host(self):
411 """Return the host list of the leaf gate on the left edge."""
412 if self.data:
413 if isinstance(self.data[0], CompositeGate):
414 return self.data[0].first_atomic_gate_host()
415 return self.data
416
417 return None
418
419
420 QuantumCircuit.first_atomic_gate_host = first_atomic_gate_host
421 CompositeGate.first_atomic_gate_host = first_atomic_gate_host
422
423
424 def last_atomic_gate_host(self):
425 """Return the host list of the leaf gate on the right edge."""
426 if self.data:
427 if isinstance(self.data[-1], CompositeGate):
428 return self.data[-1].last_atomic_gate_host()
429 return self.data
430
431 return None
432
433
434 QuantumCircuit.last_atomic_gate_host = last_atomic_gate_host
435 CompositeGate.last_atomic_gate_host = last_atomic_gate_host
436
437
438 def initialize(self, params, qubits):
439 """Apply initialize to circuit."""
440 self._check_dups(qubits)
441 for i in qubits:
442 self._check_qubit(i)
443 # TODO: make initialize an Instruction, and insert reset
444 # TODO: avoid explicit reset if compiler determines a |0> state
445
446 return self._attach(InitializeGate(params, qubits, self))
447
448
449 QuantumCircuit.initialize = initialize
450 CompositeGate.initialize = initialize
451
[end of qiskit/extensions/quantum_initializer/_initializer.py]
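As a rough sketch of how the `initialize()` extension registered above might be used (a hypothetical example, not taken from the repository; in particular, the module import that triggers the monkey-patching is an assumption):

```python
# Hypothetical sketch: prepare the Bell state (|00> + |11>)/sqrt(2) using the
# initialize() method attached to QuantumCircuit by the module listed above.
import math
import qiskit
import qiskit.extensions.quantum_initializer._initializer  # assumed: importing registers initialize()

qr = qiskit.QuantumRegister("qr", 2)
cr = qiskit.ClassicalRegister("cr", 2)
qc = qiskit.QuantumCircuit(qr, cr)

# The amplitude list must have length 2**n with squared magnitudes summing to 1;
# InitializeGate then synthesizes the Rz/Ry multiplexors and CNOTs internally.
amplitudes = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
qc.initialize(amplitudes, [qr[0], qr[1]])
```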
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
e1fc5a0b8654a1a54e75451cb9c710ed0c6c6f1a
|
make test fails locally
<!--- Provide a general summary of the issue in the Title above -->
`make test` fails locally, possibly due to the configuration of hubs in the Qconfig.
This has not been an issue with travis and submitted PRs.
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
`python -m unittest -v test.python.test_api_ibmq.TestApiHub`
|
It seems the original tests fell out of sync with the current hubs implementation, and might fail under specific circumstances - I'll look into it!
|
2018-03-30T13:39:57Z
|
<patch>
diff --git a/qiskit/_quantumprogram.py b/qiskit/_quantumprogram.py
--- a/qiskit/_quantumprogram.py
+++ b/qiskit/_quantumprogram.py
@@ -702,10 +702,14 @@ def set_api(self, token, url, hub=None, group=None, project=None,
try:
config_dict = {
'url': url,
- 'hub': hub,
- 'group': group,
- 'project': project
}
+ # Only append hub/group/project if they are different than None.
+ if all([hub, group, project]):
+ config_dict.update({
+ 'hub': hub,
+ 'group': group,
+ 'project': project
+ })
if proxies:
config_dict['proxies'] = proxies
self.__api = IBMQuantumExperience(token, config_dict, verify)
</patch>
|
[]
|
[]
| |||
pantsbuild__pants-13813
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Split nailgun server and client inputs
Triage in #13779 showed that materializing the inputs for nailgunned JVM processes represented up to a 300ms constant factor.
But because those inputs currently include _both_ the `use_nailgun: Digest` and the `input_files: Digest` fields (because the `use_nailgun` digest must be a subset of the `input_digest`: [see](https://github.com/pantsbuild/pants/blob/845111479a4b26fbfac6d6dbe8b8f85deff34438/src/rust/engine/process_execution/src/lib.rs#L254-L260)), a lot of that work is completely redundant. On top of that, because we have materialized more (unnecessary) stuff into the sandbox, we have more to clean up afterwards.
This hits source analysis processes for Java/Scala particularly hard: in some cases, it represented ~500ms of total overhead on ~150ms processes.
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 We release to [PyPI](https://pypi.org/pypi)
20 [](https://pypi.org/pypi/pantsbuild.pants)
21 [](https://pypi.org/pypi/pantsbuild.pants)
22
23 # Requirements
24
25 To run Pants, you need:
26
27 * Linux or macOS.
28 * Python 3.7+ discoverable on your `PATH`.
29 * A C compiler, system headers and Python headers (to compile native Python modules).
30 * Internet access (so that Pants can fully bootstrap itself).
31
[end of README.md]
[start of src/python/pants/backend/python/util_rules/pex.py]
1 # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import dataclasses
7 import json
8 import logging
9 import os
10 import shlex
11 from dataclasses import dataclass
12 from pathlib import PurePath
13 from textwrap import dedent
14 from typing import Iterable, Iterator, Mapping
15
16 import packaging.specifiers
17 import packaging.version
18 from pkg_resources import Requirement
19
20 from pants.backend.python.pip_requirement import PipRequirement
21 from pants.backend.python.subsystems.repos import PythonRepos
22 from pants.backend.python.subsystems.setup import InvalidLockfileBehavior, PythonSetup
23 from pants.backend.python.target_types import MainSpecification, PexLayout
24 from pants.backend.python.target_types import PexPlatformsField as PythonPlatformsField
25 from pants.backend.python.target_types import PythonRequirementsField
26 from pants.backend.python.util_rules import pex_cli
27 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints
28 from pants.backend.python.util_rules.lockfile_metadata import (
29 InvalidLockfileError,
30 InvalidLockfileReason,
31 LockfileMetadata,
32 )
33 from pants.backend.python.util_rules.pex_cli import PexCliProcess, PexPEX
34 from pants.backend.python.util_rules.pex_environment import (
35 CompletePexEnvironment,
36 PexEnvironment,
37 PexRuntimeEnvironment,
38 PythonExecutable,
39 )
40 from pants.engine.collection import Collection, DeduplicatedCollection
41 from pants.engine.engine_aware import EngineAwareParameter
42 from pants.engine.fs import (
43 EMPTY_DIGEST,
44 AddPrefix,
45 CreateDigest,
46 Digest,
47 DigestContents,
48 FileContent,
49 GlobMatchErrorBehavior,
50 MergeDigests,
51 PathGlobs,
52 )
53 from pants.engine.platform import Platform
54 from pants.engine.process import BashBinary, Process, ProcessCacheScope, ProcessResult
55 from pants.engine.rules import Get, collect_rules, rule
56 from pants.util.docutil import doc_url
57 from pants.util.frozendict import FrozenDict
58 from pants.util.logging import LogLevel
59 from pants.util.meta import frozen_after_init
60 from pants.util.ordered_set import FrozenOrderedSet
61 from pants.util.strutil import pluralize
62
63 logger = logging.getLogger(__name__)
64
65
66 @dataclass(frozen=True)
67 class Lockfile:
68 file_path: str
69 file_path_description_of_origin: str
70 lockfile_hex_digest: str | None
71 req_strings: FrozenOrderedSet[str] | None
72
73
74 @dataclass(frozen=True)
75 class LockfileContent:
76 file_content: FileContent
77 lockfile_hex_digest: str | None
78 req_strings: FrozenOrderedSet[str] | None
79
80
81 @dataclass(frozen=True)
82 class _ToolLockfileMixin:
83 options_scope_name: str
84 uses_source_plugins: bool
85 uses_project_interpreter_constraints: bool
86
87
88 @dataclass(frozen=True)
89 class ToolDefaultLockfile(LockfileContent, _ToolLockfileMixin):
90 pass
91
92
93 @dataclass(frozen=True)
94 class ToolCustomLockfile(Lockfile, _ToolLockfileMixin):
95 pass
96
97
98 @frozen_after_init
99 @dataclass(unsafe_hash=True)
100 class PexRequirements:
101 req_strings: FrozenOrderedSet[str]
102 apply_constraints: bool
103 # TODO: The constraints.txt resolve for `resolve_all_constraints` will be removed as part of
104 # #12314, but in the meantime, it "acts like" a lockfile, but isn't actually typed as a Lockfile
105 # because the constraints are modified in memory first. This flag marks a `PexRequirements`
106 # resolve as being a request for the entire constraints file.
107 is_all_constraints_resolve: bool
108 repository_pex: Pex | None
109
110 def __init__(
111 self,
112 req_strings: Iterable[str] = (),
113 *,
114 apply_constraints: bool = False,
115 is_all_constraints_resolve: bool = False,
116 repository_pex: Pex | None = None,
117 ) -> None:
118 """
119 :param req_strings: The requirement strings to resolve.
120 :param apply_constraints: Whether to apply any configured requirement_constraints while
121 building this PEX.
122 :param repository_pex: An optional PEX to resolve requirements from via the Pex CLI
123 `--pex-repository` option.
124 """
125 self.req_strings = FrozenOrderedSet(sorted(req_strings))
126 self.apply_constraints = apply_constraints
127 self.is_all_constraints_resolve = is_all_constraints_resolve
128 self.repository_pex = repository_pex
129
130 @classmethod
131 def create_from_requirement_fields(
132 cls,
133 fields: Iterable[PythonRequirementsField],
134 *,
135 additional_requirements: Iterable[str] = (),
136 apply_constraints: bool = True,
137 ) -> PexRequirements:
138 field_requirements = {str(python_req) for field in fields for python_req in field.value}
139 return PexRequirements(
140 {*field_requirements, *additional_requirements}, apply_constraints=apply_constraints
141 )
142
143 def __bool__(self) -> bool:
144 return bool(self.req_strings)
145
146
147 class PexPlatforms(DeduplicatedCollection[str]):
148 sort_input = True
149
150 @classmethod
151 def create_from_platforms_field(cls, field: PythonPlatformsField) -> PexPlatforms:
152 return cls(field.value or ())
153
154 def generate_pex_arg_list(self) -> list[str]:
155 args = []
156 for platform in self:
157 args.extend(["--platform", platform])
158 return args
159
160
161 @frozen_after_init
162 @dataclass(unsafe_hash=True)
163 class PexRequest(EngineAwareParameter):
164 output_filename: str
165 internal_only: bool
166 layout: PexLayout | None
167 python: PythonExecutable | None
168 requirements: PexRequirements | Lockfile | LockfileContent
169 interpreter_constraints: InterpreterConstraints
170 platforms: PexPlatforms
171 sources: Digest | None
172 additional_inputs: Digest | None
173 main: MainSpecification | None
174 additional_args: tuple[str, ...]
175 pex_path: tuple[Pex, ...]
176 description: str | None = dataclasses.field(compare=False)
177
178 def __init__(
179 self,
180 *,
181 output_filename: str,
182 internal_only: bool,
183 layout: PexLayout | None = None,
184 python: PythonExecutable | None = None,
185 requirements: PexRequirements | Lockfile | LockfileContent = PexRequirements(),
186 interpreter_constraints=InterpreterConstraints(),
187 platforms=PexPlatforms(),
188 sources: Digest | None = None,
189 additional_inputs: Digest | None = None,
190 main: MainSpecification | None = None,
191 additional_args: Iterable[str] = (),
192 pex_path: Iterable[Pex] = (),
193 description: str | None = None,
194 ) -> None:
195 """A request to create a PEX from its inputs.
196
197 :param output_filename: The name of the built Pex file, which typically should end in
198 `.pex`.
199 :param internal_only: Whether we ever materialize the Pex and distribute it directly
200 to end users, such as with the `binary` goal. Typically, instead, the user never
201 directly uses the Pex, e.g. with `lint` and `test`. If True, we will use a Pex setting
202 that results in faster build time but compatibility with fewer interpreters at runtime.
203 :param layout: The filesystem layout to create the PEX with.
204 :param python: A particular PythonExecutable to use, which must match any relevant
205 interpreter_constraints.
206 :param requirements: The requirements that the PEX should contain.
207 :param interpreter_constraints: Any constraints on which Python versions may be used.
208 :param platforms: Which platforms should be supported. Setting this value will cause
209 interpreter constraints to not be used because platforms already constrain the valid
210 Python versions, e.g. by including `cp36m` in the platform string.
211 :param sources: Any source files that should be included in the Pex.
212 :param additional_inputs: Any inputs that are not source files and should not be included
213 directly in the Pex, but should be present in the environment when building the Pex.
214 :param main: The main for the built Pex, equivalent to Pex's `-e` or '-c' flag. If
215 left off, the Pex will open up as a REPL.
216 :param additional_args: Any additional Pex flags.
217 :param pex_path: Pex files to add to the PEX_PATH.
218 :param description: A human-readable description to render in the dynamic UI when building
219 the Pex.
220 """
221 self.output_filename = output_filename
222 self.internal_only = internal_only
223 self.layout = layout
224 self.python = python
225 self.requirements = requirements
226 self.interpreter_constraints = interpreter_constraints
227 self.platforms = platforms
228 self.sources = sources
229 self.additional_inputs = additional_inputs
230 self.main = main
231 self.additional_args = tuple(additional_args)
232 self.pex_path = tuple(pex_path)
233 self.description = description
234 self.__post_init__()
235
236 def __post_init__(self):
237 if self.internal_only and self.platforms:
238 raise ValueError(
239 "Internal only PEXes can only constrain interpreters with interpreter_constraints."
240 f"Given platform constraints {self.platforms} for internal only pex request: "
241 f"{self}."
242 )
243 if self.internal_only and self.layout:
244 raise ValueError(
245 "Internal only PEXes have their layout controlled centrally. Given layout "
246 f"{self.layout} for internal only pex request: {self}."
247 )
248 if self.python and self.platforms:
249 raise ValueError(
250 "Only one of platforms or a specific interpreter may be set. Got "
251 f"both {self.platforms} and {self.python}."
252 )
253 if self.python and self.interpreter_constraints:
254 raise ValueError(
255 "Only one of interpreter_constraints or a specific interpreter may be set. Got "
256 f"both {self.interpreter_constraints} and {self.python}."
257 )
258
259 def debug_hint(self) -> str:
260 return self.output_filename
261
262
263 @dataclass(frozen=True)
264 class OptionalPexRequest:
265 maybe_pex_request: PexRequest | None
266
267
268 @dataclass(frozen=True)
269 class Pex:
270 """Wrapper for a digest containing a pex file created with some filename."""
271
272 digest: Digest
273 name: str
274 python: PythonExecutable | None
275
276
277 @dataclass(frozen=True)
278 class OptionalPex:
279 maybe_pex: Pex | None
280
281
282 @rule(desc="Find Python interpreter for constraints", level=LogLevel.DEBUG)
283 async def find_interpreter(
284 interpreter_constraints: InterpreterConstraints, pex_runtime_env: PexRuntimeEnvironment
285 ) -> PythonExecutable:
286 formatted_constraints = " OR ".join(str(constraint) for constraint in interpreter_constraints)
287 result = await Get(
288 ProcessResult,
289 PexCliProcess(
290 description=f"Find interpreter for constraints: {formatted_constraints}",
291 # Here, we run the Pex CLI with no requirements, which just selects an interpreter.
292 # Normally, this would start an isolated repl. By passing `--`, we force the repl to
293 # instead act as an interpreter (the selected one) and tell us about itself. The upshot
294 # is we run the Pex interpreter selection logic unperturbed but without resolving any
295 # distributions.
296 argv=(
297 *interpreter_constraints.generate_pex_arg_list(),
298 "--",
299 "-c",
300 # N.B.: The following code snippet must be compatible with Python 2.7 and
301 # Python 3.5+.
302 #
303 # When hashing, we pick 8192 for efficiency of reads and fingerprint updates
304 # (writes) since it's a common OS buffer size and an even multiple of the
305 # hash block size.
306 dedent(
307 """\
308 import hashlib, os, sys
309
310 python = os.path.realpath(sys.executable)
311 print(python)
312
313 hasher = hashlib.sha256()
314 with open(python, "rb") as fp:
315 for chunk in iter(lambda: fp.read(8192), b""):
316 hasher.update(chunk)
317 print(hasher.hexdigest())
318 """
319 ),
320 ),
321 level=LogLevel.DEBUG,
322 # NB: We want interpreter discovery to re-run fairly frequently
323 # (PER_RESTART_SUCCESSFUL), but not on every run of Pants (NEVER, which is effectively
324 # per-Session). See #10769 for a solution that is less of a tradeoff.
325 cache_scope=ProcessCacheScope.PER_RESTART_SUCCESSFUL,
326 ),
327 )
328 path, fingerprint = result.stdout.decode().strip().splitlines()
329
330 if pex_runtime_env.verbosity > 0:
331 log_output = result.stderr.decode()
332 if log_output:
333 logger.info("%s", log_output)
334
335 return PythonExecutable(path=path, fingerprint=fingerprint)
336
337
338 @dataclass(frozen=True)
339 class BuildPexResult:
340 result: ProcessResult
341 pex_filename: str
342 digest: Digest
343 python: PythonExecutable | None
344
345 def create_pex(self) -> Pex:
346 return Pex(digest=self.digest, name=self.pex_filename, python=self.python)
347
348
349 @rule(level=LogLevel.DEBUG)
350 async def build_pex(
351 request: PexRequest,
352 python_setup: PythonSetup,
353 python_repos: PythonRepos,
354 platform: Platform,
355 pex_runtime_env: PexRuntimeEnvironment,
356 ) -> BuildPexResult:
357 """Returns a PEX with the given settings."""
358 argv = ["--output-file", request.output_filename, *request.additional_args]
359
360 repository_pex = (
361 request.requirements.repository_pex
362 if isinstance(request.requirements, PexRequirements)
363 else None
364 )
365 if repository_pex:
366 argv.extend(["--pex-repository", repository_pex.name])
367 else:
368 # NB: In setting `--no-pypi`, we rely on the default value of `--python-repos-indexes`
369 # including PyPI, which will override `--no-pypi` and result in using PyPI in the default
370 # case. Why set `--no-pypi`, then? We need to do this so that
371 # `--python-repos-repos=['custom_url']` will only point to that index and not include PyPI.
372 argv.extend(
373 [
374 "--no-pypi",
375 *(f"--index={index}" for index in python_repos.indexes),
376 *(f"--repo={repo}" for repo in python_repos.repos),
377 "--resolver-version",
378 "pip-2020-resolver",
379 ]
380 )
381
382 python: PythonExecutable | None = None
383
384 # NB: If `--platform` is specified, this signals that the PEX should not be built locally.
385 # `--interpreter-constraint` only makes sense in the context of building locally. These two
386 # flags are mutually exclusive. See https://github.com/pantsbuild/pex/issues/957.
387 if request.platforms:
388 # TODO(#9560): consider validating that these platforms are valid with the interpreter
389 # constraints.
390 argv.extend(request.platforms.generate_pex_arg_list())
391 elif request.python:
392 python = request.python
393 elif request.internal_only:
394 # NB: If it's an internal_only PEX, we do our own lookup of the interpreter based on the
395 # interpreter constraints, and then will run the PEX with that specific interpreter. We
396 # will have already validated that there were no platforms.
397 python = await Get(
398 PythonExecutable, InterpreterConstraints, request.interpreter_constraints
399 )
400 else:
401 # `--interpreter-constraint` options are mutually exclusive with the `--python` option,
402 # so we only specify them if we have not already located a concrete Python.
403 argv.extend(request.interpreter_constraints.generate_pex_arg_list())
404
405 if python:
406 argv.extend(["--python", python.path])
407
408 argv.append("--no-emit-warnings")
409
410 if python_setup.resolver_jobs:
411 argv.extend(["--jobs", str(python_setup.resolver_jobs)])
412
413 if python_setup.manylinux:
414 argv.extend(["--manylinux", python_setup.manylinux])
415 else:
416 argv.append("--no-manylinux")
417
418 if request.main is not None:
419 argv.extend(request.main.iter_pex_args())
420
421 # TODO(John Sirois): Right now any request requirements will shadow corresponding pex path
422 # requirements, which could lead to problems. Support shading python binaries.
423 # See: https://github.com/pantsbuild/pants/issues/9206
424 if request.pex_path:
425 argv.extend(["--pex-path", ":".join(pex.name for pex in request.pex_path)])
426
427 source_dir_name = "source_files"
428 argv.append(f"--sources-directory={source_dir_name}")
429 sources_digest_as_subdir = await Get(
430 Digest, AddPrefix(request.sources or EMPTY_DIGEST, source_dir_name)
431 )
432
433 additional_inputs_digest = request.additional_inputs or EMPTY_DIGEST
434 repository_pex_digest = repository_pex.digest if repository_pex else EMPTY_DIGEST
435 constraint_file_digest = EMPTY_DIGEST
436 requirements_file_digest = EMPTY_DIGEST
437
438 # TODO(#12314): Capture the resolve name for multiple user lockfiles.
439 resolve_name = (
440 request.requirements.options_scope_name
441 if isinstance(request.requirements, (ToolDefaultLockfile, ToolCustomLockfile))
442 else None
443 )
444
445 if isinstance(request.requirements, Lockfile):
446 is_monolithic_resolve = True
447 argv.extend(["--requirement", request.requirements.file_path])
448 argv.append("--no-transitive")
449 globs = PathGlobs(
450 [request.requirements.file_path],
451 glob_match_error_behavior=GlobMatchErrorBehavior.error,
452 description_of_origin=request.requirements.file_path_description_of_origin,
453 )
454 if python_setup.invalid_lockfile_behavior in {
455 InvalidLockfileBehavior.warn,
456 InvalidLockfileBehavior.error,
457 }:
458 requirements_file_digest_contents = await Get(DigestContents, PathGlobs, globs)
459 metadata = LockfileMetadata.from_lockfile(
460 requirements_file_digest_contents[0].content,
461 request.requirements.file_path,
462 resolve_name,
463 )
464 _validate_metadata(metadata, request, request.requirements, python_setup)
465 requirements_file_digest = await Get(Digest, PathGlobs, globs)
466
467 elif isinstance(request.requirements, LockfileContent):
468 is_monolithic_resolve = True
469 file_content = request.requirements.file_content
470 argv.extend(["--requirement", file_content.path])
471 argv.append("--no-transitive")
472 if python_setup.invalid_lockfile_behavior in {
473 InvalidLockfileBehavior.warn,
474 InvalidLockfileBehavior.error,
475 }:
476 metadata = LockfileMetadata.from_lockfile(
477 file_content.content, resolve_name=resolve_name
478 )
479 _validate_metadata(metadata, request, request.requirements, python_setup)
480 requirements_file_digest = await Get(Digest, CreateDigest([file_content]))
481 else:
482 assert isinstance(request.requirements, PexRequirements)
483 is_monolithic_resolve = request.requirements.is_all_constraints_resolve
484
485 if (
486 request.requirements.apply_constraints
487 and python_setup.requirement_constraints is not None
488 ):
489 argv.extend(["--constraints", python_setup.requirement_constraints])
490 constraint_file_digest = await Get(
491 Digest,
492 PathGlobs(
493 [python_setup.requirement_constraints],
494 glob_match_error_behavior=GlobMatchErrorBehavior.error,
495 description_of_origin="the option `[python].requirement_constraints`",
496 ),
497 )
498
499 argv.extend(request.requirements.req_strings)
500
501 merged_digest = await Get(
502 Digest,
503 MergeDigests(
504 (
505 sources_digest_as_subdir,
506 additional_inputs_digest,
507 constraint_file_digest,
508 requirements_file_digest,
509 repository_pex_digest,
510 *(pex.digest for pex in request.pex_path),
511 )
512 ),
513 )
514
515 if request.internal_only or is_monolithic_resolve:
516 # This is a much friendlier layout for the CAS than the default zipapp.
517 layout = PexLayout.PACKED
518 else:
519 layout = request.layout or PexLayout.ZIPAPP
520 argv.extend(["--layout", layout.value])
521
522 output_files: Iterable[str] | None = None
523 output_directories: Iterable[str] | None = None
524 if PexLayout.ZIPAPP == layout:
525 output_files = [request.output_filename]
526 else:
527 output_directories = [request.output_filename]
528
529 process = await Get(
530 Process,
531 PexCliProcess(
532 python=python,
533 argv=argv,
534 additional_input_digest=merged_digest,
535 description=_build_pex_description(request),
536 output_files=output_files,
537 output_directories=output_directories,
538 ),
539 )
540
541 process = dataclasses.replace(process, platform=platform)
542
543 # NB: Building a Pex is platform dependent, so in order to get a PEX that we can use locally
544 # without cross-building, we specify that our PEX command should be run on the current local
545 # platform.
546 result = await Get(ProcessResult, Process, process)
547
548 if pex_runtime_env.verbosity > 0:
549 log_output = result.stderr.decode()
550 if log_output:
551 logger.info("%s", log_output)
552
553 digest = (
554 await Get(
555 Digest, MergeDigests((result.output_digest, *(pex.digest for pex in request.pex_path)))
556 )
557 if request.pex_path
558 else result.output_digest
559 )
560
561 return BuildPexResult(
562 result=result, pex_filename=request.output_filename, digest=digest, python=python
563 )
564
565
566 def _validate_metadata(
567 metadata: LockfileMetadata,
568 request: PexRequest,
569 requirements: (Lockfile | LockfileContent),
570 python_setup: PythonSetup,
571 ) -> None:
572
573 # TODO(#12314): Improve this message: `Requirement.parse` raises `InvalidRequirement`, which
574 # doesn't have mypy stubs at the moment; it may be hard to catch this exception and typecheck.
575 req_strings = (
576 {PipRequirement.parse(i) for i in requirements.req_strings}
577 if requirements.req_strings is not None
578 else None
579 )
580
581 validation = metadata.is_valid_for(
582 requirements.lockfile_hex_digest,
583 request.interpreter_constraints,
584 python_setup.interpreter_universe,
585 req_strings,
586 )
587
588 if validation:
589 return
590
591 def tool_message_parts(
592 requirements: (ToolCustomLockfile | ToolDefaultLockfile),
593 ) -> Iterator[str]:
594
595 tool_name = requirements.options_scope_name
596 uses_source_plugins = requirements.uses_source_plugins
597 uses_project_interpreter_constraints = requirements.uses_project_interpreter_constraints
598
599 yield "You are using "
600
601 if isinstance(requirements, ToolDefaultLockfile):
602 yield "the `<default>` lockfile provided by Pants "
603 elif isinstance(requirements, ToolCustomLockfile):
604 yield f"the lockfile at {requirements.file_path} "
605
606 yield (
607 f"to install the tool `{tool_name}`, but it is not compatible with your "
608 "configuration: "
609 "\n\n"
610 )
611
612 if any(
613 i == InvalidLockfileReason.INVALIDATION_DIGEST_MISMATCH
614 or i == InvalidLockfileReason.REQUIREMENTS_MISMATCH
615 for i in validation.failure_reasons
616 ):
617 # TODO(12314): Add message showing _which_ requirements diverged.
618
619 yield (
620 "- You have set different requirements than those used to generate the lockfile. "
621 f"You can fix this by not setting `[{tool_name}].version`, "
622 )
623
624 if uses_source_plugins:
625 yield f"`[{tool_name}].source_plugins`, "
626
627 yield (
628 f"and `[{tool_name}].extra_requirements`, or by using a new "
629 "custom lockfile."
630 "\n"
631 )
632
633 if InvalidLockfileReason.INTERPRETER_CONSTRAINTS_MISMATCH in validation.failure_reasons:
634 yield (
635 f"- You have set interpreter constraints (`{request.interpreter_constraints}`) that "
636 "are not compatible with those used to generate the lockfile "
637 f"(`{metadata.valid_for_interpreter_constraints}`). "
638 )
639 if not uses_project_interpreter_constraints:
640 yield (
641 f"You can fix this by not setting `[{tool_name}].interpreter_constraints`, "
642 "or by using a new custom lockfile. "
643 )
644 else:
645 yield (
646 f"`{tool_name}` determines its interpreter constraints based on your code's own "
647 "constraints. To fix this error, you can either change your code's constraints "
648 f"(see {doc_url('python-interpreter-compatibility')}) or by generating a new "
649 "custom lockfile. "
650 )
651 yield "\n"
652
653 yield "\n"
654
655 if not isinstance(requirements, ToolCustomLockfile):
656 yield (
657 "To generate a custom lockfile based on your current configuration, set "
658 f"`[{tool_name}].lockfile` to where you want to create the lockfile, then run "
659 f"`./pants generate-lockfiles --resolve={tool_name}`. "
660 )
661 else:
662 yield (
663 "To regenerate your lockfile based on your current configuration, run "
664 f"`./pants generate-lockfiles --resolve={tool_name}`. "
665 )
666
667 message: str
668 if isinstance(requirements, (ToolCustomLockfile, ToolDefaultLockfile)):
669 message = "".join(tool_message_parts(requirements)).strip()
670 else:
671 # TODO(12314): Improve this message
672 raise InvalidLockfileError(f"{validation.failure_reasons}")
673
674 if python_setup.invalid_lockfile_behavior == InvalidLockfileBehavior.error:
675 raise ValueError(message)
676 else:
677 logger.warning("%s", message)
678
679
680 def _build_pex_description(request: PexRequest) -> str:
681 if request.description:
682 return request.description
683
684 if isinstance(request.requirements, Lockfile):
685 desc_suffix = f"from {request.requirements.file_path}"
686 elif isinstance(request.requirements, LockfileContent):
687 desc_suffix = f"from {request.requirements.file_content.path}"
688 else:
689 if not request.requirements.req_strings:
690 return f"Building {request.output_filename}"
691 elif request.requirements.repository_pex:
692 repo_pex = request.requirements.repository_pex.name
693 return (
694 f"Extracting {pluralize(len(request.requirements.req_strings), 'requirement')} "
695 f"to build {request.output_filename} from {repo_pex}: "
696 f"{', '.join(request.requirements.req_strings)}"
697 )
698 else:
699 desc_suffix = (
700 f"with {pluralize(len(request.requirements.req_strings), 'requirement')}: "
701 f"{', '.join(request.requirements.req_strings)}"
702 )
703 return f"Building {request.output_filename} {desc_suffix}"
704
705
706 @rule
707 async def create_pex(request: PexRequest) -> Pex:
708 result = await Get(BuildPexResult, PexRequest, request)
709 return result.create_pex()
710
711
712 @rule
713 async def create_optional_pex(request: OptionalPexRequest) -> OptionalPex:
714 if request.maybe_pex_request is None:
715 return OptionalPex(None)
716 result = await Get(Pex, PexRequest, request.maybe_pex_request)
717 return OptionalPex(result)
718
719
720 @dataclass(frozen=True)
721 class Script:
722 path: PurePath
723
724 @property
725 def argv0(self) -> str:
726 return f"./{self.path}" if self.path.parent == PurePath() else str(self.path)
727
728
729 @dataclass(frozen=True)
730 class VenvScript:
731 script: Script
732 content: FileContent
733
734
735 @dataclass(frozen=True)
736 class VenvScriptWriter:
737 complete_pex_env: CompletePexEnvironment
738 pex: Pex
739 venv_dir: PurePath
740
741 @classmethod
742 def create(
743 cls, pex_environment: PexEnvironment, pex: Pex, venv_rel_dir: PurePath
744 ) -> VenvScriptWriter:
745 # N.B.: We don't know the working directory that will be used in any given
746 # invocation of the venv scripts; so we deal with working_directory inside the scripts
747 # themselves by absolutifying all relevant paths at runtime.
748 complete_pex_env = pex_environment.in_sandbox(working_directory=None)
749 venv_dir = complete_pex_env.pex_root / venv_rel_dir
750 return cls(complete_pex_env=complete_pex_env, pex=pex, venv_dir=venv_dir)
751
752 def _create_venv_script(
753 self,
754 bash: BashBinary,
755 *,
756 script_path: PurePath,
757 venv_executable: PurePath,
758 ) -> VenvScript:
759 env_vars = (
760 f"{name}={shlex.quote(value)}"
761 for name, value in self.complete_pex_env.environment_dict(
762 python_configured=True
763 ).items()
764 )
765
766 target_venv_executable = shlex.quote(str(venv_executable))
767 venv_dir = shlex.quote(str(self.venv_dir))
768 execute_pex_args = " ".join(
769 f"$(ensure_absolute {shlex.quote(arg)})"
770 for arg in self.complete_pex_env.create_argv(self.pex.name, python=self.pex.python)
771 )
772
773 script = dedent(
774 f"""\
775 #!{bash.path}
776 set -euo pipefail
777
778 # N.B.: We convert all sandbox root relative paths to absolute paths so this script
779 # works when run with a cwd set elsewhere.
780
781 # N.B.: This relies on BASH_SOURCE which has been available since bash-3.0, released in
782 # 2004. In turn, our use of BASH_SOURCE relies on the fact that this script is executed
783 # by the engine via its absolute path.
784 ABS_SANDBOX_ROOT="${{BASH_SOURCE%/*}}"
785
786 function ensure_absolute() {{
787 local value0="$1"
788 shift
789 if [ "${{value0:0:1}}" == "/" ]; then
790 echo "${{value0}}" "$@"
791 else
792 echo "${{ABS_SANDBOX_ROOT}}/${{value0}}" "$@"
793 fi
794 }}
795
796 export {" ".join(env_vars)}
797 export PEX_ROOT="$(ensure_absolute ${{PEX_ROOT}})"
798
799 execute_pex_args="{execute_pex_args}"
800 target_venv_executable="$(ensure_absolute {target_venv_executable})"
801 venv_dir="$(ensure_absolute {venv_dir})"
802
803 # Let PEX_TOOLS invocations pass through to the original PEX file since venvs don't come
804 # with tools support.
805 if [ -n "${{PEX_TOOLS:-}}" ]; then
806 exec ${{execute_pex_args}} "$@"
807 fi
808
809 # If the seeded venv has been removed from the PEX_ROOT, we re-seed from the original
810 # `--venv` mode PEX file.
811 if [ ! -e "${{target_venv_executable}}" ]; then
812 rm -rf "${{venv_dir}}" || true
813 PEX_INTERPRETER=1 ${{execute_pex_args}} -c ''
814 fi
815
816 exec "${{target_venv_executable}}" "$@"
817 """
818 )
819 return VenvScript(
820 script=Script(script_path),
821 content=FileContent(path=str(script_path), content=script.encode(), is_executable=True),
822 )
823
824 def exe(self, bash: BashBinary) -> VenvScript:
825 """Writes a safe shim for the venv's executable `pex` script."""
826 script_path = PurePath(f"{self.pex.name}_pex_shim.sh")
827 return self._create_venv_script(
828 bash, script_path=script_path, venv_executable=self.venv_dir / "pex"
829 )
830
831 def bin(self, bash: BashBinary, name: str) -> VenvScript:
832 """Writes a safe shim for an executable or script in the venv's `bin` directory."""
833 script_path = PurePath(f"{self.pex.name}_bin_{name}_shim.sh")
834 return self._create_venv_script(
835 bash,
836 script_path=script_path,
837 venv_executable=self.venv_dir / "bin" / name,
838 )
839
840 def python(self, bash: BashBinary) -> VenvScript:
841 """Writes a safe shim for the venv's python binary."""
842 return self.bin(bash, "python")
843
844
845 @dataclass(frozen=True)
846 class VenvPex:
847 digest: Digest
848 pex_filename: str
849 pex: Script
850 python: Script
851 bin: FrozenDict[str, Script]
852 venv_rel_dir: str
853
854
855 @frozen_after_init
856 @dataclass(unsafe_hash=True)
857 class VenvPexRequest:
858 pex_request: PexRequest
859 bin_names: tuple[str, ...] = ()
860
861 def __init__(self, pex_request: PexRequest, bin_names: Iterable[str] = ()) -> None:
862 """A request for a PEX that runs in a venv and optionally exposes select venv `bin` scripts.
863
864 :param pex_request: The details of the desired PEX.
865 :param bin_names: The names of venv `bin` scripts to expose for execution.
866 """
867 self.pex_request = pex_request
868 self.bin_names = tuple(bin_names)
869
870
871 @rule
872 def wrap_venv_prex_request(pex_request: PexRequest) -> VenvPexRequest:
873 # Allow creating a VenvPex from a plain PexRequest when no extra bin scripts need to be exposed.
874 return VenvPexRequest(pex_request)
875
876
877 @rule
878 async def create_venv_pex(
879 request: VenvPexRequest, bash: BashBinary, pex_environment: PexEnvironment
880 ) -> VenvPex:
881 # VenvPex is motivated by improving performance of Python tools by eliminating traditional PEX
882 # file startup overhead.
883 #
884 # To achieve the minimal overhead (on the order of 1ms) we discard:
885 # 1. Using Pex default mode:
886 # Although this does reduce initial tool execution overhead, it still leaves a minimum
887 # O(100ms) of overhead per subsequent tool invocation. Fundamentally, Pex still needs to
888 # execute its `sys.path` isolation bootstrap code in this case.
889 # 2. Using the Pex `venv` tool:
890 # The idea here would be to create a tool venv as a Process output and then use the tool
891 # venv as an input digest for all tool invocations. This was tried and netted ~500ms of
892 # overhead over raw venv use.
893 #
894 # Instead we use Pex's `--venv` mode. In this mode you can run the Pex file and it will create a
895 # venv on the fly in the PEX_ROOT as needed. Since the PEX_ROOT is a named_cache, we avoid the
896 # digest materialization overhead present in 2 above. Since the venv is naturally isolated we
897 # avoid the `sys.path` isolation overhead of Pex itself present in 1 above.
898 #
899 # This does leave O(50ms) of overhead though for the PEX bootstrap code to detect an already
900 # created venv in the PEX_ROOT and re-exec into it. To eliminate this overhead we execute the
901 # `pex` venv script in the PEX_ROOT directly. This is not robust on its own though, since the
902 # named caches store might be pruned at any time. To guard against that case we introduce a shim
903 # bash script that checks to see if the `pex` venv script exists in the PEX_ROOT and re-creates
904 # the PEX_ROOT venv if not. Using the shim script to run Python tools gets us down to the ~1ms
905 # of overhead we currently enjoy.
906
907 pex_request = request.pex_request
908 seeded_venv_request = dataclasses.replace(
909 pex_request, additional_args=pex_request.additional_args + ("--venv", "--seed", "verbose")
910 )
911 venv_pex_result = await Get(BuildPexResult, PexRequest, seeded_venv_request)
912 # Pex verbose --seed mode outputs the absolute path of the PEX executable as well as the
913 # absolute path of the PEX_ROOT. In the --venv case this is the `pex` script in the venv root
914 # directory.
915 seed_info = json.loads(venv_pex_result.result.stdout.decode())
916 abs_pex_root = PurePath(seed_info["pex_root"])
917 abs_pex_path = PurePath(seed_info["pex"])
918 venv_rel_dir = abs_pex_path.relative_to(abs_pex_root).parent
919
920 venv_script_writer = VenvScriptWriter.create(
921 pex_environment=pex_environment, pex=venv_pex_result.create_pex(), venv_rel_dir=venv_rel_dir
922 )
923 pex = venv_script_writer.exe(bash)
924 python = venv_script_writer.python(bash)
925 scripts = {bin_name: venv_script_writer.bin(bash, bin_name) for bin_name in request.bin_names}
926 scripts_digest = await Get(
927 Digest,
928 CreateDigest(
929 (
930 pex.content,
931 python.content,
932 *(venv_script.content for venv_script in scripts.values()),
933 )
934 ),
935 )
936 input_digest = await Get(Digest, MergeDigests((venv_script_writer.pex.digest, scripts_digest)))
937
938 return VenvPex(
939 digest=input_digest,
940 pex_filename=venv_pex_result.pex_filename,
941 pex=pex.script,
942 python=python.script,
943 bin=FrozenDict((bin_name, venv_script.script) for bin_name, venv_script in scripts.items()),
944 venv_rel_dir=venv_rel_dir.as_posix(),
945 )
946
947
948 @frozen_after_init
949 @dataclass(unsafe_hash=True)
950 class PexProcess:
951 pex: Pex
952 argv: tuple[str, ...]
953 description: str = dataclasses.field(compare=False)
954 level: LogLevel
955 input_digest: Digest | None
956 working_directory: str | None
957 extra_env: FrozenDict[str, str] | None
958 output_files: tuple[str, ...] | None
959 output_directories: tuple[str, ...] | None
960 timeout_seconds: int | None
961 execution_slot_variable: str | None
962 cache_scope: ProcessCacheScope
963
964 def __init__(
965 self,
966 pex: Pex,
967 *,
968 description: str,
969 argv: Iterable[str] = (),
970 level: LogLevel = LogLevel.INFO,
971 input_digest: Digest | None = None,
972 working_directory: str | None = None,
973 extra_env: Mapping[str, str] | None = None,
974 output_files: Iterable[str] | None = None,
975 output_directories: Iterable[str] | None = None,
976 timeout_seconds: int | None = None,
977 execution_slot_variable: str | None = None,
978 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
979 ) -> None:
980 self.pex = pex
981 self.argv = tuple(argv)
982 self.description = description
983 self.level = level
984 self.input_digest = input_digest
985 self.working_directory = working_directory
986 self.extra_env = FrozenDict(extra_env) if extra_env else None
987 self.output_files = tuple(output_files) if output_files else None
988 self.output_directories = tuple(output_directories) if output_directories else None
989 self.timeout_seconds = timeout_seconds
990 self.execution_slot_variable = execution_slot_variable
991 self.cache_scope = cache_scope
992
993
994 @rule
995 async def setup_pex_process(request: PexProcess, pex_environment: PexEnvironment) -> Process:
996 pex = request.pex
997 complete_pex_env = pex_environment.in_sandbox(working_directory=request.working_directory)
998 argv = complete_pex_env.create_argv(pex.name, *request.argv, python=pex.python)
999 env = {
1000 **complete_pex_env.environment_dict(python_configured=pex.python is not None),
1001 **(request.extra_env or {}),
1002 }
1003 input_digest = (
1004 await Get(Digest, MergeDigests((pex.digest, request.input_digest)))
1005 if request.input_digest
1006 else pex.digest
1007 )
1008 return Process(
1009 argv,
1010 description=request.description,
1011 level=request.level,
1012 input_digest=input_digest,
1013 working_directory=request.working_directory,
1014 env=env,
1015 output_files=request.output_files,
1016 output_directories=request.output_directories,
1017 append_only_caches=complete_pex_env.append_only_caches,
1018 timeout_seconds=request.timeout_seconds,
1019 execution_slot_variable=request.execution_slot_variable,
1020 cache_scope=request.cache_scope,
1021 )
1022
1023
1024 @frozen_after_init
1025 @dataclass(unsafe_hash=True)
1026 class VenvPexProcess:
1027 venv_pex: VenvPex
1028 argv: tuple[str, ...]
1029 description: str = dataclasses.field(compare=False)
1030 level: LogLevel
1031 input_digest: Digest | None
1032 working_directory: str | None
1033 extra_env: FrozenDict[str, str] | None
1034 output_files: tuple[str, ...] | None
1035 output_directories: tuple[str, ...] | None
1036 timeout_seconds: int | None
1037 execution_slot_variable: str | None
1038 cache_scope: ProcessCacheScope
1039
1040 def __init__(
1041 self,
1042 venv_pex: VenvPex,
1043 *,
1044 description: str,
1045 argv: Iterable[str] = (),
1046 level: LogLevel = LogLevel.INFO,
1047 input_digest: Digest | None = None,
1048 working_directory: str | None = None,
1049 extra_env: Mapping[str, str] | None = None,
1050 output_files: Iterable[str] | None = None,
1051 output_directories: Iterable[str] | None = None,
1052 timeout_seconds: int | None = None,
1053 execution_slot_variable: str | None = None,
1054 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
1055 ) -> None:
1056 self.venv_pex = venv_pex
1057 self.argv = tuple(argv)
1058 self.description = description
1059 self.level = level
1060 self.input_digest = input_digest
1061 self.working_directory = working_directory
1062 self.extra_env = FrozenDict(extra_env) if extra_env else None
1063 self.output_files = tuple(output_files) if output_files else None
1064 self.output_directories = tuple(output_directories) if output_directories else None
1065 self.timeout_seconds = timeout_seconds
1066 self.execution_slot_variable = execution_slot_variable
1067 self.cache_scope = cache_scope
1068
1069
1070 @rule
1071 async def setup_venv_pex_process(
1072 request: VenvPexProcess, pex_environment: PexEnvironment
1073 ) -> Process:
1074 venv_pex = request.venv_pex
1075 pex_bin = (
1076 os.path.relpath(venv_pex.pex.argv0, request.working_directory)
1077 if request.working_directory
1078 else venv_pex.pex.argv0
1079 )
1080 argv = (pex_bin, *request.argv)
1081 input_digest = (
1082 await Get(Digest, MergeDigests((venv_pex.digest, request.input_digest)))
1083 if request.input_digest
1084 else venv_pex.digest
1085 )
1086 return Process(
1087 argv=argv,
1088 description=request.description,
1089 level=request.level,
1090 input_digest=input_digest,
1091 working_directory=request.working_directory,
1092 env=request.extra_env,
1093 output_files=request.output_files,
1094 output_directories=request.output_directories,
1095 append_only_caches=pex_environment.in_sandbox(
1096 working_directory=request.working_directory
1097 ).append_only_caches,
1098 timeout_seconds=request.timeout_seconds,
1099 execution_slot_variable=request.execution_slot_variable,
1100 cache_scope=request.cache_scope,
1101 )
1102
1103
1104 @dataclass(frozen=True)
1105 class PexDistributionInfo:
1106 """Information about an individual distribution in a PEX file, as reported by `PEX_TOOLS=1
1107 repository info -v`."""
1108
1109 project_name: str
1110 version: packaging.version.Version
1111 requires_python: packaging.specifiers.SpecifierSet | None
1112 # Note: These are parsed from metadata written by the pex tool, and are always
1113 # a valid pkg_resources.Requirement.
1114 requires_dists: tuple[Requirement, ...]
1115
1116
1117 class PexResolveInfo(Collection[PexDistributionInfo]):
1118 """Information about all distributions resolved in a PEX file, as reported by `PEX_TOOLS=1
1119 repository info -v`."""
1120
1121
1122 def parse_repository_info(repository_info: str) -> PexResolveInfo:
1123 def iter_dist_info() -> Iterator[PexDistributionInfo]:
1124 for line in repository_info.splitlines():
1125 info = json.loads(line)
1126 requires_python = info["requires_python"]
1127 yield PexDistributionInfo(
1128 project_name=info["project_name"],
1129 version=packaging.version.Version(info["version"]),
1130 requires_python=(
1131 packaging.specifiers.SpecifierSet(requires_python)
1132 if requires_python is not None
1133 else None
1134 ),
1135 requires_dists=tuple(
1136 Requirement.parse(req) for req in sorted(info["requires_dists"])
1137 ),
1138 )
1139
1140 return PexResolveInfo(sorted(iter_dist_info(), key=lambda dist: dist.project_name))
1141
1142
1143 @rule
1144 async def determine_venv_pex_resolve_info(venv_pex: VenvPex) -> PexResolveInfo:
1145 process_result = await Get(
1146 ProcessResult,
1147 VenvPexProcess(
1148 venv_pex,
1149 argv=["repository", "info", "-v"],
1150 extra_env={"PEX_TOOLS": "1"},
1151 input_digest=venv_pex.digest,
1152 description=f"Determine distributions found in {venv_pex.pex_filename}",
1153 level=LogLevel.DEBUG,
1154 ),
1155 )
1156 return parse_repository_info(process_result.stdout.decode())
1157
1158
1159 @rule
1160 async def determine_pex_resolve_info(pex_pex: PexPEX, pex: Pex) -> PexResolveInfo:
1161 process_result = await Get(
1162 ProcessResult,
1163 PexProcess(
1164 pex=Pex(digest=pex_pex.digest, name=pex_pex.exe, python=pex.python),
1165 argv=[pex.name, "repository", "info", "-v"],
1166 input_digest=pex.digest,
1167 extra_env={"PEX_MODULE": "pex.tools"},
1168 description=f"Determine distributions found in {pex.name}",
1169 level=LogLevel.DEBUG,
1170 ),
1171 )
1172 return parse_repository_info(process_result.stdout.decode())
1173
1174
1175 def rules():
1176 return [*collect_rules(), *pex_cli.rules()]
1177
[end of src/python/pants/backend/python/util_rules/pex.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
f07db99830d946db57edf773de0d074306d527e1
|
Split nailgun server and client inputs
Triage in #13779 showed that materializing the inputs for nailgunned JVM processes represented up to a 300ms constant factor.
But those inputs currently include _both_ the `use_nailgun: Digest` and the `input_files: Digest` fields (since the `use_nailgun` digest must be a subset of the `input_digest`: [see](https://github.com/pantsbuild/pants/blob/845111479a4b26fbfac6d6dbe8b8f85deff34438/src/rust/engine/process_execution/src/lib.rs#L254-L260)), so a lot of that work is completely redundant. On top of that, because we materialize more (unnecessary) content into the sandbox, we also have more to clean up afterwards.
This hits source analysis processes for Java/Scala particularly hard: in some cases, it represented ~500ms of total overhead on ~150ms processes.
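A rough sketch of the intended split, using simplified stand-in `Digest`/`Process` types (assumptions for illustration only, not the real Pants engine classes): the long-lived nailgun server inputs stop being folded into the per-invocation `input_digest`. The patch below applies exactly this split to the scalac compile rule and the Java/Scala parser rules.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

# Simplified stand-ins for the engine's Digest/Process types, used only to show
# the data-flow change described above; the real change is in the patch below.


@dataclass(frozen=True)
class Digest:
    files: FrozenSet[str]


def merge(*digests: Digest) -> Digest:
    # Analogous to MergeDigests: the union of the file sets.
    return Digest(frozenset().union(*(d.files for d in digests)))


@dataclass(frozen=True)
class Process:
    argv: Tuple[str, ...]
    input_digest: Digest  # materialized into the sandbox for every invocation
    use_nailgun: Digest   # inputs for the long-lived nailgun server


tool_digest = Digest(frozenset({"jdk/", "scala-compiler.jar"}))
sources_digest = Digest(frozenset({"src/Foo.scala"}))

# Before: the tool digest is also merged into input_digest, so every invocation
# re-merges and re-materializes the large, unchanging tool files.
before = Process(
    argv=("scalac", "src/Foo.scala"),
    input_digest=merge(tool_digest, sources_digest),
    use_nailgun=tool_digest,
)

# After: the server-side inputs live only in use_nailgun, and input_digest
# carries just the per-invocation client inputs (the sources).
after = Process(
    argv=("scalac", "src/Foo.scala"),
    input_digest=sources_digest,
    use_nailgun=tool_digest,
)

print(sorted(before.input_digest.files))  # ['jdk/', 'scala-compiler.jar', 'src/Foo.scala']
print(sorted(after.input_digest.files))   # ['src/Foo.scala']
```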
|
2021-12-05T20:57:39Z
|
<patch>
diff --git a/src/python/pants/backend/java/compile/javac.py b/src/python/pants/backend/java/compile/javac.py
--- a/src/python/pants/backend/java/compile/javac.py
+++ b/src/python/pants/backend/java/compile/javac.py
@@ -158,7 +158,6 @@ async def compile_java_source(
(
prefixed_direct_dependency_classpath_digest,
dest_dir_digest,
- jdk_setup.digest,
*(
sources.snapshot.digest
for _, sources in component_members_and_java_source_files
diff --git a/src/python/pants/backend/java/dependency_inference/java_parser.py b/src/python/pants/backend/java/dependency_inference/java_parser.py
--- a/src/python/pants/backend/java/dependency_inference/java_parser.py
+++ b/src/python/pants/backend/java/dependency_inference/java_parser.py
@@ -111,15 +111,6 @@ async def analyze_java_source_dependencies(
)
),
)
- merged_digest = await Get(
- Digest,
- MergeDigests(
- (
- tool_digest,
- prefixed_source_files_digest,
- )
- ),
- )
analysis_output_path = "__source_analysis.json"
@@ -132,7 +123,7 @@ async def analyze_java_source_dependencies(
analysis_output_path,
source_path,
],
- input_digest=merged_digest,
+ input_digest=prefixed_source_files_digest,
output_files=(analysis_output_path,),
use_nailgun=tool_digest,
append_only_caches=jdk_setup.append_only_caches,
diff --git a/src/python/pants/backend/scala/compile/scalac.py b/src/python/pants/backend/scala/compile/scalac.py
--- a/src/python/pants/backend/scala/compile/scalac.py
+++ b/src/python/pants/backend/scala/compile/scalac.py
@@ -145,18 +145,19 @@ async def compile_scala_source(
Digest, AddPrefix(merged_transitive_dependency_classpath_entries_digest, usercp)
)
- merged_digest = await Get(
- Digest,
- MergeDigests(
- (
- prefixed_transitive_dependency_classpath_digest,
- tool_classpath.digest,
- jdk_setup.digest,
- *(
- sources.snapshot.digest
- for _, sources in component_members_and_scala_source_files
- ),
- )
+ merged_tool_digest, merged_input_digest = await MultiGet(
+ Get(Digest, MergeDigests((tool_classpath.digest, jdk_setup.digest))),
+ Get(
+ Digest,
+ MergeDigests(
+ (
+ prefixed_transitive_dependency_classpath_digest,
+ *(
+ sources.snapshot.digest
+ for _, sources in component_members_and_scala_source_files
+ ),
+ )
+ ),
),
)
@@ -183,8 +184,8 @@ async def compile_scala_source(
)
),
],
- input_digest=merged_digest,
- use_nailgun=jdk_setup.digest,
+ input_digest=merged_input_digest,
+ use_nailgun=merged_tool_digest,
output_files=(output_file,),
description=f"Compile {request.component} with scalac",
level=LogLevel.DEBUG,
diff --git a/src/python/pants/backend/scala/dependency_inference/scala_parser.py b/src/python/pants/backend/scala/dependency_inference/scala_parser.py
--- a/src/python/pants/backend/scala/dependency_inference/scala_parser.py
+++ b/src/python/pants/backend/scala/dependency_inference/scala_parser.py
@@ -256,15 +256,6 @@ async def analyze_scala_source_dependencies(
)
),
)
- merged_digest = await Get(
- Digest,
- MergeDigests(
- (
- tool_digest,
- prefixed_source_files_digest,
- )
- ),
- )
analysis_output_path = "__source_analysis.json"
@@ -277,7 +268,7 @@ async def analyze_scala_source_dependencies(
analysis_output_path,
source_path,
],
- input_digest=merged_digest,
+ input_digest=prefixed_source_files_digest,
output_files=(analysis_output_path,),
use_nailgun=tool_digest,
append_only_caches=jdk_setup.append_only_caches,
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-3464
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sorting on Timestamp broken in version 0.11
Issue: `frame.sort_index` uses `argsort` for a single sort column, but `_lexsort_indexer` for multiple sort columns;
we need to transform `datetime64[ns]` values before passing them to `_lexsort_indexer`.
In [54]: a.columns
Out[54]: Index([ticker, disclosuredate, txnid], dtype=object)
In [55]: a.values
Out[55]:
array([[A, 2010-03-09 00:00:00, 11110508],
[A, 2010-03-12 00:00:00, 11121853],
[A, 2011-02-15 00:00:00, 12488915],
[A, 2011-03-08 00:00:00, 12563380],
[A, 2011-04-22 00:00:00, 12653015],
[A, 2013-01-28 00:00:00, 15244694]], dtype=object)
In [56]: a.sort(columns=['disclosuredate']).values
Out[56]:
array([[A, 2010-03-09 00:00:00, 11110508],
[A, 2010-03-12 00:00:00, 11121853],
[A, 2011-04-22 00:00:00, 12653015],
[A, 2013-01-28 00:00:00, 15244694],
[A, 2011-03-08 00:00:00, 12563380],
[A, 2011-02-15 00:00:00, 12488915]], dtype=object)
In [57]: pd.__version__
Out[57]: '0.11.0'
In [58]: import time
In [59]: a['epoch'] = a['disclosuredate'].map(lambda x: time.mktime(x.timetuple()))
In [60]: a.sort(['epoch']).values
Out[60]:
array([[A, 2010-03-09 00:00:00, 11110508, 1268110800.0],
[A, 2010-03-12 00:00:00, 11121853, 1268370000.0],
[A, 2011-02-15 00:00:00, 12488915, 1297746000.0],
[A, 2011-03-08 00:00:00, 12563380, 1299560400.0],
[A, 2011-04-22 00:00:00, 12653015, 1303444800.0],
[A, 2013-01-28 00:00:00, 15244694, 1359349200.0]], dtype=object)
</issue>
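Before the code listing, here is a minimal NumPy sketch of the kind of transform the issue describes (this is only an illustration, not the actual pandas fix): viewing `datetime64[ns]` values as `int64` before building the lexsort key makes the multi-column path order values the same way `argsort` does.

```python
import numpy as np

# Hypothetical, self-contained illustration: not pandas' own fix, just the kind of
# transform the issue asks for before handing datetime columns to _lexsort_indexer.

dates = np.array(
    ["2011-02-15", "2010-03-09", "2013-01-28", "2011-03-08"],
    dtype="datetime64[ns]",
)

# Single-column path: argsort on the datetime64 values orders chronologically.
by_argsort = dates[np.argsort(dates)]

# Multi-column path: view the datetimes as int64 (nanoseconds since the epoch)
# before building the lexsort key, so the ordering agrees with argsort.
int_keys = dates.view("i8")
by_lexsort = dates[np.lexsort((int_keys,))]

assert (by_argsort == by_lexsort).all()
print(by_lexsort)
```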
<code>
[start of README.rst]
1 =============================================
2 pandas: powerful Python data analysis toolkit
3 =============================================
4
5 .. image:: https://travis-ci.org/pydata/pandas.png
6 :target: https://travis-ci.org/pydata/pandas
7
8 What is it
9 ==========
10
11 **pandas** is a Python package providing fast, flexible, and expressive data
12 structures designed to make working with "relational" or "labeled" data both
13 easy and intuitive. It aims to be the fundamental high-level building block for
14 doing practical, **real world** data analysis in Python. Additionally, it has
15 the broader goal of becoming **the most powerful and flexible open source data
16 analysis / manipulation tool available in any language**. It is already well on
17 its way toward this goal.
18
19 Main Features
20 =============
21
22 Here are just a few of the things that pandas does well:
23
24 - Easy handling of **missing data** (represented as NaN) in floating point as
25 well as non-floating point data
26 - Size mutability: columns can be **inserted and deleted** from DataFrame and
27 higher dimensional objects
28 - Automatic and explicit **data alignment**: objects can be explicitly
29 aligned to a set of labels, or the user can simply ignore the labels and
30 let `Series`, `DataFrame`, etc. automatically align the data for you in
31 computations
32 - Powerful, flexible **group by** functionality to perform
33 split-apply-combine operations on data sets, for both aggregating and
34 transforming data
35 - Make it **easy to convert** ragged, differently-indexed data in other
36 Python and NumPy data structures into DataFrame objects
37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
38 of large data sets
39 - Intuitive **merging** and **joining** data sets
40 - Flexible **reshaping** and pivoting of data sets
41 - **Hierarchical** labeling of axes (possible to have multiple labels per
42 tick)
43 - Robust IO tools for loading data from **flat files** (CSV and delimited),
44 Excel files, databases, and saving / loading data from the ultrafast **HDF5
45 format**
46 - **Time series**-specific functionality: date range generation and frequency
47 conversion, moving window statistics, moving window linear regressions,
48 date shifting and lagging, etc.
49
50 Where to get it
51 ===============
52
53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
54
55 Binary installers for the latest released version are available at the Python
56 package index::
57
58 http://pypi.python.org/pypi/pandas/
59
60 And via ``easy_install`` or ``pip``::
61
62 easy_install pandas
63 pip install pandas
64
65 Dependencies
66 ============
67
68 * `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
69 * `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher
70 * `pytz <http://pytz.sourceforge.net/>`__
71 * Needed for time zone support with ``date_range``
72
73 Highly Recommended Dependencies
74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
75 * `numexpr <http://code.google.com/p/numexpr/>`__: to accelerate some expression evaluation operations
76 also required by `PyTables`
77 * `bottleneck <http://berkeleyanalytics.com/bottleneck>`__: to accelerate certain numerical operations
78
79 Optional dependencies
80 ~~~~~~~~~~~~~~~~~~~~~
81
82 * `Cython <http://www.cython.org>`__: Only necessary to build development
83 version. Version 0.17.1 or higher.
84 * `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
85 * `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage
86 * `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting
87 * `statsmodels <http://statsmodels.sourceforge.net/>`__
88 * Needed for parts of :mod:`pandas.stats`
89 * `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
90 * openpyxl version 1.6.1 or higher, for writing .xlsx files
91 * xlrd >= 0.9.0
92 * Needed for Excel I/O
93
94
95 Installation from sources
96 =========================
97
98 To install pandas from source you need ``cython`` in addition to the normal dependencies above,
99 which can be installed from pypi::
100
101 pip install cython
102
103 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute::
104
105 python setup.py install
106
107 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__::
108
109 python setup.py develop
110
111 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically
112 (the optional ``-e`` option is for installing it in
113 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__)::
114
115 pip install -e .
116
117 On Windows, you will need to install MinGW and execute::
118
119 python setup.py build --compiler=mingw32
120 python setup.py install
121
122 See http://pandas.pydata.org/ for more information.
123
124 License
125 =======
126
127 BSD
128
129 Documentation
130 =============
131
132 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
133
134 The Sphinx documentation should provide a good starting point for learning how
135 to use the library. Expect the docs to continue to expand as time goes on.
136
137 Background
138 ==========
139
140 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
141 has been under active development since then.
142
143 Discussion and Development
144 ==========================
145
146 Since ``pandas`` development is related to a number of other scientific
147 Python projects, questions are welcome on the scipy-user mailing
148 list. Specialized discussions or design issues should take place on
149 the pystatsmodels mailing list / Google group, where
150 ``scikits.statsmodels`` and other libraries will also be discussed:
151
152 http://groups.google.com/group/pystatsmodels
153
154 .. _NumPy: http://numpy.scipy.org/
155
[end of README.rst]
[start of pandas/core/generic.py]
1 # pylint: disable=W0231,E1101
2
3 import numpy as np
4
5 from pandas.core.index import MultiIndex
6 import pandas.core.indexing as indexing
7 from pandas.core.indexing import _maybe_convert_indices
8 from pandas.tseries.index import DatetimeIndex
9 import pandas.core.common as com
10 import pandas.lib as lib
11
12
13 class PandasError(Exception):
14 pass
15
16
17 class PandasObject(object):
18
19 _AXIS_NUMBERS = {
20 'index': 0,
21 'columns': 1
22 }
23
24 _AXIS_ALIASES = {}
25 _AXIS_NAMES = dict((v, k) for k, v in _AXIS_NUMBERS.iteritems())
26
27 def save(self, path):
28 com.save(self, path)
29
30 @classmethod
31 def load(cls, path):
32 return com.load(path)
33
34 #----------------------------------------------------------------------
35 # Axis name business
36
37 def _get_axis_number(self, axis):
38 axis = self._AXIS_ALIASES.get(axis, axis)
39 if com.is_integer(axis):
40 if axis in self._AXIS_NAMES:
41 return axis
42 else:
43 try:
44 return self._AXIS_NUMBERS[axis]
45 except:
46 pass
47 raise ValueError('No axis named %s' % axis)
48
49 def _get_axis_name(self, axis):
50 axis = self._AXIS_ALIASES.get(axis, axis)
51 if isinstance(axis, basestring):
52 if axis in self._AXIS_NUMBERS:
53 return axis
54 else:
55 try:
56 return self._AXIS_NAMES[axis]
57 except:
58 pass
59 raise ValueError('No axis named %s' % axis)
60
61 def _get_axis(self, axis):
62 name = self._get_axis_name(axis)
63 return getattr(self, name)
64
65 #----------------------------------------------------------------------
66 # Indexers
67 @classmethod
68 def _create_indexer(cls, name, indexer):
69 """ create an indexer like _name in the class """
70 iname = '_%s' % name
71 setattr(cls,iname,None)
72
73 def _indexer(self):
74 if getattr(self,iname,None) is None:
75 setattr(self,iname,indexer(self, name))
76 return getattr(self,iname)
77
78 setattr(cls,name,property(_indexer))
79
80 def abs(self):
81 """
82 Return an object with absolute value taken. Only applicable to objects
83 that are all numeric
84
85 Returns
86 -------
87 abs: type of caller
88 """
89 return np.abs(self)
90
91 def get(self, key, default=None):
92 """
93 Get item from object for given key (DataFrame column, Panel slice,
94 etc.). Returns default value if not found
95
96 Parameters
97 ----------
98 key : object
99
100 Returns
101 -------
102 value : type of items contained in object
103 """
104 try:
105 return self[key]
106 except KeyError:
107 return default
108
109 def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,
110 group_keys=True):
111 """
112 Group series using mapper (dict or key function, apply given function
113 to group, return result as series) or by a series of columns
114
115 Parameters
116 ----------
117 by : mapping function / list of functions, dict, Series, or tuple /
118 list of column names.
119 Called on each element of the object index to determine the groups.
120 If a dict or Series is passed, the Series or dict VALUES will be
121 used to determine the groups
122 axis : int, default 0
123 level : int, level name, or sequence of such, default None
124 If the axis is a MultiIndex (hierarchical), group by a particular
125 level or levels
126 as_index : boolean, default True
127 For aggregated output, return object with group labels as the
128 index. Only relevant for DataFrame input. as_index=False is
129 effectively "SQL-style" grouped output
130 sort : boolean, default True
131 Sort group keys. Get better performance by turning this off
132 group_keys : boolean, default True
133 When calling apply, add group keys to index to identify pieces
134
135 Examples
136 --------
137 # DataFrame result
138 >>> data.groupby(func, axis=0).mean()
139
140 # DataFrame result
141 >>> data.groupby(['col1', 'col2'])['col3'].mean()
142
143 # DataFrame with hierarchical index
144 >>> data.groupby(['col1', 'col2']).mean()
145
146 Returns
147 -------
148 GroupBy object
149 """
150 from pandas.core.groupby import groupby
151 axis = self._get_axis_number(axis)
152 return groupby(self, by, axis=axis, level=level, as_index=as_index,
153 sort=sort, group_keys=group_keys)
154
155 def asfreq(self, freq, method=None, how=None, normalize=False):
156 """
157 Convert all TimeSeries inside to specified frequency using DateOffset
158 objects. Optionally provide fill method to pad/backfill missing values.
159
160 Parameters
161 ----------
162 freq : DateOffset object, or string
163 method : {'backfill', 'bfill', 'pad', 'ffill', None}
164 Method to use for filling holes in reindexed Series
165 pad / ffill: propagate last valid observation forward to next valid
166             backfill / bfill: use NEXT valid observation to fill gap
167 how : {'start', 'end'}, default end
168 For PeriodIndex only, see PeriodIndex.asfreq
169 normalize : bool, default False
170 Whether to reset output index to midnight
171
172 Returns
173 -------
174 converted : type of caller
175 """
176 from pandas.tseries.resample import asfreq
177 return asfreq(self, freq, method=method, how=how,
178 normalize=normalize)
179
180 def at_time(self, time, asof=False):
181 """
182 Select values at particular time of day (e.g. 9:30AM)
183
184 Parameters
185 ----------
186 time : datetime.time or string
187
188 Returns
189 -------
190 values_at_time : type of caller
191 """
192 try:
193 indexer = self.index.indexer_at_time(time, asof=asof)
194 return self.take(indexer, convert=False)
195 except AttributeError:
196 raise TypeError('Index must be DatetimeIndex')
197
198 def between_time(self, start_time, end_time, include_start=True,
199 include_end=True):
200 """
201 Select values between particular times of the day (e.g., 9:00-9:30 AM)
202
203 Parameters
204 ----------
205 start_time : datetime.time or string
206 end_time : datetime.time or string
207 include_start : boolean, default True
208 include_end : boolean, default True
209
210 Returns
211 -------
212 values_between_time : type of caller
213 """
214 try:
215 indexer = self.index.indexer_between_time(
216 start_time, end_time, include_start=include_start,
217 include_end=include_end)
218 return self.take(indexer, convert=False)
219 except AttributeError:
220 raise TypeError('Index must be DatetimeIndex')
221
222 def resample(self, rule, how=None, axis=0, fill_method=None,
223 closed=None, label=None, convention='start',
224 kind=None, loffset=None, limit=None, base=0):
225 """
226 Convenience method for frequency conversion and resampling of regular
227 time-series data.
228
229 Parameters
230 ----------
231 rule : the offset string or object representing target conversion
232 how : string, method for down- or re-sampling, default to 'mean' for
233 downsampling
234 axis : int, optional, default 0
235 fill_method : string, fill_method for upsampling, default None
236 closed : {'right', 'left'}
237 Which side of bin interval is closed
238 label : {'right', 'left'}
239 Which bin edge label to label bucket with
240 convention : {'start', 'end', 's', 'e'}
241 kind: "period"/"timestamp"
242 loffset: timedelta
243 Adjust the resampled time labels
244 limit: int, default None
245 Maximum size gap to when reindexing with fill_method
246 base : int, default 0
247 For frequencies that evenly subdivide 1 day, the "origin" of the
248 aggregated intervals. For example, for '5min' frequency, base could
249 range from 0 through 4. Defaults to 0
250 """
251 from pandas.tseries.resample import TimeGrouper
252 axis = self._get_axis_number(axis)
253 sampler = TimeGrouper(rule, label=label, closed=closed, how=how,
254 axis=axis, kind=kind, loffset=loffset,
255 fill_method=fill_method, convention=convention,
256 limit=limit, base=base)
257 return sampler.resample(self)
258
259 def first(self, offset):
260 """
261 Convenience method for subsetting initial periods of time series data
262 based on a date offset
263
264 Parameters
265 ----------
266 offset : string, DateOffset, dateutil.relativedelta
267
268 Examples
269 --------
270         ts.first('10D') -> First 10 days
271
272 Returns
273 -------
274 subset : type of caller
275 """
276 from pandas.tseries.frequencies import to_offset
277 if not isinstance(self.index, DatetimeIndex):
278 raise NotImplementedError
279
280 if len(self.index) == 0:
281 return self
282
283 offset = to_offset(offset)
284 end_date = end = self.index[0] + offset
285
286 # Tick-like, e.g. 3 weeks
287 if not offset.isAnchored() and hasattr(offset, '_inc'):
288 if end_date in self.index:
289 end = self.index.searchsorted(end_date, side='left')
290
291 return self.ix[:end]
292
293 def last(self, offset):
294 """
295 Convenience method for subsetting final periods of time series data
296 based on a date offset
297
298 Parameters
299 ----------
300 offset : string, DateOffset, dateutil.relativedelta
301
302 Examples
303 --------
304 ts.last('5M') -> Last 5 months
305
306 Returns
307 -------
308 subset : type of caller
309 """
310 from pandas.tseries.frequencies import to_offset
311 if not isinstance(self.index, DatetimeIndex):
312 raise NotImplementedError
313
314 if len(self.index) == 0:
315 return self
316
317 offset = to_offset(offset)
318
319 start_date = start = self.index[-1] - offset
320 start = self.index.searchsorted(start_date, side='right')
321 return self.ix[start:]
322
323 def select(self, crit, axis=0):
324 """
325 Return data corresponding to axis labels matching criteria
326
327 Parameters
328 ----------
329 crit : function
330 To be called on each index (label). Should return True or False
331 axis : int
332
333 Returns
334 -------
335 selection : type of caller
336 """
337 axis_name = self._get_axis_name(axis)
338 axis = self._get_axis(axis)
339
340 if len(axis) > 0:
341 new_axis = axis[np.asarray([bool(crit(label)) for label in axis])]
342 else:
343 new_axis = axis
344
345 return self.reindex(**{axis_name: new_axis})
346
347 def drop(self, labels, axis=0, level=None):
348 """
349 Return new object with labels in requested axis removed
350
351 Parameters
352 ----------
353 labels : array-like
354 axis : int
355 level : int or name, default None
356 For MultiIndex
357
358 Returns
359 -------
360 dropped : type of caller
361 """
362 axis_name = self._get_axis_name(axis)
363 axis, axis_ = self._get_axis(axis), axis
364
365 if axis.is_unique:
366 if level is not None:
367 if not isinstance(axis, MultiIndex):
368 raise AssertionError('axis must be a MultiIndex')
369 new_axis = axis.drop(labels, level=level)
370 else:
371 new_axis = axis.drop(labels)
372 dropped = self.reindex(**{axis_name: new_axis})
373 try:
374 dropped.axes[axis_].names = axis.names
375 except AttributeError:
376 pass
377 return dropped
378
379 else:
380 if level is not None:
381 if not isinstance(axis, MultiIndex):
382 raise AssertionError('axis must be a MultiIndex')
383 indexer = -lib.ismember(axis.get_level_values(level),
384 set(labels))
385 else:
386 indexer = -axis.isin(labels)
387
388 slicer = [slice(None)] * self.ndim
389 slicer[self._get_axis_number(axis_name)] = indexer
390
391 return self.ix[tuple(slicer)]
392
393 def sort_index(self, axis=0, ascending=True):
394 """
395 Sort object by labels (along an axis)
396
397 Parameters
398 ----------
399 axis : {0, 1}
400 Sort index/rows versus columns
401 ascending : boolean, default True
402 Sort ascending vs. descending
403
404 Returns
405 -------
406 sorted_obj : type of caller
407 """
408 axis = self._get_axis_number(axis)
409 axis_name = self._get_axis_name(axis)
410 labels = self._get_axis(axis)
411
412 sort_index = labels.argsort()
413 if not ascending:
414 sort_index = sort_index[::-1]
415
416 new_axis = labels.take(sort_index)
417 return self.reindex(**{axis_name: new_axis})
418
419 def reindex(self, *args, **kwds):
420 raise NotImplementedError
421
422 def tshift(self, periods=1, freq=None, **kwds):
423 """
424 Shift the time index, using the index's frequency if available
425
426 Parameters
427 ----------
428 periods : int
429 Number of periods to move, can be positive or negative
430 freq : DateOffset, timedelta, or time rule string, default None
431 Increment to use from datetools module or time rule (e.g. 'EOM')
432
433 Notes
434 -----
435 If freq is not specified then tries to use the freq or inferred_freq
436 attributes of the index. If neither of those attributes exist, a
437 ValueError is thrown
438
439 Returns
440 -------
441 shifted : Series
442 """
443 if freq is None:
444 freq = getattr(self.index, 'freq', None)
445
446 if freq is None:
447 freq = getattr(self.index, 'inferred_freq', None)
448
449 if freq is None:
450 msg = 'Freq was not given and was not set in the index'
451 raise ValueError(msg)
452
453 return self.shift(periods, freq, **kwds)
454
455 def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
456 **kwds):
457 """
458 Percent change over given number of periods
459
460 Parameters
461 ----------
462 periods : int, default 1
463 Periods to shift for forming percent change
464 fill_method : str, default 'pad'
465 How to handle NAs before computing percent changes
466 limit : int, default None
467 The number of consecutive NAs to fill before stopping
468 freq : DateOffset, timedelta, or offset alias string, optional
469 Increment to use from time series API (e.g. 'M' or BDay())
470
471 Returns
472 -------
473 chg : Series or DataFrame
474 """
475 if fill_method is None:
476 data = self
477 else:
478 data = self.fillna(method=fill_method, limit=limit)
479 rs = data / data.shift(periods=periods, freq=freq, **kwds) - 1
480 if freq is None:
481 mask = com.isnull(self.values)
482 np.putmask(rs.values, mask, np.nan)
483 return rs
484
485 def to_hdf(self, path_or_buf, key, **kwargs):
486 """ activate the HDFStore """
487 from pandas.io import pytables
488 return pytables.to_hdf(path_or_buf, key, self, **kwargs)
489
490 # install the indexers
491 for _name, _indexer in indexing.get_indexers_list():
492 PandasObject._create_indexer(_name,_indexer)
493
494 class NDFrame(PandasObject):
495 """
496 N-dimensional analogue of DataFrame. Store multi-dimensional in a
497 size-mutable, labeled data structure
498
499 Parameters
500 ----------
501 data : BlockManager
502 axes : list
503 copy : boolean, default False
504 """
505 # kludge
506 _default_stat_axis = 0
507
508 def __init__(self, data, axes=None, copy=False, dtype=None):
509 if dtype is not None:
510 data = data.astype(dtype)
511 elif copy:
512 data = data.copy()
513
514 if axes is not None:
515 for i, ax in enumerate(axes):
516 data = data.reindex_axis(ax, axis=i)
517
518 object.__setattr__(self, '_data', data)
519 object.__setattr__(self, '_item_cache', {})
520
521 def astype(self, dtype, copy = True, raise_on_error = True):
522 """
523 Cast object to input numpy.dtype
524 Return a copy when copy = True (be really careful with this!)
525
526 Parameters
527 ----------
528 dtype : numpy.dtype or Python type
529 raise_on_error : raise on invalid input
530
531 Returns
532 -------
533 casted : type of caller
534 """
535
536 mgr = self._data.astype(dtype, copy = copy, raise_on_error = raise_on_error)
537 return self._constructor(mgr)
538
539 @property
540 def _constructor(self):
541 return NDFrame
542
543 @property
544 def axes(self):
545 return self._data.axes
546
547 def __repr__(self):
548 return 'NDFrame'
549
550 @property
551 def values(self):
552 return self._data.as_matrix()
553
554 @property
555 def ndim(self):
556 return self._data.ndim
557
558 def _set_axis(self, axis, labels):
559 self._data.set_axis(axis, labels)
560 self._clear_item_cache()
561
562 def __getitem__(self, item):
563 return self._get_item_cache(item)
564
565 def _get_item_cache(self, item):
566 cache = self._item_cache
567 try:
568 return cache[item]
569 except Exception:
570 values = self._data.get(item)
571 res = self._box_item_values(item, values)
572 cache[item] = res
573 return res
574
575 def _box_item_values(self, key, values):
576 raise NotImplementedError
577
578 def _clear_item_cache(self):
579 self._item_cache.clear()
580
581 def _set_item(self, key, value):
582 self._data.set(key, value)
583 self._clear_item_cache()
584
585 def __delitem__(self, key):
586 """
587 Delete item
588 """
589 deleted = False
590
591 maybe_shortcut = False
592 if hasattr(self, 'columns') and isinstance(self.columns, MultiIndex):
593 try:
594 maybe_shortcut = key not in self.columns._engine
595 except TypeError:
596 pass
597
598 if maybe_shortcut:
599 # Allow shorthand to delete all columns whose first len(key)
600 # elements match key:
601 if not isinstance(key, tuple):
602 key = (key,)
603 for col in self.columns:
604 if isinstance(col, tuple) and col[:len(key)] == key:
605 del self[col]
606 deleted = True
607 if not deleted:
608 # If the above loop ran and didn't delete anything because
609 # there was no match, this call should raise the appropriate
610 # exception:
611 self._data.delete(key)
612
613 try:
614 del self._item_cache[key]
615 except KeyError:
616 pass
617
618 def get_dtype_counts(self):
619 """ return the counts of dtypes in this frame """
620 from pandas import Series
621 return Series(self._data.get_dtype_counts())
622
623 def pop(self, item):
624 """
625 Return item and drop from frame. Raise KeyError if not found.
626 """
627 result = self[item]
628 del self[item]
629 return result
630
631 def squeeze(self):
632 """ squeeze length 1 dimensions """
633 try:
634 return self.ix[tuple([ slice(None) if len(a) > 1 else a[0] for a in self.axes ])]
635 except:
636 return self
637
638 def _expand_axes(self, key):
639 new_axes = []
640 for k, ax in zip(key, self.axes):
641 if k not in ax:
642 if type(k) != ax.dtype.type:
643 ax = ax.astype('O')
644 new_axes.append(ax.insert(len(ax), k))
645 else:
646 new_axes.append(ax)
647
648 return new_axes
649
650 #----------------------------------------------------------------------
651 # Consolidation of internals
652
653 def _consolidate_inplace(self):
654 self._clear_item_cache()
655 self._data = self._data.consolidate()
656
657 def consolidate(self, inplace=False):
658 """
659 Compute NDFrame with "consolidated" internals (data of each dtype
660 grouped together in a single ndarray). Mainly an internal API function,
661 but available here to the savvy user
662
663 Parameters
664 ----------
665 inplace : boolean, default False
666 If False return new object, otherwise modify existing object
667
668 Returns
669 -------
670 consolidated : type of caller
671 """
672 if inplace:
673 self._consolidate_inplace()
674 else:
675 cons_data = self._data.consolidate()
676 if cons_data is self._data:
677 cons_data = cons_data.copy()
678 return self._constructor(cons_data)
679
680 @property
681 def _is_mixed_type(self):
682 return self._data.is_mixed_type
683
684 @property
685 def _is_numeric_mixed_type(self):
686 return self._data.is_numeric_mixed_type
687
688 def _reindex_axis(self, new_index, fill_method, axis, copy):
689 new_data = self._data.reindex_axis(new_index, axis=axis,
690 method=fill_method, copy=copy)
691
692 if new_data is self._data and not copy:
693 return self
694 else:
695 return self._constructor(new_data)
696
697 def cumsum(self, axis=None, skipna=True):
698 """
699 Return DataFrame of cumulative sums over requested axis.
700
701 Parameters
702 ----------
703 axis : {0, 1}
704 0 for row-wise, 1 for column-wise
705 skipna : boolean, default True
706 Exclude NA/null values. If an entire row/column is NA, the result
707 will be NA
708
709 Returns
710 -------
711 y : DataFrame
712 """
713 if axis is None:
714 axis = self._default_stat_axis
715 else:
716 axis = self._get_axis_number(axis)
717
718 y = self.values.copy()
719 if not issubclass(y.dtype.type, np.integer):
720 mask = np.isnan(self.values)
721
722 if skipna:
723 np.putmask(y, mask, 0.)
724
725 result = y.cumsum(axis)
726
727 if skipna:
728 np.putmask(result, mask, np.nan)
729 else:
730 result = y.cumsum(axis)
731 return self._wrap_array(result, self.axes, copy=False)
732
733 def _wrap_array(self, array, axes, copy=False):
734 raise NotImplementedError
735
736 def cumprod(self, axis=None, skipna=True):
737 """
738 Return cumulative product over requested axis as DataFrame
739
740 Parameters
741 ----------
742 axis : {0, 1}
743 0 for row-wise, 1 for column-wise
744 skipna : boolean, default True
745 Exclude NA/null values. If an entire row/column is NA, the result
746 will be NA
747
748 Returns
749 -------
750 y : DataFrame
751 """
752 if axis is None:
753 axis = self._default_stat_axis
754 else:
755 axis = self._get_axis_number(axis)
756
757 y = self.values.copy()
758 if not issubclass(y.dtype.type, np.integer):
759 mask = np.isnan(self.values)
760
761 if skipna:
762 np.putmask(y, mask, 1.)
763 result = y.cumprod(axis)
764
765 if skipna:
766 np.putmask(result, mask, np.nan)
767 else:
768 result = y.cumprod(axis)
769 return self._wrap_array(result, self.axes, copy=False)
770
771 def cummax(self, axis=None, skipna=True):
772 """
773 Return DataFrame of cumulative max over requested axis.
774
775 Parameters
776 ----------
777 axis : {0, 1}
778 0 for row-wise, 1 for column-wise
779 skipna : boolean, default True
780 Exclude NA/null values. If an entire row/column is NA, the result
781 will be NA
782
783 Returns
784 -------
785 y : DataFrame
786 """
787 if axis is None:
788 axis = self._default_stat_axis
789 else:
790 axis = self._get_axis_number(axis)
791
792 y = self.values.copy()
793 if not issubclass(y.dtype.type, np.integer):
794 mask = np.isnan(self.values)
795
796 if skipna:
797 np.putmask(y, mask, -np.inf)
798
799 result = np.maximum.accumulate(y, axis)
800
801 if skipna:
802 np.putmask(result, mask, np.nan)
803 else:
804 result = np.maximum.accumulate(y, axis)
805 return self._wrap_array(result, self.axes, copy=False)
806
807 def cummin(self, axis=None, skipna=True):
808 """
809 Return DataFrame of cumulative min over requested axis.
810
811 Parameters
812 ----------
813 axis : {0, 1}
814 0 for row-wise, 1 for column-wise
815 skipna : boolean, default True
816 Exclude NA/null values. If an entire row/column is NA, the result
817 will be NA
818
819 Returns
820 -------
821 y : DataFrame
822 """
823 if axis is None:
824 axis = self._default_stat_axis
825 else:
826 axis = self._get_axis_number(axis)
827
828 y = self.values.copy()
829 if not issubclass(y.dtype.type, np.integer):
830 mask = np.isnan(self.values)
831
832 if skipna:
833 np.putmask(y, mask, np.inf)
834
835 result = np.minimum.accumulate(y, axis)
836
837 if skipna:
838 np.putmask(result, mask, np.nan)
839 else:
840 result = np.minimum.accumulate(y, axis)
841 return self._wrap_array(result, self.axes, copy=False)
842
843 def copy(self, deep=True):
844 """
845 Make a copy of this object
846
847 Parameters
848 ----------
849 deep : boolean, default True
850 Make a deep copy, i.e. also copy data
851
852 Returns
853 -------
854 copy : type of caller
855 """
856 data = self._data
857 if deep:
858 data = data.copy()
859 return self._constructor(data)
860
861 def swaplevel(self, i, j, axis=0):
862 """
863 Swap levels i and j in a MultiIndex on a particular axis
864
865 Parameters
866 ----------
867 i, j : int, string (can be mixed)
868 Level of index to be swapped. Can pass level name as string.
869
870 Returns
871 -------
872 swapped : type of caller (new object)
873 """
874 axis = self._get_axis_number(axis)
875 result = self.copy()
876 labels = result._data.axes[axis]
877 result._data.set_axis(axis, labels.swaplevel(i, j))
878 return result
879
880 def add_prefix(self, prefix):
881 """
882 Concatenate prefix string with panel items names.
883
884 Parameters
885 ----------
886 prefix : string
887
888 Returns
889 -------
890 with_prefix : type of caller
891 """
892 new_data = self._data.add_prefix(prefix)
893 return self._constructor(new_data)
894
895 def add_suffix(self, suffix):
896 """
897 Concatenate suffix string with panel items names
898
899 Parameters
900 ----------
901 suffix : string
902
903 Returns
904 -------
905 with_suffix : type of caller
906 """
907 new_data = self._data.add_suffix(suffix)
908 return self._constructor(new_data)
909
910 def rename_axis(self, mapper, axis=0, copy=True):
911 """
912 Alter index and / or columns using input function or functions.
913 Function / dict values must be unique (1-to-1). Labels not contained in
914 a dict / Series will be left as-is.
915
916 Parameters
917 ----------
918 mapper : dict-like or function, optional
919 axis : int, default 0
920 copy : boolean, default True
921 Also copy underlying data
922
923 See also
924 --------
925 DataFrame.rename
926
927 Returns
928 -------
929 renamed : type of caller
930 """
931 # should move this at some point
932 from pandas.core.series import _get_rename_function
933
934 mapper_f = _get_rename_function(mapper)
935
936 axis = self._get_axis_number(axis)
937 if axis == 0:
938 new_data = self._data.rename_items(mapper_f, copydata=copy)
939 else:
940 new_data = self._data.rename_axis(mapper_f, axis=axis)
941 if copy:
942 new_data = new_data.copy()
943
944 return self._constructor(new_data)
945
946 def take(self, indices, axis=0, convert=True):
947 """
948 Analogous to ndarray.take
949
950 Parameters
951 ----------
952 indices : list / array of ints
953 axis : int, default 0
954 convert : translate neg to pos indices (default)
955
956 Returns
957 -------
958 taken : type of caller
959 """
960
961         # check/convert indices here
962 if convert:
963 axis = self._get_axis_number(axis)
964 indices = _maybe_convert_indices(indices, len(self._get_axis(axis)))
965
966 if axis == 0:
967 labels = self._get_axis(axis)
968 new_items = labels.take(indices)
969 new_data = self._data.reindex_axis(new_items, axis=0)
970 else:
971 new_data = self._data.take(indices, axis=axis, verify=False)
972 return self._constructor(new_data)
973
974 def tz_convert(self, tz, axis=0, copy=True):
975 """
976 Convert TimeSeries to target time zone. If it is time zone naive, it
977 will be localized to the passed time zone.
978
979 Parameters
980 ----------
981 tz : string or pytz.timezone object
982 copy : boolean, default True
983 Also make a copy of the underlying data
984
985 Returns
986 -------
987 """
988 axis = self._get_axis_number(axis)
989 ax = self._get_axis(axis)
990
991 if not hasattr(ax, 'tz_convert'):
992 ax_name = self._get_axis_name(axis)
993 raise TypeError('%s is not a valid DatetimeIndex or PeriodIndex' %
994 ax_name)
995
996 new_data = self._data
997 if copy:
998 new_data = new_data.copy()
999
1000 new_obj = self._constructor(new_data)
1001 new_ax = ax.tz_convert(tz)
1002
1003 if axis == 0:
1004 new_obj._set_axis(1, new_ax)
1005 elif axis == 1:
1006 new_obj._set_axis(0, new_ax)
1007 self._clear_item_cache()
1008
1009 return new_obj
1010
1011 def tz_localize(self, tz, axis=0, copy=True):
1012 """
1013 Localize tz-naive TimeSeries to target time zone
1014
1015 Parameters
1016 ----------
1017 tz : string or pytz.timezone object
1018 copy : boolean, default True
1019 Also make a copy of the underlying data
1020
1021 Returns
1022 -------
1023 """
1024 axis = self._get_axis_number(axis)
1025 ax = self._get_axis(axis)
1026
1027 if not hasattr(ax, 'tz_localize'):
1028 ax_name = self._get_axis_name(axis)
1029 raise TypeError('%s is not a valid DatetimeIndex or PeriodIndex' %
1030 ax_name)
1031
1032 new_data = self._data
1033 if copy:
1034 new_data = new_data.copy()
1035
1036 new_obj = self._constructor(new_data)
1037 new_ax = ax.tz_localize(tz)
1038
1039 if axis == 0:
1040 new_obj._set_axis(1, new_ax)
1041 elif axis == 1:
1042 new_obj._set_axis(0, new_ax)
1043 self._clear_item_cache()
1044
1045 return new_obj
1046
1047 # Good for either Series or DataFrame
1048
1049
1050 def truncate(self, before=None, after=None, copy=True):
1051 """Function truncate a sorted DataFrame / Series before and/or after
1052 some particular dates.
1053
1054 Parameters
1055 ----------
1056 before : date
1057 Truncate before date
1058 after : date
1059 Truncate after date
1060 copy : boolean, default True
1061
1062 Returns
1063 -------
1064 truncated : type of caller
1065 """
1066 from pandas.tseries.tools import to_datetime
1067 before = to_datetime(before)
1068 after = to_datetime(after)
1069
1070 if before is not None and after is not None:
1071 if before > after:
1072 raise AssertionError('Truncate: %s must be after %s' %
1073 (before, after))
1074
1075 result = self.ix[before:after]
1076
1077 if isinstance(self.index, MultiIndex):
1078 result.index = self.index.truncate(before, after)
1079
1080 if copy:
1081 result = result.copy()
1082
1083 return result
1084
[end of pandas/core/generic.py]
[start of pandas/tseries/util.py]
1 import numpy as np
2
3 import pandas as pd
4
5 import pandas.core.common as com
6 from pandas.core.frame import DataFrame
7 import pandas.core.nanops as nanops
8
9
10 def pivot_annual(series, freq=None):
11 """
12 Group a series by years, taking leap years into account.
13
14 The output has as many rows as distinct years in the original series,
15 and as many columns as the length of a leap year in the units corresponding
16 to the original frequency (366 for daily frequency, 366*24 for hourly...).
17     The first column of the output corresponds to Jan. 1st, 00:00:00,
18 while the last column corresponds to Dec, 31st, 23:59:59.
19 Entries corresponding to Feb. 29th are masked for non-leap years.
20
21 For example, if the initial series has a daily frequency, the 59th column
22 of the output always corresponds to Feb. 28th, the 61st column to Mar. 1st,
23 and the 60th column is masked for non-leap years.
24     With an hourly initial frequency, the (59*24)th column of the output always
25     corresponds to Feb. 28th 23:00, the (61*24)th column to Mar. 1st, 00:00, and
26 the 24 columns between (59*24) and (61*24) are masked.
27
28 If the original frequency is less than daily, the output is equivalent to
29 ``series.convert('A', func=None)``.
30
31 Parameters
32 ----------
33 series : TimeSeries
34 freq : string or None, default None
35
36 Returns
37 -------
38 annual : DataFrame
39 """
40 index = series.index
41 year = index.year
42 years = nanops.unique1d(year)
43
44 if freq is not None:
45 freq = freq.upper()
46 else:
47 freq = series.index.freq
48
49 if freq == 'D':
50 width = 366
51 offset = index.dayofyear - 1
52
53 # adjust for leap year
54 offset[(-isleapyear(year)) & (offset >= 59)] += 1
55
56 columns = range(1, 367)
57 # todo: strings like 1/1, 1/25, etc.?
58 elif freq in ('M', 'BM'):
59 width = 12
60 offset = index.month - 1
61 columns = range(1, 13)
62 elif freq == 'H':
63 width = 8784
64 grouped = series.groupby(series.index.year)
65 defaulted = grouped.apply(lambda x: x.reset_index(drop=True))
66 defaulted.index = defaulted.index.droplevel(0)
67 offset = np.asarray(defaulted.index)
68 offset[-isleapyear(year) & (offset >= 1416)] += 24
69 columns = range(1, 8785)
70 else:
71 raise NotImplementedError(freq)
72
73 flat_index = (year - years.min()) * width + offset
74 flat_index = com._ensure_platform_int(flat_index)
75
76 values = np.empty((len(years), width))
77 values.fill(np.nan)
78 values.put(flat_index, series.values)
79
80 return DataFrame(values, index=years, columns=columns)
81
82
83 def isleapyear(year):
84 """
85 Returns true if year is a leap year.
86
87 Parameters
88 ----------
89 year : integer / sequence
90 A given (list of) year(s).
91 """
92 year = np.asarray(year)
93 return np.logical_or(year % 400 == 0,
94 np.logical_and(year % 4 == 0, year % 100 > 0))
95
[end of pandas/tseries/util.py]
[start of vb_suite/parser.py]
1 from vbench.api import Benchmark
2 from datetime import datetime
3
4 common_setup = """from pandas_vb_common import *
5 from pandas import read_csv, read_table
6 """
7
8 setup = common_setup + """
9 import os
10 N = 10000
11 K = 8
12 df = DataFrame(np.random.randn(N, K) * np.random.randint(100, 10000, (N, K)))
13 df.to_csv('test.csv', sep='|')
14 """
15
16 read_csv_vb = Benchmark("read_csv('test.csv', sep='|')", setup,
17 cleanup="os.remove('test.csv')",
18 start_date=datetime(2012, 5, 7))
19
20
21 setup = common_setup + """
22 import os
23 N = 10000
24 K = 8
25 format = lambda x: '{:,}'.format(x)
26 df = DataFrame(np.random.randn(N, K) * np.random.randint(100, 10000, (N, K)))
27 df = df.applymap(format)
28 df.to_csv('test.csv', sep='|')
29 """
30
31 read_csv_thou_vb = Benchmark("read_csv('test.csv', sep='|', thousands=',')",
32 setup,
33 cleanup="os.remove('test.csv')",
34 start_date=datetime(2012, 5, 7))
35
36 setup = common_setup + """
37 data = ['A,B,C']
38 data = data + ['1,2,3 # comment'] * 100000
39 data = '\\n'.join(data)
40 """
41
42 stmt = "read_csv(StringIO(data), comment='#')"
43 read_csv_comment2 = Benchmark(stmt, setup,
44 start_date=datetime(2011, 11, 1))
45
46 setup = common_setup + """
47 from cStringIO import StringIO
48 import os
49 N = 10000
50 K = 8
51 data = '''\
52 KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
53 KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
54 KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000
55 KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000
56 KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000
57 '''
58 data = data * 200
59 """
60 cmd = ("read_table(StringIO(data), sep=',', header=None, "
61 "parse_dates=[[1,2], [1,3]])")
62 sdate = datetime(2012, 5, 7)
63 read_table_multiple_date = Benchmark(cmd, setup, start_date=sdate)
64
65 setup = common_setup + """
66 from cStringIO import StringIO
67 import os
68 N = 10000
69 K = 8
70 data = '''\
71 KORD,19990127 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
72 KORD,19990127 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
73 KORD,19990127 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000
74 KORD,19990127 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000
75 KORD,19990127 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000
76 '''
77 data = data * 200
78 """
79 cmd = "read_table(StringIO(data), sep=',', header=None, parse_dates=[1])"
80 sdate = datetime(2012, 5, 7)
81 read_table_multiple_date_baseline = Benchmark(cmd, setup, start_date=sdate)
82
[end of vb_suite/parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
0bd5e7773f7ec13630c47dd1b3dcf7f3c0c6ec14
|
Sorting on Timestamp broken in version 0.11
Issue: `frame.sort_index` uses `argsort` for a single sort column, but `_lexsort_indexer` for multiple columns;
a transform on datetime64[ns] values is needed before passing them to `_lexsort_indexer`
In [54]: a.columns
Out[54]: Index([ticker, disclosuredate, txnid], dtype=object)
In [55]: a.values
Out[55]:
array([[A, 2010-03-09 00:00:00, 11110508],
[A, 2010-03-12 00:00:00, 11121853],
[A, 2011-02-15 00:00:00, 12488915],
[A, 2011-03-08 00:00:00, 12563380],
[A, 2011-04-22 00:00:00, 12653015],
[A, 2013-01-28 00:00:00, 15244694]], dtype=object)
In [56]: a.sort(columns=['disclosuredate']).values
Out[56]:
array([[A, 2010-03-09 00:00:00, 11110508],
[A, 2010-03-12 00:00:00, 11121853],
[A, 2011-04-22 00:00:00, 12653015],
[A, 2013-01-28 00:00:00, 15244694],
[A, 2011-03-08 00:00:00, 12563380],
[A, 2011-02-15 00:00:00, 12488915]], dtype=object)
In [57]: pd.__version__
Out[57]: '0.11.0'
In [58]: import time
In [59]: a['epoch'] = a['disclosuredate'].map(lambda x: time.mktime(x.timetuple()))
In [60]: a.sort(['epoch']).values
Out[60]:
array([[A, 2010-03-09 00:00:00, 11110508, 1268110800.0],
[A, 2010-03-12 00:00:00, 11121853, 1268370000.0],
[A, 2011-02-15 00:00:00, 12488915, 1297746000.0],
[A, 2011-03-08 00:00:00, 12563380, 1299560400.0],
[A, 2011-04-22 00:00:00, 12653015, 1303444800.0],
[A, 2013-01-28 00:00:00, 15244694, 1359349200.0]], dtype=object)
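
Below is a minimal sketch (not part of the original report; data and names are made up) of the kind of transform described above: view a `datetime64[ns]` sort key as `int64` before handing it to a lexsort-style indexer, so the multi-column path orders timestamps numerically rather than as generic objects.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ticker': ['A'] * 6,
    'disclosuredate': pd.to_datetime(['2011-02-15', '2010-03-09', '2013-01-28',
                                      '2010-03-12', '2011-04-22', '2011-03-08']),
})

def as_sort_key(values):
    # datetime64[ns] compares correctly once viewed as int64 (ns since the epoch)
    if values.dtype.kind == 'M':
        return values.view('i8')
    return values

keys = [as_sort_key(df[col].values) for col in ['disclosuredate']]
order = np.lexsort(keys[::-1])  # np.lexsort treats the last key as primary
print(df.take(order))           # rows come out in chronological order
```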
|
When you do
```
a['disclosuredate'].map(lambda x: time.mktime(x.timetuple()))
```
I get (using my example)
```
In [15]: df['C'].map(lambda x: time.mktime(x.timetuple()))
Out[15]:
2013-01-01 1357016400
2013-01-02 1357102800
2013-01-03 1357189200
2013-01-04 1357275600
2013-01-05 1357362000
Freq: D, Name: C, dtype: float64
```
Floats are not dates and are very unwieldy to deal with; instead
you must have a dtype of `datetime64[ns]`;
you can convert using `pd.to_datetime(a['disclosuredate'])`
Here's an example
```
In [6]: df = DataFrame(randn(5,2),columns=list('AB'),index=date_range('20130101',periods=5))
In [7]: df
Out[7]:
A B
2013-01-01 -0.220543 1.059028
2013-01-02 1.174234 0.277359
2013-01-03 0.185800 -0.144194
2013-01-04 -0.562132 -0.144710
2013-01-05 -0.211064 -1.138927
In [8]: df['C'] = df.index
In [9]: df
Out[9]:
A B C
2013-01-01 -0.220543 1.059028 2013-01-01 00:00:00
2013-01-02 1.174234 0.277359 2013-01-02 00:00:00
2013-01-03 0.185800 -0.144194 2013-01-03 00:00:00
2013-01-04 -0.562132 -0.144710 2013-01-04 00:00:00
2013-01-05 -0.211064 -1.138927 2013-01-05 00:00:00
In [10]: df.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 5 entries, 2013-01-01 00:00:00 to 2013-01-05 00:00:00
Freq: D
Data columns (total 3 columns):
A 5 non-null values
B 5 non-null values
C 5 non-null values
dtypes: datetime64[ns](1), float64(2)
In [12]: df.sort(columns='C',ascending=False)
Out[12]:
A B C
2013-01-05 -0.211064 -1.138927 2013-01-05 00:00:00
2013-01-04 -0.562132 -0.144710 2013-01-04 00:00:00
2013-01-03 0.185800 -0.144194 2013-01-03 00:00:00
2013-01-02 1.174234 0.277359 2013-01-02 00:00:00
2013-01-01 -0.220543 1.059028 2013-01-01 00:00:00
```
This must be an issue with pandas.tslib.Timestamp on import then.
import pandas as pd
data = pd.read_csv('../data/txns_baseline-has10b5-filtered.csv', sep='|', parse_dates=['disclosuredate'])
type(data['disclosuredate'][0])
Out[3]: pandas.tslib.Timestamp
a = data[data['ticker']=='A']
a[['ticker', 'disclosuredate', 'txnid']].sort(['disclosuredate']).values
Out[5]:
array([[A, 2009-11-18 00:00:00, 10826094],
[A, 2009-11-18 00:00:00, 10826175],
[A, 2010-12-20 00:00:00, 12010207],
[A, 2010-03-09 00:00:00, 11110508],
[A, 2011-01-25 00:00:00, 12080828],
[A, 2010-03-12 00:00:00, 11121853],
[A, 2010-11-03 00:00:00, 11896650],
[A, 2010-03-03 00:00:00, 11093189],
[A, 2009-11-17 00:00:00, 10820169],
[A, 2011-07-05 00:00:00, 12803928],
[A, 2012-03-28 00:00:00, 14719198],
[A, 2011-04-22 00:00:00, 12653015],
[A, 2011-07-06 00:00:00, 12812873],
[A, 2011-11-22 00:00:00, 13022405],
[A, 2012-01-25 00:00:00, 14535135],
[A, 2011-05-25 00:00:00, 12740039],
[A, 2011-02-08 00:00:00, 12443239],
[A, 2011-03-23 00:00:00, 12599229],
[A, 2011-08-31 00:00:00, 12901258],
[A, 2011-09-22 00:00:00, 12930155],
[A, 2011-10-24 00:00:00, 12966477],
[A, 2011-07-22 00:00:00, 12835679],
[A, 2012-04-03 00:00:00, 14732867],
[A, 2011-02-02 00:00:00, 12422169],
[A, 2011-02-24 00:00:00, 12521296],
[A, 2013-01-28 00:00:00, 15244694],
[A, 2011-11-18 00:00:00, 13018933],
[A, 2011-11-18 00:00:00, 13018937],
[A, 2011-03-29 00:00:00, 12607129],
[A, 2011-03-08 00:00:00, 12563380],
[A, 2011-02-15 00:00:00, 12488915],
[A, 2011-06-23 00:00:00, 12791504]], dtype=object)
a[['ticker', 'disclosuredate', 'txnid']].values
Out[6]:
array([[A, 2011-09-22 00:00:00, 12930155],
[A, 2010-11-03 00:00:00, 11896650],
[A, 2011-11-22 00:00:00, 13022405],
[A, 2010-12-20 00:00:00, 12010207],
[A, 2011-08-31 00:00:00, 12901258],
[A, 2011-10-24 00:00:00, 12966477],
[A, 2011-11-18 00:00:00, 13018933],
[A, 2011-11-18 00:00:00, 13018937],
[A, 2009-11-18 00:00:00, 10826094],
[A, 2009-11-18 00:00:00, 10826175],
[A, 2009-11-17 00:00:00, 10820169],
[A, 2011-01-25 00:00:00, 12080828],
[A, 2011-07-22 00:00:00, 12835679],
[A, 2011-02-02 00:00:00, 12422169],
[A, 2012-04-03 00:00:00, 14732867],
[A, 2010-03-03 00:00:00, 11093189],
[A, 2010-03-12 00:00:00, 11121853],
[A, 2012-03-28 00:00:00, 14719198],
[A, 2012-01-25 00:00:00, 14535135],
[A, 2010-03-09 00:00:00, 11110508],
[A, 2011-02-15 00:00:00, 12488915],
[A, 2011-02-24 00:00:00, 12521296],
[A, 2011-03-08 00:00:00, 12563380],
[A, 2011-02-08 00:00:00, 12443239],
[A, 2011-04-22 00:00:00, 12653015],
[A, 2011-06-23 00:00:00, 12791504],
[A, 2011-03-23 00:00:00, 12599229],
[A, 2011-07-06 00:00:00, 12812873],
[A, 2011-03-29 00:00:00, 12607129],
[A, 2011-07-05 00:00:00, 12803928],
[A, 2011-05-25 00:00:00, 12740039],
[A, 2013-01-28 00:00:00, 15244694]], dtype=object)
import time
a['epoch'] = a['disclosuredate'].map(lambda x: time.mktime(x.timetuple()))
a[['ticker', 'disclosuredate', 'txnid', 'epoch']].sort(['epoch']).values
Out[9]:
array([[A, 2009-11-17 00:00:00, 10820169, 1258434000.0],
[A, 2009-11-18 00:00:00, 10826094, 1258520400.0],
[A, 2009-11-18 00:00:00, 10826175, 1258520400.0],
[A, 2010-03-03 00:00:00, 11093189, 1267592400.0],
[A, 2010-03-09 00:00:00, 11110508, 1268110800.0],
[A, 2010-03-12 00:00:00, 11121853, 1268370000.0],
[A, 2010-11-03 00:00:00, 11896650, 1288756800.0],
[A, 2010-12-20 00:00:00, 12010207, 1292821200.0],
[A, 2011-01-25 00:00:00, 12080828, 1295931600.0],
[A, 2011-02-02 00:00:00, 12422169, 1296622800.0],
[A, 2011-02-08 00:00:00, 12443239, 1297141200.0],
[A, 2011-02-15 00:00:00, 12488915, 1297746000.0],
[A, 2011-02-24 00:00:00, 12521296, 1298523600.0],
[A, 2011-03-08 00:00:00, 12563380, 1299560400.0],
[A, 2011-03-23 00:00:00, 12599229, 1300852800.0],
[A, 2011-03-29 00:00:00, 12607129, 1301371200.0],
[A, 2011-04-22 00:00:00, 12653015, 1303444800.0],
[A, 2011-05-25 00:00:00, 12740039, 1306296000.0],
[A, 2011-06-23 00:00:00, 12791504, 1308801600.0],
[A, 2011-07-05 00:00:00, 12803928, 1309838400.0],
[A, 2011-07-06 00:00:00, 12812873, 1309924800.0],
[A, 2011-07-22 00:00:00, 12835679, 1311307200.0],
[A, 2011-08-31 00:00:00, 12901258, 1314763200.0],
[A, 2011-09-22 00:00:00, 12930155, 1316664000.0],
[A, 2011-10-24 00:00:00, 12966477, 1319428800.0],
[A, 2011-11-18 00:00:00, 13018933, 1321592400.0],
[A, 2011-11-18 00:00:00, 13018937, 1321592400.0],
[A, 2011-11-22 00:00:00, 13022405, 1321938000.0],
[A, 2012-01-25 00:00:00, 14535135, 1327467600.0],
[A, 2012-03-28 00:00:00, 14719198, 1332907200.0],
[A, 2012-04-03 00:00:00, 14732867, 1333425600.0],
[A, 2013-01-28 00:00:00, 15244694, 1359349200.0]], dtype=object)
showing the `.values` is not useful as all of the dtypes are upcast (this is an fyi); instead show me `df.info()`
right after you read the csv, and also either include a link to your csv or a small data sample (just paste the csv in)
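
(A tiny illustration of that point, with a made-up frame: on mixed dtypes `.values` is upcast to a single array dtype, so per-column dtype information is only visible via `df.dtypes` / `df.info()`.)

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2],
                   'b': pd.to_datetime(['2013-01-01', '2013-01-02'])})
print(df.values.dtype)  # object -- the mixed frame is upcast to one dtype
print(df.dtypes)        # int64 / datetime64[ns] -- per-column info is intact
df.info()               # summary view, as requested above
```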
the disclosuredate column turns out to be an integer representation of the date
import pandas as pd
data = pd.read_csv('sample.csv', sep='|', parse_dates=['disclosuredate'])
data.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9 entries, 0 to 8
Data columns (total 68 columns):
Unnamed: 0 9 non-null values
ticker 9 non-null values
companyname 9 non-null values
sector 9 non-null values
subsector 9 non-null values
industry_id 9 non-null values
mcap 9 non-null values
mcapgroup 9 non-null values
insidercik 9 non-null values
insidername 9 non-null values
position 9 non-null values
iacc 9 non-null values
txnid 9 non-null values
code 9 non-null values
has_10b5 9 non-null values
value 9 non-null values
shares 9 non-null values
pps 9 non-null values
round_limit_price 9 non-null values
nickel_limit_price 9 non-null values
quarter_limit_price 9 non-null values
fifty_cent_limit_price 9 non-null values
dollar_limit_price 9 non-null values
triggertype 2 non-null values
triggerprice 2 non-null values
round_trigger_price 9 non-null values
nickel_trigger_price 9 non-null values
quarter_trigger_price 9 non-null values
fifty_cent_trigger_price 9 non-null values
dollar_trigger_price 9 non-null values
disclosuredate 9 non-null values
disclosure_t1 9 non-null values
txndate 9 non-null values
option_gain 2 non-null values
option_gain_group 2 non-null values
option_daystoexpire 2 non-null values
option_days_group 2 non-null values
planid 1 non-null values
adoptiondisclosure 1 non-null values
adoption_days 1 non-null values
adoption_days_group 1 non-null values
adoption_change 1 non-null values
adoption_change_group 1 non-null values
first_10b5 1 non-null values
days_between_10b5 8 non-null values
expiring 0 non-null values
sharesinc 0 non-null values
prevshares 0 non-null values
sharesincpct 0 non-null values
sharesinc_change_group 0 non-null values
pricedown 0 non-null values
pricedownpct 0 non-null values
lasthighdate 9 non-null values
lasthighprice 9 non-null values
dayssincelasthigh 9 non-null values
dayssincelasthigh_group 9 non-null values
pregain3m 9 non-null values
pregain1m 9 non-null values
postgain1m 9 non-null values
postgain3m 9 non-null values
postgain6m 9 non-null values
postgain12m 9 non-null values
postgain24m 9 non-null values
relgain1m 9 non-null values
relgain3m 9 non-null values
relgain6m 9 non-null values
relgain12m 9 non-null values
relgain24m 9 non-null values
dtypes: datetime64[ns](1), float64(37), int64(6), object(24)
here is a sample of the csv
|ticker|companyname|sector|subsector|industry_id|mcap|mcapgroup|insidercik|insidername|position|iacc|txnid|code|has_10b5|value|shares|pps|round_limit_price|nickel_limit_price|quarter_limit_price|fifty_cent_limit_price|dollar_limit_price|triggertype|triggerprice|round_trigger_price|nickel_trigger_price|quarter_trigger_price|fifty_cent_trigger_price|dollar_trigger_price|disclosuredate|disclosure_t1|txndate|option_gain|option_gain_group|option_daystoexpire|option_days_group|planid|adoptiondisclosure|adoption_days|adoption_days_group|adoption_change|adoption_change_group|first_10b5|days_between_10b5|expiring|sharesinc|prevshares|sharesincpct|sharesinc_change_group|pricedown|pricedownpct|lasthighdate|lasthighprice|dayssincelasthigh|dayssincelasthigh_group|pregain3m|pregain1m|postgain1m|postgain3m|postgain6m|postgain12m|postgain24m|relgain1m|relgain3m|relgain6m|relgain12m|relgain24m
132761|DDD|3D Systems|Industrial Goods|165.0|1651.0|369034380.0|200000000.0|1105964|Charles W. Hull|Officer|7176375|11846570|S|t|47089.989999999998|9000.0|5.232221|f|f|f|f|f|||f|f|f|f|f|20101004|20101005|20101001||||||||||||18.0||||||||20100810.0|5.2666660000000007|52.0|0-89|38.447971781305121|15.696389093588797|75.095541401273877|95.73248407643311|248.72611464968156|90.191082802547783|359.74522292993635|72.800665576061704|85.484134500828006|229.612628324627|98.42748644919979|343.96402204971002
266330|[PALM]|[Acq] Palm|Technology|183.0|183.0|513518809.06|200000000.0|1134647|Donna L. Dubinsky|Director|2594539|5305372|S|t|32520.0|6000.0|5.4199999999999999|f|f|f|f|f|||f|f|f|f|f|20040211|20040212|20040211||||||||||||7.0||||||||20040209.0|5.4950000000000001|2.0|0-89|-28.356254092992803|-8.6811352253756251|19.469835466179163|61.334552102376591|247.34917733089583|110.3290676416819|225.95978062157221|25.951767941427601|68.451608829533612|262.60710915028301|110.18198115965299|217.944772977898
266329|[PALM]|[Acq] Palm|Technology|183.0|183.0|510722880.88|200000000.0|1134189|Jeffrey C. Hawkins|Officer|2553976|5305409|S|t|32310.0|6000.0|5.3849999999999998|f|f|f|f|f|||f|f|f|f|f|20040126|20040127|20040126||||||||||||4.0||||||||20040122.0|5.6200000000000001|4.0|0-89|-36.235416951294845|-3.7433155080213902|-8.9814814814814827|69.814814814814824|247.22222222222223|144.90740740740739|231.94444444444446|-5.03684905487329|73.761337564857286|259.40441159482401|148.21311051320899|224.05423442951098
347731|FTNT|Fortinet Inc.|Technology|182.0|1821.0|1738396220.0|200000000.0|1475587|Ken Xie|CEO|7168959|11839564|S|t|439150.0|35132.0|12.5|t|t|f|t|f|||f|f|f|f|f|20100928|20100929|20100927||||||||||||6.0||||||||20100921.0|12.52|6.0|0-89|53.345498783454985|25.892634207240945|19.000396667988895|27.647758825862759|241.13447044823485|34.549781832606101|95.834986116620371|13.4962225676085|15.4296789120377|224.29374267363602|29.710013427852999|63.854308161929595
347732|FTNT|Fortinet Inc.|Technology|182.0|1821.0|1738396220.0|200000000.0|1476336|John Whittle|Officer|7168962|11839581|M|t|74746.875|6250.0|11.9595|f|f|f|f|f|||f|f|f|f|f|20100928|20100929|20100927|398.3125|300-400|2221.0|1465+||||||||31.0||||||||20100923.0|12.029999999999999|4.0|0-89|53.345498783454985|25.892634207240945|19.000396667988895|27.647758825862759|241.13447044823485|34.549781832606101|95.834986116620371|13.4962225676085|15.4296789120377|224.29374267363602|29.710013427852999|63.854308161929595
106101|RMBS|Rambus|Technology|186.0|1861.0|1127384709.8399999|200000000.0|1192512|Geoffrey Tate|Director|3692973|6479611|S|t|381821.13|33300.0|11.466100000000001|f|f|f|f|f|||f|f|f|f|f|20050920|20050921|20050920||||||||||||29.0||||||||20050916.0|11.970000000000001|4.0|0-89|-23.776223776223773|-5.2997393570807994|13.486238532110091|48.807339449541281|240.82568807339447|57.981651376146793|70.366972477064223|15.315217379943599|43.311383804533101|231.63997053266601|51.757996646340104|43.7221731758072
56480|GNW|Genworth Financial|Financial|142.0|1420.0|537085993.84000003|200000000.0|1290753|Pamela S. Schutz|Officer|5973693|10057429|S|t|13245.184800000001|8887.0|1.4903999999999999|f|f|f|f|f|pricedown|1.4903999999999999|f|f|f|f|f|20081112|20081113|20081111|||||25726.0|2008-08-14 00:00:00|172.0|101-365|-93.316591928251128|<-50||91.0||||||||20081107.0|4.0899999999999999|4.0|0-89|-90.461346633416454|-70.745697896749533|83.006535947712408|33.986928104575156|239.86928104575165|636.60130718954247|653.59477124183024|96.28021230863709|74.553669342457297|248.221257182963|627.27990525069094|641.73794797561095
266327|[PALM]|[Acq] Palm|Technology|183.0|183.0|506063000.57999998|200000000.0|1134647|Donna L. Dubinsky|Director|2546712|5305411|S|t|32880.0|6000.0|5.4800000000000004|f|f|f|f|f|firstsale|5.4800000000000004|f|f|f|f|f|20040121|20040122|20040121|||||||||||1.0|||||||||20040116.0|6.2450000000000001|5.0|0-89|-37.725877661809776|4.9429657794676807|-12.137681159420289|74.909420289855078|239.31159420289853|151.90217391304347|204.16666666666671|-6.8762619117527404|78.972638490807412|252.04961809141301|155.901211199323|198.05720989204102
263834|PLX|Protalix BioTherapeutics|Healthcare|153.0|153.0|264560481.96000001|200000000.0|1385112|David Aviezer|CEO|6321807|10487387|M|t|340948.0|100000.0|3.4094800000000003|f|f|f|f|f|||f|f|f|f|f|20090512|20090513|20090512|2741.2333333333299|1000+|1671.0|1465+||||||||6.0||||||||20090508.0|3.5099999999999998|4.0|0-89|31.343283582089548|61.467889908256872|7.1022727272727266|73.011363636363626|238.92045454545453|96.022727272727295|84.090909090909065|5.8105819618977499|62.889906146485103|221.64650042939499|74.507400494242589|38.804616436195396
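
(Side note, not from the thread: since the raw csv stores the date as an integer like `20101004`, it can also be converted explicitly when `parse_dates` is not used — a sketch, assuming the frame was read without `parse_dates`:)

```python
# disclosuredate read as int64, e.g. 20101004
df['disclosuredate'] = pd.to_datetime(df['disclosuredate'].astype(str),
                                      format='%Y%m%d')
```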
are you doing something different than this?
```
In [31]: df = pd.read_csv('ts.csv',sep='|',parse_dates=['disclosuredate'])
In [32]: df.get_dtype_counts()
Out[32]:
datetime64[ns] 1
float64 37
int64 6
object 24
dtype: int64
In [33]: df.sort(columns='disclosuredate').loc[:,['ticker','disclosuredate']]
Out[33]:
ticker disclosuredate
7 [PALM] 2004-01-21 00:00:00
2 [PALM] 2004-01-26 00:00:00
1 [PALM] 2004-02-11 00:00:00
5 RMBS 2005-09-20 00:00:00
6 GNW 2008-11-12 00:00:00
8 PLX 2009-05-12 00:00:00
3 FTNT 2010-09-28 00:00:00
4 FTNT 2010-09-28 00:00:00
0 DDD 2010-10-04 00:00:00
```
yes, I'm not appending .loc[:,['ticker','disclosuredate']] after the sort
In [1]: import pandas as pd
In [2]: data = pd.read_csv('sample.csv', sep='|', parse_dates=['disclosuredate'])
In [3]: data.get_dtype_counts()
Out[3]:
datetime64[ns] 1
float64 37
int64 6
object 24
dtype: int64
In [4]: data.sort(columns='disclosuredate').loc[:,['ticker','disclosuredate']]
Out[4]:
ticker disclosuredate
7 [PALM] 2004-01-21 00:00:00
2 [PALM] 2004-01-26 00:00:00
1 [PALM] 2004-02-11 00:00:00
5 RMBS 2005-09-20 00:00:00
6 GNW 2008-11-12 00:00:00
8 PLX 2009-05-12 00:00:00
3 FTNT 2010-09-28 00:00:00
4 FTNT 2010-09-28 00:00:00
0 DDD 2010-10-04 00:00:00
In [5]: data[['ticker', 'disclosuredate']].sort(columns=['disclosuredate'])
Out[5]:
ticker disclosuredate
1 [PALM] 2004-02-11 00:00:00
7 [PALM] 2004-01-21 00:00:00
2 [PALM] 2004-01-26 00:00:00
5 RMBS 2005-09-20 00:00:00
0 DDD 2010-10-04 00:00:00
8 PLX 2009-05-12 00:00:00
6 GNW 2008-11-12 00:00:00
3 FTNT 2010-09-28 00:00:00
4 FTNT 2010-09-28 00:00:00
In [8]: data.sort(columns=['disclosuredate']).tail(1)['ticker']
Out[8]:
4 FTNT
Name: ticker, dtype: object
In [9]: data.sort(columns='disclosuredate').loc[:,['ticker','disclosuredate']].tail(1)['ticker']
Out[9]:
0 DDD
Name: ticker, dtype: object
ok...this is a bug, but very odd
the difference is that I am specifying a list when it doesn't work (bottom), but a single column name when it does
these are handled by different sorters (the single argument uses argsort which works fine with datetime64[ns])
marking the list of datetime columns as a bug...thanks...good catch!
```
In [41]: df[['ticker', 'disclosuredate']].sort(columns='disclosuredate')
Out[41]:
ticker disclosuredate
7 [PALM] 2004-01-21 00:00:00
2 [PALM] 2004-01-26 00:00:00
1 [PALM] 2004-02-11 00:00:00
5 RMBS 2005-09-20 00:00:00
6 GNW 2008-11-12 00:00:00
8 PLX 2009-05-12 00:00:00
3 FTNT 2010-09-28 00:00:00
4 FTNT 2010-09-28 00:00:00
0 DDD 2010-10-04 00:00:00
In [42]: df[['ticker', 'disclosuredate']].sort(columns=['disclosuredate'])
Out[42]:
ticker disclosuredate
1 [PALM] 2004-02-11 00:00:00
7 [PALM] 2004-01-21 00:00:00
2 [PALM] 2004-01-26 00:00:00
5 RMBS 2005-09-20 00:00:00
0 DDD 2010-10-04 00:00:00
8 PLX 2009-05-12 00:00:00
6 GNW 2008-11-12 00:00:00
3 FTNT 2010-09-28 00:00:00
4 FTNT 2010-09-28 00:00:00
```
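
A self-contained reproduction sketch of the discrepancy (made-up data rather than the reporter's csv, using the era's `DataFrame.sort(columns=...)` API shown above):

```python
from pandas import DataFrame, date_range

df = DataFrame({'ticker': list('ABCDE'),
                'disclosuredate': date_range('20130101', periods=5)[::-1]})

print(df.sort(columns='disclosuredate'))    # single column: argsort path, chronological
print(df.sort(columns=['disclosuredate']))  # list of columns: the path reported to mis-order pre-fix
```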
|
2013-04-25T21:27:25Z
|
<patch>
diff --git a/RELEASE.rst b/RELEASE.rst
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -54,6 +54,7 @@ pandas 0.12.0
- ``.loc`` was not raising when passed an integer list (GH3449_)
- Unordered time series selection was misbehaving when using label slicing (GH3448_)
- Duplicate indexes with getitem will return items in the correct order (GH3455_, GH3457_)
+ - Fix sorting in a frame with a list of columns which contains datetime64[ns] dtypes (GH3461_)
.. _GH3164: https://github.com/pydata/pandas/issues/3164
.. _GH3251: https://github.com/pydata/pandas/issues/3251
@@ -64,6 +65,7 @@ pandas 0.12.0
.. _GH3437: https://github.com/pydata/pandas/issues/3437
.. _GH3455: https://github.com/pydata/pandas/issues/3455
.. _GH3457: https://github.com/pydata/pandas/issues/3457
+.. _GH3461: https://github.com/pydata/pandas/issues/3461
.. _GH3448: https://github.com/pydata/pandas/issues/3448
.. _GH3449: https://github.com/pydata/pandas/issues/3449
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1477,6 +1477,8 @@ def is_timedelta64_dtype(arr_or_dtype):
tipo = arr_or_dtype.dtype.type
return issubclass(tipo, np.timedelta64)
+def needs_i8_conversion(arr_or_dtype):
+ return is_datetime64_dtype(arr_or_dtype) or is_timedelta64_dtype(arr_or_dtype)
def is_float_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3144,7 +3144,12 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False):
% str(x))
keys.append(k)
- keys = [self[x].values for x in by]
+ def trans(v):
+ if com.needs_i8_conversion(v):
+ return v.view('i8')
+ return v
+
+ keys = [trans(self[x].values) for x in by]
indexer = _lexsort_indexer(keys, orders=ascending)
indexer = com._ensure_platform_int(indexer)
else:
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-3312
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: datetime selection in a DataFrame should work in the where
I think something broke:
see: http://stackoverflow.com/questions/15927451/filter-on-pandas-dataframe-with-datetime-columns-raises-error/15927797?noredirect=1#comment22689922_15927797
This is wrong (should give rows > 0)
```
In [3]: df = pd.DataFrame(dict(A = pd.date_range('20130102',periods=5), B = pd.date_range('20130104',periods=5)))
In [4]: df
Out[4]:
A B
0 2013-01-02 00:00:00 2013-01-04 00:00:00
1 2013-01-03 00:00:00 2013-01-05 00:00:00
2 2013-01-04 00:00:00 2013-01-06 00:00:00
3 2013-01-05 00:00:00 2013-01-07 00:00:00
4 2013-01-06 00:00:00 2013-01-08 00:00:00
In [5]: df[df>pd.Timestamp('20130103')]
Out[5]:
A B
0 2013-01-02 00:00:00 2013-01-04 00:00:00
1 2013-01-03 00:00:00 2013-01-05 00:00:00
2 2013-01-04 00:00:00 2013-01-06 00:00:00
3 2013-01-05 00:00:00 2013-01-07 00:00:00
4 2013-01-06 00:00:00 2013-01-08 00:00:00
```
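
A sketch of the behavior the report expects (illustrative check, assuming the fix: values failing the condition come back masked as `NaT`):

```python
masked = df[df > pd.Timestamp('20130103')]
assert masked['A'].isnull().tolist() == [True, True, False, False, False]  # 01-02, 01-03 masked
assert masked['B'].notnull().all()  # every B value is already > 2013-01-03
```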
</issue>
<code>
[start of README.rst]
1 =============================================
2 pandas: powerful Python data analysis toolkit
3 =============================================
4
5 .. image:: https://travis-ci.org/pydata/pandas.png
6 :target: https://travis-ci.org/pydata/pandas
7
8 What is it
9 ==========
10
11 **pandas** is a Python package providing fast, flexible, and expressive data
12 structures designed to make working with "relational" or "labeled" data both
13 easy and intuitive. It aims to be the fundamental high-level building block for
14 doing practical, **real world** data analysis in Python. Additionally, it has
15 the broader goal of becoming **the most powerful and flexible open source data
16 analysis / manipulation tool available in any language**. It is already well on
17 its way toward this goal.
18
19 Main Features
20 =============
21
22 Here are just a few of the things that pandas does well:
23
24 - Easy handling of **missing data** (represented as NaN) in floating point as
25 well as non-floating point data
26 - Size mutability: columns can be **inserted and deleted** from DataFrame and
27 higher dimensional objects
28 - Automatic and explicit **data alignment**: objects can be explicitly
29 aligned to a set of labels, or the user can simply ignore the labels and
30 let `Series`, `DataFrame`, etc. automatically align the data for you in
31 computations
32 - Powerful, flexible **group by** functionality to perform
33 split-apply-combine operations on data sets, for both aggregating and
34 transforming data
35 - Make it **easy to convert** ragged, differently-indexed data in other
36 Python and NumPy data structures into DataFrame objects
37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
38 of large data sets
39 - Intuitive **merging** and **joining** data sets
40 - Flexible **reshaping** and pivoting of data sets
41 - **Hierarchical** labeling of axes (possible to have multiple labels per
42 tick)
43 - Robust IO tools for loading data from **flat files** (CSV and delimited),
44 Excel files, databases, and saving / loading data from the ultrafast **HDF5
45 format**
46 - **Time series**-specific functionality: date range generation and frequency
47 conversion, moving window statistics, moving window linear regressions,
48 date shifting and lagging, etc.
49
50 Where to get it
51 ===============
52
53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
54
55 Binary installers for the latest released version are available at the Python
56 package index::
57
58 http://pypi.python.org/pypi/pandas/
59
60 And via ``easy_install`` or ``pip``::
61
62 easy_install pandas
63 pip install pandas
64
65 Dependencies
66 ============
67
68 * `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
69 * `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher
70 * `pytz <http://pytz.sourceforge.net/>`__
71 * Needed for time zone support with ``date_range``
72
73 Highly Recommended Dependencies
74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
75 * `numexpr <http://code.google.com/p/numexpr/>`__: to accelerate some expression evaluation operations
76 also required by `PyTables`
77 * `bottleneck <http://berkeleyanalytics.com/bottleneck>`__: to accelerate certain numerical operations
78
79 Optional dependencies
80 ~~~~~~~~~~~~~~~~~~~~~
81
82 * `Cython <http://www.cython.org>`__: Only necessary to build development
83 version. Version 0.17.1 or higher.
84 * `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
85 * `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage
86 * `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting
87 * `statsmodels <http://statsmodels.sourceforge.net/>`__
88 * Needed for parts of :mod:`pandas.stats`
89 * `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
90 * openpyxl version 1.6.1 or higher
91 * Needed for Excel I/O
92
93
94 Installation from sources
95 =========================
96
97 To install pandas from source you need ``cython`` in addition to the normal dependencies above,
98 which can be installed from pypi::
99
100 pip install cython
101
102 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute::
103
104 python setup.py install
105
106 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__::
107
108 python setup.py develop
109
110 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically
111 (the optional ``-e`` option is for installing it in
112 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__)::
113
114 pip install -e .
115
116 On Windows, you will need to install MinGW and execute::
117
118 python setup.py build --compiler=mingw32
119 python setup.py install
120
121 See http://pandas.pydata.org/ for more information.
122
123 License
124 =======
125
126 BSD
127
128 Documentation
129 =============
130
131 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
132
133 The Sphinx documentation should provide a good starting point for learning how
134 to use the library. Expect the docs to continue to expand as time goes on.
135
136 Background
137 ==========
138
139 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
140 has been under active development since then.
141
142 Discussion and Development
143 ==========================
144
145 Since ``pandas`` development is related to a number of other scientific
146 Python projects, questions are welcome on the scipy-user mailing
147 list. Specialized discussions or design issues should take place on
148 the pystatsmodels mailing list / Google group, where
149 ``scikits.statsmodels`` and other libraries will also be discussed:
150
151 http://groups.google.com/group/pystatsmodels
152
153 .. _NumPy: http://numpy.scipy.org/
154
[end of README.rst]
[start of pandas/core/generic.py]
1 # pylint: disable=W0231,E1101
2
3 import numpy as np
4
5 from pandas.core.index import MultiIndex
6 import pandas.core.indexing as indexing
7 from pandas.core.indexing import _maybe_convert_indices
8 from pandas.tseries.index import DatetimeIndex
9 import pandas.core.common as com
10 import pandas.lib as lib
11
12
13 class PandasError(Exception):
14 pass
15
16
17 class PandasObject(object):
18
19 _AXIS_NUMBERS = {
20 'index': 0,
21 'columns': 1
22 }
23
24 _AXIS_ALIASES = {}
25 _AXIS_NAMES = dict((v, k) for k, v in _AXIS_NUMBERS.iteritems())
26
27 def save(self, path):
28 com.save(self, path)
29
30 @classmethod
31 def load(cls, path):
32 return com.load(path)
33
34 #----------------------------------------------------------------------
35 # Axis name business
36
37 def _get_axis_number(self, axis):
38 axis = self._AXIS_ALIASES.get(axis, axis)
39 if com.is_integer(axis):
40 if axis in self._AXIS_NAMES:
41 return axis
42 else:
43 try:
44 return self._AXIS_NUMBERS[axis]
45 except:
46 pass
47 raise ValueError('No axis named %s' % axis)
48
49 def _get_axis_name(self, axis):
50 axis = self._AXIS_ALIASES.get(axis, axis)
51 if isinstance(axis, basestring):
52 if axis in self._AXIS_NUMBERS:
53 return axis
54 else:
55 try:
56 return self._AXIS_NAMES[axis]
57 except:
58 pass
59 raise ValueError('No axis named %s' % axis)
60
61 def _get_axis(self, axis):
62 name = self._get_axis_name(axis)
63 return getattr(self, name)
64
65 #----------------------------------------------------------------------
66 # Indexers
67 @classmethod
68 def _create_indexer(cls, name, indexer):
69 """ create an indexer like _name in the class """
70 iname = '_%s' % name
71 setattr(cls,iname,None)
72
73 def _indexer(self):
74 if getattr(self,iname,None) is None:
75 setattr(self,iname,indexer(self, name))
76 return getattr(self,iname)
77
78 setattr(cls,name,property(_indexer))
79
80 def abs(self):
81 """
82 Return an object with absolute value taken. Only applicable to objects
83 that are all numeric
84
85 Returns
86 -------
87 abs: type of caller
88 """
89 return np.abs(self)
90
91 def get(self, key, default=None):
92 """
93 Get item from object for given key (DataFrame column, Panel slice,
94 etc.). Returns default value if not found
95
96 Parameters
97 ----------
98 key : object
99
100 Returns
101 -------
102 value : type of items contained in object
103 """
104 try:
105 return self[key]
106 except KeyError:
107 return default
108
109 def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,
110 group_keys=True):
111 """
112 Group series using mapper (dict or key function, apply given function
113 to group, return result as series) or by a series of columns
114
115 Parameters
116 ----------
117 by : mapping function / list of functions, dict, Series, or tuple /
118 list of column names.
119 Called on each element of the object index to determine the groups.
120 If a dict or Series is passed, the Series or dict VALUES will be
121 used to determine the groups
122 axis : int, default 0
123 level : int, level name, or sequence of such, default None
124 If the axis is a MultiIndex (hierarchical), group by a particular
125 level or levels
126 as_index : boolean, default True
127 For aggregated output, return object with group labels as the
128 index. Only relevant for DataFrame input. as_index=False is
129 effectively "SQL-style" grouped output
130 sort : boolean, default True
131 Sort group keys. Get better performance by turning this off
132 group_keys : boolean, default True
133 When calling apply, add group keys to index to identify pieces
134
135 Examples
136 --------
137 # DataFrame result
138 >>> data.groupby(func, axis=0).mean()
139
140 # DataFrame result
141 >>> data.groupby(['col1', 'col2'])['col3'].mean()
142
143 # DataFrame with hierarchical index
144 >>> data.groupby(['col1', 'col2']).mean()
145
146 Returns
147 -------
148 GroupBy object
149 """
150 from pandas.core.groupby import groupby
151 axis = self._get_axis_number(axis)
152 return groupby(self, by, axis=axis, level=level, as_index=as_index,
153 sort=sort, group_keys=group_keys)
154
155 def asfreq(self, freq, method=None, how=None, normalize=False):
156 """
157 Convert all TimeSeries inside to specified frequency using DateOffset
158 objects. Optionally provide fill method to pad/backfill missing values.
159
160 Parameters
161 ----------
162 freq : DateOffset object, or string
163 method : {'backfill', 'bfill', 'pad', 'ffill', None}
164 Method to use for filling holes in reindexed Series
165 pad / ffill: propagate last valid observation forward to next valid
166 backfill / bfill: use NEXT valid observation to fill gap
167 how : {'start', 'end'}, default end
168 For PeriodIndex only, see PeriodIndex.asfreq
169 normalize : bool, default False
170 Whether to reset output index to midnight
171
172 Returns
173 -------
174 converted : type of caller
175 """
176 from pandas.tseries.resample import asfreq
177 return asfreq(self, freq, method=method, how=how,
178 normalize=normalize)
179
180 def at_time(self, time, asof=False):
181 """
182 Select values at particular time of day (e.g. 9:30AM)
183
184 Parameters
185 ----------
186 time : datetime.time or string
187
188 Returns
189 -------
190 values_at_time : type of caller
191 """
192 try:
193 indexer = self.index.indexer_at_time(time, asof=asof)
194 return self.take(indexer, convert=False)
195 except AttributeError:
196 raise TypeError('Index must be DatetimeIndex')
197
198 def between_time(self, start_time, end_time, include_start=True,
199 include_end=True):
200 """
201 Select values between particular times of the day (e.g., 9:00-9:30 AM)
202
203 Parameters
204 ----------
205 start_time : datetime.time or string
206 end_time : datetime.time or string
207 include_start : boolean, default True
208 include_end : boolean, default True
209
210 Returns
211 -------
212 values_between_time : type of caller
213 """
214 try:
215 indexer = self.index.indexer_between_time(
216 start_time, end_time, include_start=include_start,
217 include_end=include_end)
218 return self.take(indexer, convert=False)
219 except AttributeError:
220 raise TypeError('Index must be DatetimeIndex')
221
222 def resample(self, rule, how=None, axis=0, fill_method=None,
223 closed=None, label=None, convention='start',
224 kind=None, loffset=None, limit=None, base=0):
225 """
226 Convenience method for frequency conversion and resampling of regular
227 time-series data.
228
229 Parameters
230 ----------
231 rule : the offset string or object representing target conversion
232 how : string, method for down- or re-sampling, defaults to 'mean' for
233 downsampling
234 axis : int, optional, default 0
235 fill_method : string, fill_method for upsampling, default None
236 closed : {'right', 'left'}
237 Which side of bin interval is closed
238 label : {'right', 'left'}
239 Which bin edge label to label bucket with
240 convention : {'start', 'end', 's', 'e'}
241 kind: "period"/"timestamp"
242 loffset: timedelta
243 Adjust the resampled time labels
244 limit: int, default None
245 Maximum size gap when reindexing with fill_method
246 base : int, default 0
247 For frequencies that evenly subdivide 1 day, the "origin" of the
248 aggregated intervals. For example, for '5min' frequency, base could
249 range from 0 through 4. Defaults to 0
250 """
251 from pandas.tseries.resample import TimeGrouper
252 axis = self._get_axis_number(axis)
253 sampler = TimeGrouper(rule, label=label, closed=closed, how=how,
254 axis=axis, kind=kind, loffset=loffset,
255 fill_method=fill_method, convention=convention,
256 limit=limit, base=base)
257 return sampler.resample(self)
258
259 def first(self, offset):
260 """
261 Convenience method for subsetting initial periods of time series data
262 based on a date offset
263
264 Parameters
265 ----------
266 offset : string, DateOffset, dateutil.relativedelta
267
268 Examples
269 --------
270 ts.first('10D') -> First 10 days
271
272 Returns
273 -------
274 subset : type of caller
275 """
276 from pandas.tseries.frequencies import to_offset
277 if not isinstance(self.index, DatetimeIndex):
278 raise NotImplementedError
279
280 if len(self.index) == 0:
281 return self
282
283 offset = to_offset(offset)
284 end_date = end = self.index[0] + offset
285
286 # Tick-like, e.g. 3 weeks
287 if not offset.isAnchored() and hasattr(offset, '_inc'):
288 if end_date in self.index:
289 end = self.index.searchsorted(end_date, side='left')
290
291 return self.ix[:end]
292
293 def last(self, offset):
294 """
295 Convenience method for subsetting final periods of time series data
296 based on a date offset
297
298 Parameters
299 ----------
300 offset : string, DateOffset, dateutil.relativedelta
301
302 Examples
303 --------
304 ts.last('5M') -> Last 5 months
305
306 Returns
307 -------
308 subset : type of caller
309 """
310 from pandas.tseries.frequencies import to_offset
311 if not isinstance(self.index, DatetimeIndex):
312 raise NotImplementedError
313
314 if len(self.index) == 0:
315 return self
316
317 offset = to_offset(offset)
318
319 start_date = start = self.index[-1] - offset
320 start = self.index.searchsorted(start_date, side='right')
321 return self.ix[start:]
322
323 def select(self, crit, axis=0):
324 """
325 Return data corresponding to axis labels matching criteria
326
327 Parameters
328 ----------
329 crit : function
330 To be called on each index (label). Should return True or False
331 axis : int
332
333 Returns
334 -------
335 selection : type of caller
336 """
337 axis_name = self._get_axis_name(axis)
338 axis = self._get_axis(axis)
339
340 if len(axis) > 0:
341 new_axis = axis[np.asarray([bool(crit(label)) for label in axis])]
342 else:
343 new_axis = axis
344
345 return self.reindex(**{axis_name: new_axis})
346
347 def drop(self, labels, axis=0, level=None):
348 """
349 Return new object with labels in requested axis removed
350
351 Parameters
352 ----------
353 labels : array-like
354 axis : int
355 level : int or name, default None
356 For MultiIndex
357
358 Returns
359 -------
360 dropped : type of caller
361 """
362 axis_name = self._get_axis_name(axis)
363 axis, axis_ = self._get_axis(axis), axis
364
365 if axis.is_unique:
366 if level is not None:
367 if not isinstance(axis, MultiIndex):
368 raise AssertionError('axis must be a MultiIndex')
369 new_axis = axis.drop(labels, level=level)
370 else:
371 new_axis = axis.drop(labels)
372 dropped = self.reindex(**{axis_name: new_axis})
373 try:
374 dropped.axes[axis_].names = axis.names
375 except AttributeError:
376 pass
377 return dropped
378
379 else:
380 if level is not None:
381 if not isinstance(axis, MultiIndex):
382 raise AssertionError('axis must be a MultiIndex')
383 indexer = -lib.ismember(axis.get_level_values(level),
384 set(labels))
385 else:
386 indexer = -axis.isin(labels)
387
388 slicer = [slice(None)] * self.ndim
389 slicer[self._get_axis_number(axis_name)] = indexer
390
391 return self.ix[tuple(slicer)]
392
393 def sort_index(self, axis=0, ascending=True):
394 """
395 Sort object by labels (along an axis)
396
397 Parameters
398 ----------
399 axis : {0, 1}
400 Sort index/rows versus columns
401 ascending : boolean, default True
402 Sort ascending vs. descending
403
404 Returns
405 -------
406 sorted_obj : type of caller
407 """
408 axis = self._get_axis_number(axis)
409 axis_name = self._get_axis_name(axis)
410 labels = self._get_axis(axis)
411
412 sort_index = labels.argsort()
413 if not ascending:
414 sort_index = sort_index[::-1]
415
416 new_axis = labels.take(sort_index)
417 return self.reindex(**{axis_name: new_axis})
418
419 def reindex(self, *args, **kwds):
420 raise NotImplementedError
421
422 def tshift(self, periods=1, freq=None, **kwds):
423 """
424 Shift the time index, using the index's frequency if available
425
426 Parameters
427 ----------
428 periods : int
429 Number of periods to move, can be positive or negative
430 freq : DateOffset, timedelta, or time rule string, default None
431 Increment to use from datetools module or time rule (e.g. 'EOM')
432
433 Notes
434 -----
435 If freq is not specified then tries to use the freq or inferred_freq
436 attributes of the index. If neither of those attributes exist, a
437 ValueError is thrown
438
439 Returns
440 -------
441 shifted : Series
442 """
443 if freq is None:
444 freq = getattr(self.index, 'freq', None)
445
446 if freq is None:
447 freq = getattr(self.index, 'inferred_freq', None)
448
449 if freq is None:
450 msg = 'Freq was not given and was not set in the index'
451 raise ValueError(msg)
452
453 return self.shift(periods, freq, **kwds)
454
455 def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
456 **kwds):
457 """
458 Percent change over given number of periods
459
460 Parameters
461 ----------
462 periods : int, default 1
463 Periods to shift for forming percent change
464 fill_method : str, default 'pad'
465 How to handle NAs before computing percent changes
466 limit : int, default None
467 The number of consecutive NAs to fill before stopping
468 freq : DateOffset, timedelta, or offset alias string, optional
469 Increment to use from time series API (e.g. 'M' or BDay())
470
471 Returns
472 -------
473 chg : Series or DataFrame
474 """
475 if fill_method is None:
476 data = self
477 else:
478 data = self.fillna(method=fill_method, limit=limit)
479 rs = data / data.shift(periods=periods, freq=freq, **kwds) - 1
480 if freq is None:
481 mask = com.isnull(self.values)
482 np.putmask(rs.values, mask, np.nan)
483 return rs
484
485 def to_hdf(self, path_or_buf, key, **kwargs):
486 """ activate the HDFStore """
487 from pandas.io import pytables
488 return pytables.to_hdf(path_or_buf, key, self, **kwargs)
489
490 # install the indexers
491 for _name, _indexer in indexing.get_indexers_list():
492 PandasObject._create_indexer(_name,_indexer)
493
494 class NDFrame(PandasObject):
495 """
496 N-dimensional analogue of DataFrame. Store multi-dimensional data in a
497 size-mutable, labeled data structure
498
499 Parameters
500 ----------
501 data : BlockManager
502 axes : list
503 copy : boolean, default False
504 """
505 # kludge
506 _default_stat_axis = 0
507
508 def __init__(self, data, axes=None, copy=False, dtype=None):
509 if dtype is not None:
510 data = data.astype(dtype)
511 elif copy:
512 data = data.copy()
513
514 if axes is not None:
515 for i, ax in enumerate(axes):
516 data = data.reindex_axis(ax, axis=i)
517
518 object.__setattr__(self, '_data', data)
519 object.__setattr__(self, '_item_cache', {})
520
521 def astype(self, dtype, copy = True, raise_on_error = True):
522 """
523 Cast object to input numpy.dtype
524 Return a copy when copy = True (be really careful with this!)
525
526 Parameters
527 ----------
528 dtype : numpy.dtype or Python type
529 raise_on_error : raise on invalid input
530
531 Returns
532 -------
533 casted : type of caller
534 """
535
536 mgr = self._data.astype(dtype, copy = copy, raise_on_error = raise_on_error)
537 return self._constructor(mgr)
538
539 @property
540 def _constructor(self):
541 return NDFrame
542
543 @property
544 def axes(self):
545 return self._data.axes
546
547 def __repr__(self):
548 return 'NDFrame'
549
550 @property
551 def values(self):
552 return self._data.as_matrix()
553
554 @property
555 def ndim(self):
556 return self._data.ndim
557
558 def _set_axis(self, axis, labels):
559 self._data.set_axis(axis, labels)
560 self._clear_item_cache()
561
562 def __getitem__(self, item):
563 return self._get_item_cache(item)
564
565 def _get_item_cache(self, item):
566 cache = self._item_cache
567 try:
568 return cache[item]
569 except Exception:
570 values = self._data.get(item)
571 res = self._box_item_values(item, values)
572 cache[item] = res
573 return res
574
575 def _box_item_values(self, key, values):
576 raise NotImplementedError
577
578 def _clear_item_cache(self):
579 self._item_cache.clear()
580
581 def _set_item(self, key, value):
582 self._data.set(key, value)
583 self._clear_item_cache()
584
585 def __delitem__(self, key):
586 """
587 Delete item
588 """
589 deleted = False
590
591 maybe_shortcut = False
592 if hasattr(self, 'columns') and isinstance(self.columns, MultiIndex):
593 try:
594 maybe_shortcut = key not in self.columns._engine
595 except TypeError:
596 pass
597
598 if maybe_shortcut:
599 # Allow shorthand to delete all columns whose first len(key)
600 # elements match key:
601 if not isinstance(key, tuple):
602 key = (key,)
603 for col in self.columns:
604 if isinstance(col, tuple) and col[:len(key)] == key:
605 del self[col]
606 deleted = True
607 if not deleted:
608 # If the above loop ran and didn't delete anything because
609 # there was no match, this call should raise the appropriate
610 # exception:
611 self._data.delete(key)
612
613 try:
614 del self._item_cache[key]
615 except KeyError:
616 pass
617
618 def get_dtype_counts(self):
619 """ return the counts of dtypes in this frame """
620 from pandas import Series
621 return Series(self._data.get_dtype_counts())
622
623 def pop(self, item):
624 """
625 Return item and drop from frame. Raise KeyError if not found.
626 """
627 result = self[item]
628 del self[item]
629 return result
630
631 def squeeze(self):
632 """ squeeze length 1 dimensions """
633 try:
634 return self.ix[tuple([ slice(None) if len(a) > 1 else a[0] for a in self.axes ])]
635 except:
636 return self
637
638 def _expand_axes(self, key):
639 new_axes = []
640 for k, ax in zip(key, self.axes):
641 if k not in ax:
642 if type(k) != ax.dtype.type:
643 ax = ax.astype('O')
644 new_axes.append(ax.insert(len(ax), k))
645 else:
646 new_axes.append(ax)
647
648 return new_axes
649
650 #----------------------------------------------------------------------
651 # Consolidation of internals
652
653 def _consolidate_inplace(self):
654 self._clear_item_cache()
655 self._data = self._data.consolidate()
656
657 def consolidate(self, inplace=False):
658 """
659 Compute NDFrame with "consolidated" internals (data of each dtype
660 grouped together in a single ndarray). Mainly an internal API function,
661 but available here to the savvy user
662
663 Parameters
664 ----------
665 inplace : boolean, default False
666 If False return new object, otherwise modify existing object
667
668 Returns
669 -------
670 consolidated : type of caller
671 """
672 if inplace:
673 self._consolidate_inplace()
674 else:
675 cons_data = self._data.consolidate()
676 if cons_data is self._data:
677 cons_data = cons_data.copy()
678 return self._constructor(cons_data)
679
680 @property
681 def _is_mixed_type(self):
682 return self._data.is_mixed_type
683
684 @property
685 def _is_numeric_mixed_type(self):
686 return self._data.is_numeric_mixed_type
687
688 def _reindex_axis(self, new_index, fill_method, axis, copy):
689 new_data = self._data.reindex_axis(new_index, axis=axis,
690 method=fill_method, copy=copy)
691
692 if new_data is self._data and not copy:
693 return self
694 else:
695 return self._constructor(new_data)
696
697 def cumsum(self, axis=None, skipna=True):
698 """
699 Return DataFrame of cumulative sums over requested axis.
700
701 Parameters
702 ----------
703 axis : {0, 1}
704 0 for row-wise, 1 for column-wise
705 skipna : boolean, default True
706 Exclude NA/null values. If an entire row/column is NA, the result
707 will be NA
708
709 Returns
710 -------
711 y : DataFrame
712 """
713 if axis is None:
714 axis = self._default_stat_axis
715 else:
716 axis = self._get_axis_number(axis)
717
718 y = self.values.copy()
719 if not issubclass(y.dtype.type, np.integer):
720 mask = np.isnan(self.values)
721
722 if skipna:
723 np.putmask(y, mask, 0.)
724
725 result = y.cumsum(axis)
726
727 if skipna:
728 np.putmask(result, mask, np.nan)
729 else:
730 result = y.cumsum(axis)
731 return self._wrap_array(result, self.axes, copy=False)
732
733 def _wrap_array(self, array, axes, copy=False):
734 raise NotImplementedError
735
736 def cumprod(self, axis=None, skipna=True):
737 """
738 Return cumulative product over requested axis as DataFrame
739
740 Parameters
741 ----------
742 axis : {0, 1}
743 0 for row-wise, 1 for column-wise
744 skipna : boolean, default True
745 Exclude NA/null values. If an entire row/column is NA, the result
746 will be NA
747
748 Returns
749 -------
750 y : DataFrame
751 """
752 if axis is None:
753 axis = self._default_stat_axis
754 else:
755 axis = self._get_axis_number(axis)
756
757 y = self.values.copy()
758 if not issubclass(y.dtype.type, np.integer):
759 mask = np.isnan(self.values)
760
761 if skipna:
762 np.putmask(y, mask, 1.)
763 result = y.cumprod(axis)
764
765 if skipna:
766 np.putmask(result, mask, np.nan)
767 else:
768 result = y.cumprod(axis)
769 return self._wrap_array(result, self.axes, copy=False)
770
771 def cummax(self, axis=None, skipna=True):
772 """
773 Return DataFrame of cumulative max over requested axis.
774
775 Parameters
776 ----------
777 axis : {0, 1}
778 0 for row-wise, 1 for column-wise
779 skipna : boolean, default True
780 Exclude NA/null values. If an entire row/column is NA, the result
781 will be NA
782
783 Returns
784 -------
785 y : DataFrame
786 """
787 if axis is None:
788 axis = self._default_stat_axis
789 else:
790 axis = self._get_axis_number(axis)
791
792 y = self.values.copy()
793 if not issubclass(y.dtype.type, np.integer):
794 mask = np.isnan(self.values)
795
796 if skipna:
797 np.putmask(y, mask, -np.inf)
798
799 result = np.maximum.accumulate(y, axis)
800
801 if skipna:
802 np.putmask(result, mask, np.nan)
803 else:
804 result = np.maximum.accumulate(y, axis)
805 return self._wrap_array(result, self.axes, copy=False)
806
807 def cummin(self, axis=None, skipna=True):
808 """
809 Return DataFrame of cumulative min over requested axis.
810
811 Parameters
812 ----------
813 axis : {0, 1}
814 0 for row-wise, 1 for column-wise
815 skipna : boolean, default True
816 Exclude NA/null values. If an entire row/column is NA, the result
817 will be NA
818
819 Returns
820 -------
821 y : DataFrame
822 """
823 if axis is None:
824 axis = self._default_stat_axis
825 else:
826 axis = self._get_axis_number(axis)
827
828 y = self.values.copy()
829 if not issubclass(y.dtype.type, np.integer):
830 mask = np.isnan(self.values)
831
832 if skipna:
833 np.putmask(y, mask, np.inf)
834
835 result = np.minimum.accumulate(y, axis)
836
837 if skipna:
838 np.putmask(result, mask, np.nan)
839 else:
840 result = np.minimum.accumulate(y, axis)
841 return self._wrap_array(result, self.axes, copy=False)
842
843 def copy(self, deep=True):
844 """
845 Make a copy of this object
846
847 Parameters
848 ----------
849 deep : boolean, default True
850 Make a deep copy, i.e. also copy data
851
852 Returns
853 -------
854 copy : type of caller
855 """
856 data = self._data
857 if deep:
858 data = data.copy()
859 return self._constructor(data)
860
861 def swaplevel(self, i, j, axis=0):
862 """
863 Swap levels i and j in a MultiIndex on a particular axis
864
865 Parameters
866 ----------
867 i, j : int, string (can be mixed)
868 Level of index to be swapped. Can pass level name as string.
869
870 Returns
871 -------
872 swapped : type of caller (new object)
873 """
874 axis = self._get_axis_number(axis)
875 result = self.copy()
876 labels = result._data.axes[axis]
877 result._data.set_axis(axis, labels.swaplevel(i, j))
878 return result
879
880 def add_prefix(self, prefix):
881 """
882 Concatenate prefix string with panel items names.
883
884 Parameters
885 ----------
886 prefix : string
887
888 Returns
889 -------
890 with_prefix : type of caller
891 """
892 new_data = self._data.add_prefix(prefix)
893 return self._constructor(new_data)
894
895 def add_suffix(self, suffix):
896 """
897 Concatenate suffix string with panel items names
898
899 Parameters
900 ----------
901 suffix : string
902
903 Returns
904 -------
905 with_suffix : type of caller
906 """
907 new_data = self._data.add_suffix(suffix)
908 return self._constructor(new_data)
909
910 def rename_axis(self, mapper, axis=0, copy=True):
911 """
912 Alter index and / or columns using input function or functions.
913 Function / dict values must be unique (1-to-1). Labels not contained in
914 a dict / Series will be left as-is.
915
916 Parameters
917 ----------
918 mapper : dict-like or function, optional
919 axis : int, default 0
920 copy : boolean, default True
921 Also copy underlying data
922
923 See also
924 --------
925 DataFrame.rename
926
927 Returns
928 -------
929 renamed : type of caller
930 """
931 # should move this at some point
932 from pandas.core.series import _get_rename_function
933
934 mapper_f = _get_rename_function(mapper)
935
936 axis = self._get_axis_number(axis)
937 if axis == 0:
938 new_data = self._data.rename_items(mapper_f, copydata=copy)
939 else:
940 new_data = self._data.rename_axis(mapper_f, axis=axis)
941 if copy:
942 new_data = new_data.copy()
943
944 return self._constructor(new_data)
945
946 def take(self, indices, axis=0, convert=True):
947 """
948 Analogous to ndarray.take
949
950 Parameters
951 ----------
952 indices : list / array of ints
953 axis : int, default 0
954 convert : translate neg to pos indices (default)
955
956 Returns
957 -------
958 taken : type of caller
959 """
960
961 # check/convert indices here
962 if convert:
963 axis = self._get_axis_number(axis)
964 indices = _maybe_convert_indices(indices, len(self._get_axis(axis)))
965
966 if axis == 0:
967 labels = self._get_axis(axis)
968 new_items = labels.take(indices)
969 new_data = self._data.reindex_axis(new_items, axis=0)
970 else:
971 new_data = self._data.take(indices, axis=axis, verify=False)
972 return self._constructor(new_data)
973
974 def tz_convert(self, tz, axis=0, copy=True):
975 """
976 Convert TimeSeries to target time zone. If it is time zone naive, it
977 will be localized to the passed time zone.
978
979 Parameters
980 ----------
981 tz : string or pytz.timezone object
982 copy : boolean, default True
983 Also make a copy of the underlying data
984
985 Returns
986 -------
987 """
988 axis = self._get_axis_number(axis)
989 ax = self._get_axis(axis)
990
991 if not hasattr(ax, 'tz_convert'):
992 ax_name = self._get_axis_name(axis)
993 raise TypeError('%s is not a valid DatetimeIndex or PeriodIndex' %
994 ax_name)
995
996 new_data = self._data
997 if copy:
998 new_data = new_data.copy()
999
1000 new_obj = self._constructor(new_data)
1001 new_ax = ax.tz_convert(tz)
1002
1003 if axis == 0:
1004 new_obj._set_axis(1, new_ax)
1005 elif axis == 1:
1006 new_obj._set_axis(0, new_ax)
1007 self._clear_item_cache()
1008
1009 return new_obj
1010
1011 def tz_localize(self, tz, axis=0, copy=True):
1012 """
1013 Localize tz-naive TimeSeries to target time zone
1014
1015 Parameters
1016 ----------
1017 tz : string or pytz.timezone object
1018 copy : boolean, default True
1019 Also make a copy of the underlying data
1020
1021 Returns
1022 -------
1023 """
1024 axis = self._get_axis_number(axis)
1025 ax = self._get_axis(axis)
1026
1027 if not hasattr(ax, 'tz_localize'):
1028 ax_name = self._get_axis_name(axis)
1029 raise TypeError('%s is not a valid DatetimeIndex or PeriodIndex' %
1030 ax_name)
1031
1032 new_data = self._data
1033 if copy:
1034 new_data = new_data.copy()
1035
1036 new_obj = self._constructor(new_data)
1037 new_ax = ax.tz_localize(tz)
1038
1039 if axis == 0:
1040 new_obj._set_axis(1, new_ax)
1041 elif axis == 1:
1042 new_obj._set_axis(0, new_ax)
1043 self._clear_item_cache()
1044
1045 return new_obj
1046
1047 # Good for either Series or DataFrame
1048
1049
1050 def truncate(self, before=None, after=None, copy=True):
1051 """Function truncate a sorted DataFrame / Series before and/or after
1052 some particular dates.
1053
1054 Parameters
1055 ----------
1056 before : date
1057 Truncate before date
1058 after : date
1059 Truncate after date
1060 copy : boolean, default True
1061
1062 Returns
1063 -------
1064 truncated : type of caller
1065 """
1066 from pandas.tseries.tools import to_datetime
1067 before = to_datetime(before)
1068 after = to_datetime(after)
1069
1070 if before is not None and after is not None:
1071 if before > after:
1072 raise AssertionError('Truncate: %s must be after %s' %
1073 (before, after))
1074
1075 result = self.ix[before:after]
1076
1077 if isinstance(self.index, MultiIndex):
1078 result.index = self.index.truncate(before, after)
1079
1080 if copy:
1081 result = result.copy()
1082
1083 return result
1084
[end of pandas/core/generic.py]
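As a quick illustration of the `pct_change` formula defined above (`data / data.shift(periods) - 1`), here is a minimal sketch with made-up numbers:

```python
import pandas as pd

s = pd.Series([100.0, 110.0, 121.0])

# pct_change divides by the shifted series and subtracts one,
# so each step below is a 10% increase.
s.pct_change()        # -> [NaN, 0.10, 0.10]
s / s.shift(1) - 1    # equivalent spelling of the same computation
```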
[start of pandas/tseries/util.py]
1 import numpy as np
2
3 import pandas as pd
4
5 import pandas.core.common as com
6 from pandas.core.frame import DataFrame
7 import pandas.core.nanops as nanops
8
9
10 def pivot_annual(series, freq=None):
11 """
12 Group a series by years, taking leap years into account.
13
14 The output has as many rows as distinct years in the original series,
15 and as many columns as the length of a leap year in the units corresponding
16 to the original frequency (366 for daily frequency, 366*24 for hourly...).
17 The first column of the output corresponds to Jan. 1st, 00:00:00,
18 while the last column corresponds to Dec, 31st, 23:59:59.
19 Entries corresponding to Feb. 29th are masked for non-leap years.
20
21 For example, if the initial series has a daily frequency, the 59th column
22 of the output always corresponds to Feb. 28th, the 61st column to Mar. 1st,
23 and the 60th column is masked for non-leap years.
24 With an hourly initial frequency, the (59*24)th column of the output always
25 corresponds to Feb. 28th 23:00, the (61*24)th column to Mar. 1st, 00:00, and
26 the 24 columns between (59*24) and (61*24) are masked.
27
28 If the original frequency is less than daily, the output is equivalent to
29 ``series.convert('A', func=None)``.
30
31 Parameters
32 ----------
33 series : TimeSeries
34 freq : string or None, default None
35
36 Returns
37 -------
38 annual : DataFrame
39 """
40 index = series.index
41 year = index.year
42 years = nanops.unique1d(year)
43
44 if freq is not None:
45 freq = freq.upper()
46 else:
47 freq = series.index.freq
48
49 if freq == 'D':
50 width = 366
51 offset = index.dayofyear - 1
52
53 # adjust for leap year
54 offset[(-isleapyear(year)) & (offset >= 59)] += 1
55
56 columns = range(1, 367)
57 # todo: strings like 1/1, 1/25, etc.?
58 elif freq in ('M', 'BM'):
59 width = 12
60 offset = index.month - 1
61 columns = range(1, 13)
62 elif freq == 'H':
63 width = 8784
64 grouped = series.groupby(series.index.year)
65 defaulted = grouped.apply(lambda x: x.reset_index(drop=True))
66 defaulted.index = defaulted.index.droplevel(0)
67 offset = np.asarray(defaulted.index)
68 offset[-isleapyear(year) & (offset >= 1416)] += 24
69 columns = range(1, 8785)
70 else:
71 raise NotImplementedError(freq)
72
73 flat_index = (year - years.min()) * width + offset
74 flat_index = com._ensure_platform_int(flat_index)
75
76 values = np.empty((len(years), width))
77 values.fill(np.nan)
78 values.put(flat_index, series.values)
79
80 return DataFrame(values, index=years, columns=columns)
81
82
83 def isleapyear(year):
84 """
85 Returns true if year is a leap year.
86
87 Parameters
88 ----------
89 year : integer / sequence
90 A given (list of) year(s).
91 """
92 year = np.asarray(year)
93 return np.logical_or(year % 400 == 0,
94 np.logical_and(year % 4 == 0, year % 100 > 0))
95
[end of pandas/tseries/util.py]
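The `pivot_annual` docstring above describes the output layout; a minimal usage sketch (hypothetical daily data spanning a leap and a non-leap year) could look like:

```python
import numpy as np
import pandas as pd
from pandas.tseries.util import pivot_annual

# 731 daily observations: all of 2000 (a leap year) plus all of 2001.
ts = pd.Series(np.arange(731.0),
               index=pd.date_range('2000-01-01', periods=731, freq='D'))

annual = pivot_annual(ts, freq='D')
# Rows are the years 2000 and 2001; columns run 1..366 (day of year),
# with the Feb. 29th slot left as NaN for the non-leap year.
```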
[start of vb_suite/parser.py]
1 from vbench.api import Benchmark
2 from datetime import datetime
3
4 common_setup = """from pandas_vb_common import *
5 from pandas import read_csv, read_table
6 """
7
8 setup = common_setup + """
9 import os
10 N = 10000
11 K = 8
12 df = DataFrame(np.random.randn(N, K) * np.random.randint(100, 10000, (N, K)))
13 df.to_csv('test.csv', sep='|')
14 """
15
16 read_csv_vb = Benchmark("read_csv('test.csv', sep='|')", setup,
17 cleanup="os.remove('test.csv')",
18 start_date=datetime(2012, 5, 7))
19
20
21 setup = common_setup + """
22 import os
23 N = 10000
24 K = 8
25 format = lambda x: '{:,}'.format(x)
26 df = DataFrame(np.random.randn(N, K) * np.random.randint(100, 10000, (N, K)))
27 df = df.applymap(format)
28 df.to_csv('test.csv', sep='|')
29 """
30
31 read_csv_thou_vb = Benchmark("read_csv('test.csv', sep='|', thousands=',')",
32 setup,
33 cleanup="os.remove('test.csv')",
34 start_date=datetime(2012, 5, 7))
35
36 setup = common_setup + """
37 data = ['A,B,C']
38 data = data + ['1,2,3 # comment'] * 100000
39 data = '\\n'.join(data)
40 """
41
42 stmt = "read_csv(StringIO(data), comment='#')"
43 read_csv_comment2 = Benchmark(stmt, setup,
44 start_date=datetime(2011, 11, 1))
45
46 setup = common_setup + """
47 from cStringIO import StringIO
48 import os
49 N = 10000
50 K = 8
51 data = '''\
52 KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
53 KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
54 KORD,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000
55 KORD,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000
56 KORD,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000
57 '''
58 data = data * 200
59 """
60 cmd = ("read_table(StringIO(data), sep=',', header=None, "
61 "parse_dates=[[1,2], [1,3]])")
62 sdate = datetime(2012, 5, 7)
63 read_table_multiple_date = Benchmark(cmd, setup, start_date=sdate)
64
65 setup = common_setup + """
66 from cStringIO import StringIO
67 import os
68 N = 10000
69 K = 8
70 data = '''\
71 KORD,19990127 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
72 KORD,19990127 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
73 KORD,19990127 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000
74 KORD,19990127 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000
75 KORD,19990127 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000
76 '''
77 data = data * 200
78 """
79 cmd = "read_table(StringIO(data), sep=',', header=None, parse_dates=[1])"
80 sdate = datetime(2012, 5, 7)
81 read_table_multiple_date_baseline = Benchmark(cmd, setup, start_date=sdate)
82
[end of vb_suite/parser.py]
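The `Benchmark` objects above pair a statement with a setup string for vbench; outside of vbench, the first measurement can be approximated directly with `timeit` (a standalone sketch, not part of the benchmark suite):

```python
import os
import timeit

import numpy as np
from pandas import DataFrame, read_csv

# Same setup as read_csv_vb: an N x K frame written out with '|' separators.
N, K = 10000, 8
df = DataFrame(np.random.randn(N, K) * np.random.randint(100, 10000, (N, K)))
df.to_csv('test.csv', sep='|')

print(timeit.timeit("read_csv('test.csv', sep='|')",
                    globals=globals(), number=10))
os.remove('test.csv')
```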
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
d749b91c5a594cad7306936385452bed084f3aa8
|
BUG: datetime selection in a DataFrame should work in the where
I think something broke:
see: http://stackoverflow.com/questions/15927451/filter-on-pandas-dataframe-with-datetime-columns-raises-error/15927797?noredirect=1#comment22689922_15927797
This is wrong (should give rows > 0)
```
In [3]: df = pd.DataFrame(dict(A = pd.date_range('20130102',periods=5), B = pd.date_range('20130104',periods=5)))
In [4]: df
Out[4]:
A B
0 2013-01-02 00:00:00 2013-01-04 00:00:00
1 2013-01-03 00:00:00 2013-01-05 00:00:00
2 2013-01-04 00:00:00 2013-01-06 00:00:00
3 2013-01-05 00:00:00 2013-01-07 00:00:00
4 2013-01-06 00:00:00 2013-01-08 00:00:00
In [5]: df[df>pd.Timestamp('20130103')]
Out[5]:
A B
0 2013-01-02 00:00:00 2013-01-04 00:00:00
1 2013-01-03 00:00:00 2013-01-05 00:00:00
2 2013-01-04 00:00:00 2013-01-06 00:00:00
3 2013-01-05 00:00:00 2013-01-07 00:00:00
4 2013-01-06 00:00:00 2013-01-08 00:00:00
```
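
For reference, the equivalent explicit form of this selection is a `where` with a boolean mask; the intended behaviour (sketched here, not output from any particular version) is that entries not greater than the timestamp come back as `NaT`:

```python
mask = df > pd.Timestamp('20130103')   # boolean DataFrame, elementwise comparison
df.where(mask)                         # values on or before 2013-01-03 should become NaT
```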
|
2013-04-10T15:01:47Z
|
<patch>
diff --git a/RELEASE.rst b/RELEASE.rst
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -292,6 +292,7 @@ pandas 0.11.0
spacing (GH3258_)
- fixed pretty priniting of sets (GH3294_)
- Panel() and Panel.from_dict() now respects ordering when give OrderedDict (GH3303_)
+ - DataFrame where with a datetimelike incorrectly selecting (GH3311_)
.. _GH3294: https://github.com/pydata/pandas/issues/3294
.. _GH622: https://github.com/pydata/pandas/issues/622
@@ -400,6 +401,7 @@ pandas 0.11.0
.. _GH3258: https://github.com/pydata/pandas/issues/3258
.. _GH3283: https://github.com/pydata/pandas/issues/3283
.. _GH2919: https://github.com/pydata/pandas/issues/2919
+.. _GH3311: https://github.com/pydata/pandas/issues/3311
pandas 0.10.1
=============
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -284,6 +284,14 @@ def _try_cast_result(self, result):
we may have roundtripped thru object in the mean-time """
return result
+ def _try_coerce_args(self, values, other):
+ """ provide coercion to our input arguments """
+ return values, other
+
+ def _try_coerce_result(self, result):
+ """ reverse of try_coerce_args """
+ return result
+
def to_native_types(self, slicer=None, na_rep='', **kwargs):
""" convert to our native types format, slicing if desired """
@@ -454,9 +462,10 @@ def eval(self, func, other, raise_on_error = True, try_cast = False):
values = values.T
is_transposed = True
+ values, other = self._try_coerce_args(values, other)
args = [ values, other ]
try:
- result = func(*args)
+ result = self._try_coerce_result(func(*args))
except (Exception), detail:
if raise_on_error:
raise TypeError('Could not operate [%s] with block values [%s]'
@@ -529,8 +538,9 @@ def func(c,v,o):
if c.ravel().all():
return v
+ v, o = self._try_coerce_args(v, o)
try:
- return expressions.where(c, v, o, raise_on_error=True)
+ return self._try_coerce_result(expressions.where(c, v, o, raise_on_error=True))
except (Exception), detail:
if raise_on_error:
raise TypeError('Could not operate [%s] with block values [%s]'
@@ -735,6 +745,29 @@ def _try_cast(self, element):
except:
return element
+ def _try_coerce_args(self, values, other):
+ """ provide coercion to our input arguments
+ we are going to compare vs i8, so coerce to integer
+ values is always ndarra like, other may not be """
+ values = values.view('i8')
+ if isinstance(other, datetime):
+ other = lib.Timestamp(other).asm8.view('i8')
+ elif isnull(other):
+ other = tslib.iNaT
+ else:
+ other = other.view('i8')
+
+ return values, other
+
+ def _try_coerce_result(self, result):
+ """ reverse of try_coerce_args """
+ if isinstance(result, np.ndarray):
+ if result.dtype == 'i8':
+ result = tslib.array_to_datetime(result.astype(object).ravel()).reshape(result.shape)
+ elif isinstance(result, np.integer):
+ result = lib.Timestamp(result)
+ return result
+
def to_native_types(self, slicer=None, na_rep=None, **kwargs):
""" convert to our native types format, slicing if desired """
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-19039
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Roundtrip orient for JSON
Add a round-trippable json orient (maybe "roundtrip") to provide enough metadata to reconstruct a frame or series with 100% fidelity. As discussed in #9130
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td>
13 </tr>
14 <td></td>
15 <td><img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /></td>
16 </tr>
17 <tr>
18 <td>Package Status</td>
19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
20 </tr>
21 <tr>
22 <td>License</td>
23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td>
24 </tr>
25 <tr>
26 <td>Build Status</td>
27 <td>
28 <a href="https://travis-ci.org/pandas-dev/pandas">
29 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td></td>
35 <td>
36 <a href="https://circleci.com/gh/pandas-dev/pandas">
37 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td></td>
43 <td>
44 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
45 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td>Coverage</td>
51 <td><img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /></td>
52 </tr>
53 <tr>
54 <td>Conda</td>
55 <td>
56 <a href="https://pandas.pydata.org">
57 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda default downloads" />
58 </a>
59 </td>
60 </tr>
61 <tr>
62 <td>Conda-forge</td>
63 <td>
64 <a href="https://pandas.pydata.org">
65 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
66 </a>
67 </td>
68 </tr>
69 <tr>
70 <td>PyPI</td>
71 <td>
72 <a href="https://pypi.python.org/pypi/pandas/">
73 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" />
74 </a>
75 </td>
76 </tr>
77 </table>
78
79 [](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
80
81 ## What is it
82
83 **pandas** is a Python package providing fast, flexible, and expressive data
84 structures designed to make working with "relational" or "labeled" data both
85 easy and intuitive. It aims to be the fundamental high-level building block for
86 doing practical, **real world** data analysis in Python. Additionally, it has
87 the broader goal of becoming **the most powerful and flexible open source data
88 analysis / manipulation tool available in any language**. It is already well on
89 its way toward this goal.
90
91 ## Main Features
92 Here are just a few of the things that pandas does well:
93
94 - Easy handling of [**missing data**][missing-data] (represented as
95 `NaN`) in floating point as well as non-floating point data
96 - Size mutability: columns can be [**inserted and
97 deleted**][insertion-deletion] from DataFrame and higher dimensional
98 objects
99 - Automatic and explicit [**data alignment**][alignment]: objects can
100 be explicitly aligned to a set of labels, or the user can simply
101 ignore the labels and let `Series`, `DataFrame`, etc. automatically
102 align the data for you in computations
103 - Powerful, flexible [**group by**][groupby] functionality to perform
104 split-apply-combine operations on data sets, for both aggregating
105 and transforming data
106 - Make it [**easy to convert**][conversion] ragged,
107 differently-indexed data in other Python and NumPy data structures
108 into DataFrame objects
109 - Intelligent label-based [**slicing**][slicing], [**fancy
110 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
111 large data sets
112 - Intuitive [**merging**][merging] and [**joining**][joining] data
113 sets
114 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
115 data sets
116 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
117 labels per tick)
118 - Robust IO tools for loading data from [**flat files**][flat-files]
119 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
120 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
121 - [**Time series**][timeseries]-specific functionality: date range
122 generation and frequency conversion, moving window statistics,
123 moving window linear regressions, date shifting and lagging, etc.
124
125
126 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
127 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
128 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
129 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
130 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
131 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
132 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
133 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
134 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
135 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
136 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
137 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
138 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
139 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
140 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
141 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
142 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
143 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
144
145 ## Where to get it
146 The source code is currently hosted on GitHub at:
147 https://github.com/pandas-dev/pandas
148
149 Binary installers for the latest released version are available at the [Python
150 package index](https://pypi.python.org/pypi/pandas) and on conda.
151
152 ```sh
153 # conda
154 conda install pandas
155 ```
156
157 ```sh
158 # or PyPI
159 pip install pandas
160 ```
161
162 ## Dependencies
163 - [NumPy](http://www.numpy.org): 1.9.0 or higher
164 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
165 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
166
167 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
168 for recommended and optional dependencies.
169
170 ## Installation from sources
171 To install pandas from source you need Cython in addition to the normal
172 dependencies above. Cython can be installed from pypi:
173
174 ```sh
175 pip install cython
176 ```
177
178 In the `pandas` directory (same one where you found this file after
179 cloning the git repo), execute:
180
181 ```sh
182 python setup.py install
183 ```
184
185 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
186
187 ```sh
188 python setup.py develop
189 ```
190
191 Alternatively, you can use `pip` if you want all the dependencies pulled
192 in automatically (the `-e` option is for installing it in [development
193 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
194
195 ```sh
196 pip install -e .
197 ```
198
199 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
200
201 ## License
202 [BSD 3](LICENSE)
203
204 ## Documentation
205 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
206
207 ## Background
208 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
209 has been under active development since then.
210
211 ## Getting Help
212
213 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
214 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
215
216 ## Discussion and Development
217 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
218
219 ## Contributing to pandas
220 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
221
222 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
223
224 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [Difficulty Novice](https://github.com/pandas-dev/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22) where you could start out.
225
226 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
227
228 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
229
[end of README.md]
[start of asv_bench/benchmarks/io/json.py]
1 import numpy as np
2 import pandas.util.testing as tm
3 from pandas import DataFrame, date_range, timedelta_range, concat, read_json
4
5 from ..pandas_vb_common import setup, BaseIO # noqa
6
7
8 class ReadJSON(BaseIO):
9
10 goal_time = 0.2
11 fname = "__test__.json"
12 params = (['split', 'index', 'records'], ['int', 'datetime'])
13 param_names = ['orient', 'index']
14
15 def setup(self, orient, index):
16 N = 100000
17 indexes = {'int': np.arange(N),
18 'datetime': date_range('20000101', periods=N, freq='H')}
19 df = DataFrame(np.random.randn(N, 5),
20 columns=['float_{}'.format(i) for i in range(5)],
21 index=indexes[index])
22 df.to_json(self.fname, orient=orient)
23
24 def time_read_json(self, orient, index):
25 read_json(self.fname, orient=orient)
26
27
28 class ReadJSONLines(BaseIO):
29
30 goal_time = 0.2
31 fname = "__test_lines__.json"
32 params = ['int', 'datetime']
33 param_names = ['index']
34
35 def setup(self, index):
36 N = 100000
37 indexes = {'int': np.arange(N),
38 'datetime': date_range('20000101', periods=N, freq='H')}
39 df = DataFrame(np.random.randn(N, 5),
40 columns=['float_{}'.format(i) for i in range(5)],
41 index=indexes[index])
42 df.to_json(self.fname, orient='records', lines=True)
43
44 def time_read_json_lines(self, index):
45 read_json(self.fname, orient='records', lines=True)
46
47 def time_read_json_lines_concat(self, index):
48 concat(read_json(self.fname, orient='records', lines=True,
49 chunksize=25000))
50
51 def peakmem_read_json_lines(self, index):
52 read_json(self.fname, orient='records', lines=True)
53
54 def peakmem_read_json_lines_concat(self, index):
55 concat(read_json(self.fname, orient='records', lines=True,
56 chunksize=25000))
57
58
59 class ToJSON(BaseIO):
60
61 goal_time = 0.2
62 fname = "__test__.json"
63 params = ['split', 'columns', 'index']
64 param_names = ['orient']
65
66 def setup(self, orient):
67 N = 10**5
68 ncols = 5
69 index = date_range('20000101', periods=N, freq='H')
70 timedeltas = timedelta_range(start=1, periods=N, freq='s')
71 datetimes = date_range(start=1, periods=N, freq='s')
72 ints = np.random.randint(100000000, size=N)
73 floats = np.random.randn(N)
74 strings = tm.makeStringIndex(N)
75 self.df = DataFrame(np.random.randn(N, ncols), index=np.arange(N))
76 self.df_date_idx = DataFrame(np.random.randn(N, ncols), index=index)
77 self.df_td_int_ts = DataFrame({'td_1': timedeltas,
78 'td_2': timedeltas,
79 'int_1': ints,
80 'int_2': ints,
81 'ts_1': datetimes,
82 'ts_2': datetimes},
83 index=index)
84 self.df_int_floats = DataFrame({'int_1': ints,
85 'int_2': ints,
86 'int_3': ints,
87 'float_1': floats,
88 'float_2': floats,
89 'float_3': floats},
90 index=index)
91 self.df_int_float_str = DataFrame({'int_1': ints,
92 'int_2': ints,
93 'float_1': floats,
94 'float_2': floats,
95 'str_1': strings,
96 'str_2': strings},
97 index=index)
98
99 def time_floats_with_int_index(self, orient):
100 self.df.to_json(self.fname, orient=orient)
101
102 def time_floats_with_dt_index(self, orient):
103 self.df_date_idx.to_json(self.fname, orient=orient)
104
105 def time_delta_int_tstamp(self, orient):
106 self.df_td_int_ts.to_json(self.fname, orient=orient)
107
108 def time_float_int(self, orient):
109 self.df_int_floats.to_json(self.fname, orient=orient)
110
111 def time_float_int_str(self, orient):
112 self.df_int_float_str.to_json(self.fname, orient=orient)
113
114 def time_floats_with_int_idex_lines(self, orient):
115 self.df.to_json(self.fname, orient='records', lines=True)
116
117 def time_floats_with_dt_index_lines(self, orient):
118 self.df_date_idx.to_json(self.fname, orient='records', lines=True)
119
120 def time_delta_int_tstamp_lines(self, orient):
121 self.df_td_int_ts.to_json(self.fname, orient='records', lines=True)
122
123 def time_float_int_lines(self, orient):
124 self.df_int_floats.to_json(self.fname, orient='records', lines=True)
125
126 def time_float_int_str_lines(self, orient):
127 self.df_int_float_str.to_json(self.fname, orient='records', lines=True)
128
[end of asv_bench/benchmarks/io/json.py]
[start of pandas/io/json/json.py]
1 # pylint: disable-msg=E1101,W0613,W0603
2 from itertools import islice
3 import os
4 import numpy as np
5
6 import pandas._libs.json as json
7 from pandas._libs.tslib import iNaT
8 from pandas.compat import StringIO, long, u, to_str
9 from pandas import compat, isna
10 from pandas import Series, DataFrame, to_datetime, MultiIndex
11 from pandas.io.common import (get_filepath_or_buffer, _get_handle,
12 _infer_compression, _stringify_path,
13 BaseIterator)
14 from pandas.io.parsers import _validate_integer
15 from pandas.core.common import AbstractMethodError
16 from pandas.core.reshape.concat import concat
17 from pandas.io.formats.printing import pprint_thing
18 from .normalize import _convert_to_line_delimits
19 from .table_schema import build_table_schema
20 from pandas.core.dtypes.common import is_period_dtype
21
22 loads = json.loads
23 dumps = json.dumps
24
25 TABLE_SCHEMA_VERSION = '0.20.0'
26
27
28 # interface to/from
29 def to_json(path_or_buf, obj, orient=None, date_format='epoch',
30 double_precision=10, force_ascii=True, date_unit='ms',
31 default_handler=None, lines=False, compression=None,
32 index=True):
33
34 if not index and orient not in ['split', 'table']:
35 raise ValueError("'index=False' is only valid when 'orient' is "
36 "'split' or 'table'")
37
38 path_or_buf = _stringify_path(path_or_buf)
39 if lines and orient != 'records':
40 raise ValueError(
41 "'lines' keyword only valid when 'orient' is records")
42
43 if orient == 'table' and isinstance(obj, Series):
44 obj = obj.to_frame(name=obj.name or 'values')
45 if orient == 'table' and isinstance(obj, DataFrame):
46 writer = JSONTableWriter
47 elif isinstance(obj, Series):
48 writer = SeriesWriter
49 elif isinstance(obj, DataFrame):
50 writer = FrameWriter
51 else:
52 raise NotImplementedError("'obj' should be a Series or a DataFrame")
53
54 s = writer(
55 obj, orient=orient, date_format=date_format,
56 double_precision=double_precision, ensure_ascii=force_ascii,
57 date_unit=date_unit, default_handler=default_handler,
58 index=index).write()
59
60 if lines:
61 s = _convert_to_line_delimits(s)
62
63 if isinstance(path_or_buf, compat.string_types):
64 fh, handles = _get_handle(path_or_buf, 'w', compression=compression)
65 try:
66 fh.write(s)
67 finally:
68 fh.close()
69 elif path_or_buf is None:
70 return s
71 else:
72 path_or_buf.write(s)
73
74
75 class Writer(object):
76
77 def __init__(self, obj, orient, date_format, double_precision,
78 ensure_ascii, date_unit, index, default_handler=None):
79 self.obj = obj
80
81 if orient is None:
82 orient = self._default_orient
83
84 self.orient = orient
85 self.date_format = date_format
86 self.double_precision = double_precision
87 self.ensure_ascii = ensure_ascii
88 self.date_unit = date_unit
89 self.default_handler = default_handler
90 self.index = index
91
92 self.is_copy = None
93 self._format_axes()
94
95 def _format_axes(self):
96 raise AbstractMethodError(self)
97
98 def write(self):
99 return self._write(self.obj, self.orient, self.double_precision,
100 self.ensure_ascii, self.date_unit,
101 self.date_format == 'iso', self.default_handler)
102
103 def _write(self, obj, orient, double_precision, ensure_ascii,
104 date_unit, iso_dates, default_handler):
105 return dumps(
106 obj,
107 orient=orient,
108 double_precision=double_precision,
109 ensure_ascii=ensure_ascii,
110 date_unit=date_unit,
111 iso_dates=iso_dates,
112 default_handler=default_handler
113 )
114
115
116 class SeriesWriter(Writer):
117 _default_orient = 'index'
118
119 def _format_axes(self):
120 if not self.obj.index.is_unique and self.orient == 'index':
121 raise ValueError("Series index must be unique for orient="
122 "'{orient}'".format(orient=self.orient))
123
124 def _write(self, obj, orient, double_precision, ensure_ascii,
125 date_unit, iso_dates, default_handler):
126 if not self.index and orient == 'split':
127 obj = {"name": obj.name, "data": obj.values}
128 return super(SeriesWriter, self)._write(obj, orient,
129 double_precision,
130 ensure_ascii, date_unit,
131 iso_dates, default_handler)
132
133
134 class FrameWriter(Writer):
135 _default_orient = 'columns'
136
137 def _format_axes(self):
138         """ try to format axes if they are datelike """
139 if not self.obj.index.is_unique and self.orient in (
140 'index', 'columns'):
141 raise ValueError("DataFrame index must be unique for orient="
142 "'{orient}'.".format(orient=self.orient))
143 if not self.obj.columns.is_unique and self.orient in (
144 'index', 'columns', 'records'):
145 raise ValueError("DataFrame columns must be unique for orient="
146 "'{orient}'.".format(orient=self.orient))
147
148 def _write(self, obj, orient, double_precision, ensure_ascii,
149 date_unit, iso_dates, default_handler):
150 if not self.index and orient == 'split':
151 obj = obj.to_dict(orient='split')
152 del obj["index"]
153 return super(FrameWriter, self)._write(obj, orient,
154 double_precision,
155 ensure_ascii, date_unit,
156 iso_dates, default_handler)
157
158
159 class JSONTableWriter(FrameWriter):
160 _default_orient = 'records'
161
162 def __init__(self, obj, orient, date_format, double_precision,
163 ensure_ascii, date_unit, index, default_handler=None):
164 """
165 Adds a `schema` attribute with the Table Schema, resets
166 the index (can't do in caller, because the schema inference needs
167         to know what the index is), forces orient to records, and forces
168 date_format to 'iso'.
169 """
170 super(JSONTableWriter, self).__init__(
171 obj, orient, date_format, double_precision, ensure_ascii,
172 date_unit, index, default_handler=default_handler)
173
174 if date_format != 'iso':
175 msg = ("Trying to write with `orient='table'` and "
176 "`date_format='{fmt}'`. Table Schema requires dates "
177 "to be formatted with `date_format='iso'`"
178 .format(fmt=date_format))
179 raise ValueError(msg)
180
181 self.schema = build_table_schema(obj, index=self.index)
182
183         # NotImplemented on a column MultiIndex
184 if obj.ndim == 2 and isinstance(obj.columns, MultiIndex):
185 raise NotImplementedError(
186 "orient='table' is not supported for MultiIndex")
187
188 # TODO: Do this timedelta properly in objToJSON.c See GH #15137
189 if ((obj.ndim == 1) and (obj.name in set(obj.index.names)) or
190 len(obj.columns & obj.index.names)):
191 msg = "Overlapping names between the index and columns"
192 raise ValueError(msg)
193
194 obj = obj.copy()
195 timedeltas = obj.select_dtypes(include=['timedelta']).columns
196 if len(timedeltas):
197 obj[timedeltas] = obj[timedeltas].applymap(
198 lambda x: x.isoformat())
199         # Convert PeriodIndex to datetimes before serializing
200 if is_period_dtype(obj.index):
201 obj.index = obj.index.to_timestamp()
202
203 # exclude index from obj if index=False
204 if not self.index:
205 self.obj = obj.reset_index(drop=True)
206 else:
207 self.obj = obj.reset_index(drop=False)
208 self.date_format = 'iso'
209 self.orient = 'records'
210 self.index = index
211
212 def _write(self, obj, orient, double_precision, ensure_ascii,
213 date_unit, iso_dates, default_handler):
214 data = super(JSONTableWriter, self)._write(obj, orient,
215 double_precision,
216 ensure_ascii, date_unit,
217 iso_dates,
218 default_handler)
219 serialized = '{{"schema": {schema}, "data": {data}}}'.format(
220 schema=dumps(self.schema), data=data)
221 return serialized
222
223
224 def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
225 convert_axes=True, convert_dates=True, keep_default_dates=True,
226 numpy=False, precise_float=False, date_unit=None, encoding=None,
227 lines=False, chunksize=None, compression='infer'):
228 """
229 Convert a JSON string to pandas object
230
231 Parameters
232 ----------
233 path_or_buf : a valid JSON string or file-like, default: None
234 The string could be a URL. Valid URL schemes include http, ftp, s3, and
235 file. For file URLs, a host is expected. For instance, a local file
236 could be ``file://localhost/path/to/table.json``
237
238 orient : string,
239 Indication of expected JSON string format.
240 Compatible JSON strings can be produced by ``to_json()`` with a
241 corresponding orient value.
242 The set of possible orients is:
243
244 - ``'split'`` : dict like
245 ``{index -> [index], columns -> [columns], data -> [values]}``
246 - ``'records'`` : list like
247 ``[{column -> value}, ... , {column -> value}]``
248 - ``'index'`` : dict like ``{index -> {column -> value}}``
249 - ``'columns'`` : dict like ``{column -> {index -> value}}``
250 - ``'values'`` : just the values array
251
252 The allowed and default values depend on the value
253 of the `typ` parameter.
254
255 * when ``typ == 'series'``,
256
257 - allowed orients are ``{'split','records','index'}``
258 - default is ``'index'``
259 - The Series index must be unique for orient ``'index'``.
260
261 * when ``typ == 'frame'``,
262
263 - allowed orients are ``{'split','records','index',
264 'columns','values'}``
265 - default is ``'columns'``
266 - The DataFrame index must be unique for orients ``'index'`` and
267 ``'columns'``.
268 - The DataFrame columns must be unique for orients ``'index'``,
269 ``'columns'``, and ``'records'``.
270
271 typ : type of object to recover (series or frame), default 'frame'
272 dtype : boolean or dict, default True
273 If True, infer dtypes, if a dict of column to dtype, then use those,
274 if False, then don't infer dtypes at all, applies only to the data.
275 convert_axes : boolean, default True
276 Try to convert the axes to the proper dtypes.
277 convert_dates : boolean, default True
278         List of columns to parse for dates; if True, then try to parse
279         datelike columns. Default is True; a column label is datelike if
280
281 * it ends with ``'_at'``,
282
283 * it ends with ``'_time'``,
284
285 * it begins with ``'timestamp'``,
286
287 * it is ``'modified'``, or
288
289 * it is ``'date'``
290
291 keep_default_dates : boolean, default True
292 If parsing dates, then parse the default datelike columns
293 numpy : boolean, default False
294 Direct decoding to numpy arrays. Supports numeric data only, but
295 non-numeric column and index labels are supported. Note also that the
296 JSON ordering MUST be the same for each term if numpy=True.
297 precise_float : boolean, default False
298 Set to enable usage of higher precision (strtod) function when
299 decoding string to double values. Default (False) is to use fast but
300 less precise builtin functionality
301 date_unit : string, default None
302 The timestamp unit to detect if converting dates. The default behaviour
303 is to try and detect the correct precision, but if this is not desired
304 then pass one of 's', 'ms', 'us' or 'ns' to force parsing only seconds,
305 milliseconds, microseconds or nanoseconds respectively.
306 lines : boolean, default False
307 Read the file as a json object per line.
308
309 .. versionadded:: 0.19.0
310
311 encoding : str, default is 'utf-8'
312 The encoding to use to decode py3 bytes.
313
314 .. versionadded:: 0.19.0
315
316 chunksize: integer, default None
317 Return JsonReader object for iteration.
318         See the `line-delimited json docs
319 <http://pandas.pydata.org/pandas-docs/stable/io.html#io-jsonl>`_
320 for more information on ``chunksize``.
321 This can only be passed if `lines=True`.
322 If this is None, the file will be read into memory all at once.
323
324 .. versionadded:: 0.21.0
325
326 compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
327 For on-the-fly decompression of on-disk data. If 'infer', then use
328 gzip, bz2, zip or xz if path_or_buf is a string ending in
329 '.gz', '.bz2', '.zip', or 'xz', respectively, and no decompression
330 otherwise. If using 'zip', the ZIP file must contain only one data
331 file to be read in. Set to None for no decompression.
332
333 .. versionadded:: 0.21.0
334
335 Returns
336 -------
337 result : Series or DataFrame, depending on the value of `typ`.
338
339 See Also
340 --------
341 DataFrame.to_json
342
343 Examples
344 --------
345
346 >>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
347 ... index=['row 1', 'row 2'],
348 ... columns=['col 1', 'col 2'])
349
350 Encoding/decoding a Dataframe using ``'split'`` formatted JSON:
351
352 >>> df.to_json(orient='split')
353 '{"columns":["col 1","col 2"],
354 "index":["row 1","row 2"],
355 "data":[["a","b"],["c","d"]]}'
356 >>> pd.read_json(_, orient='split')
357 col 1 col 2
358 row 1 a b
359 row 2 c d
360
361 Encoding/decoding a Dataframe using ``'index'`` formatted JSON:
362
363 >>> df.to_json(orient='index')
364 '{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
365 >>> pd.read_json(_, orient='index')
366 col 1 col 2
367 row 1 a b
368 row 2 c d
369
370 Encoding/decoding a Dataframe using ``'records'`` formatted JSON.
371 Note that index labels are not preserved with this encoding.
372
373 >>> df.to_json(orient='records')
374 '[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
375 >>> pd.read_json(_, orient='records')
376 col 1 col 2
377 0 a b
378 1 c d
379
380 Encoding with Table Schema
381
382 >>> df.to_json(orient='table')
383 '{"schema": {"fields": [{"name": "index", "type": "string"},
384 {"name": "col 1", "type": "string"},
385 {"name": "col 2", "type": "string"}],
386 "primaryKey": "index",
387 "pandas_version": "0.20.0"},
388 "data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
389 {"index": "row 2", "col 1": "c", "col 2": "d"}]}'
390 """
391
392 compression = _infer_compression(path_or_buf, compression)
393 filepath_or_buffer, _, compression = get_filepath_or_buffer(
394 path_or_buf, encoding=encoding, compression=compression,
395 )
396
397 json_reader = JsonReader(
398 filepath_or_buffer, orient=orient, typ=typ, dtype=dtype,
399 convert_axes=convert_axes, convert_dates=convert_dates,
400 keep_default_dates=keep_default_dates, numpy=numpy,
401 precise_float=precise_float, date_unit=date_unit, encoding=encoding,
402 lines=lines, chunksize=chunksize, compression=compression,
403 )
404
405 if chunksize:
406 return json_reader
407
408 return json_reader.read()
409
410
411 class JsonReader(BaseIterator):
412 """
413 JsonReader provides an interface for reading in a JSON file.
414
415 If initialized with ``lines=True`` and ``chunksize``, can be iterated over
416 ``chunksize`` lines at a time. Otherwise, calling ``read`` reads in the
417 whole document.
418 """
419 def __init__(self, filepath_or_buffer, orient, typ, dtype, convert_axes,
420 convert_dates, keep_default_dates, numpy, precise_float,
421 date_unit, encoding, lines, chunksize, compression):
422
423 self.path_or_buf = filepath_or_buffer
424 self.orient = orient
425 self.typ = typ
426 self.dtype = dtype
427 self.convert_axes = convert_axes
428 self.convert_dates = convert_dates
429 self.keep_default_dates = keep_default_dates
430 self.numpy = numpy
431 self.precise_float = precise_float
432 self.date_unit = date_unit
433 self.encoding = encoding
434 self.compression = compression
435 self.lines = lines
436 self.chunksize = chunksize
437 self.nrows_seen = 0
438 self.should_close = False
439
440 if self.chunksize is not None:
441 self.chunksize = _validate_integer("chunksize", self.chunksize, 1)
442 if not self.lines:
443 raise ValueError("chunksize can only be passed if lines=True")
444
445 data = self._get_data_from_filepath(filepath_or_buffer)
446 self.data = self._preprocess_data(data)
447
448 def _preprocess_data(self, data):
449 """
450 At this point, the data either has a `read` attribute (e.g. a file
451 object or a StringIO) or is a string that is a JSON document.
452
453 If self.chunksize, we prepare the data for the `__next__` method.
454 Otherwise, we read it into memory for the `read` method.
455 """
456 if hasattr(data, 'read') and not self.chunksize:
457 data = data.read()
458 if not hasattr(data, 'read') and self.chunksize:
459 data = StringIO(data)
460
461 return data
462
463 def _get_data_from_filepath(self, filepath_or_buffer):
464 """
465 read_json accepts three input types:
466 1. filepath (string-like)
467 2. file-like object (e.g. open file object, StringIO)
468 3. JSON string
469
470 This method turns (1) into (2) to simplify the rest of the processing.
471 It returns input types (2) and (3) unchanged.
472 """
473
474 data = filepath_or_buffer
475
476 exists = False
477 if isinstance(data, compat.string_types):
478 try:
479 exists = os.path.exists(filepath_or_buffer)
480 # gh-5874: if the filepath is too long will raise here
481 except (TypeError, ValueError):
482 pass
483
484 if exists or self.compression is not None:
485 data, _ = _get_handle(filepath_or_buffer, 'r',
486 encoding=self.encoding,
487 compression=self.compression)
488 self.should_close = True
489 self.open_stream = data
490
491 return data
492
493 def _combine_lines(self, lines):
494 """Combines a list of JSON objects into one JSON object"""
495 lines = filter(None, map(lambda x: x.strip(), lines))
496 return '[' + ','.join(lines) + ']'
497
498 def read(self):
499 """Read the whole JSON input into a pandas object"""
500 if self.lines and self.chunksize:
501 obj = concat(self)
502 elif self.lines:
503
504 data = to_str(self.data)
505 obj = self._get_object_parser(
506 self._combine_lines(data.split('\n'))
507 )
508 else:
509 obj = self._get_object_parser(self.data)
510 self.close()
511 return obj
512
513 def _get_object_parser(self, json):
514 """parses a json document into a pandas object"""
515 typ = self.typ
516 dtype = self.dtype
517 kwargs = {
518 "orient": self.orient, "dtype": self.dtype,
519 "convert_axes": self.convert_axes,
520 "convert_dates": self.convert_dates,
521 "keep_default_dates": self.keep_default_dates, "numpy": self.numpy,
522 "precise_float": self.precise_float, "date_unit": self.date_unit
523 }
524 obj = None
525 if typ == 'frame':
526 obj = FrameParser(json, **kwargs).parse()
527
528 if typ == 'series' or obj is None:
529 if not isinstance(dtype, bool):
530 dtype = dict(data=dtype)
531 obj = SeriesParser(json, **kwargs).parse()
532
533 return obj
534
535 def close(self):
536 """
537 If we opened a stream earlier, in _get_data_from_filepath, we should
538 close it. If an open stream or file was passed, we leave it open.
539 """
540 if self.should_close:
541 try:
542 self.open_stream.close()
543 except (IOError, AttributeError):
544 pass
545
546 def __next__(self):
547 lines = list(islice(self.data, self.chunksize))
548 if lines:
549 lines_json = self._combine_lines(lines)
550 obj = self._get_object_parser(lines_json)
551
552 # Make sure that the returned objects have the right index.
553 obj.index = range(self.nrows_seen, self.nrows_seen + len(obj))
554 self.nrows_seen += len(obj)
555
556 return obj
557
558 self.close()
559 raise StopIteration
560
561
562 class Parser(object):
563
564 _STAMP_UNITS = ('s', 'ms', 'us', 'ns')
565 _MIN_STAMPS = {
566 's': long(31536000),
567 'ms': long(31536000000),
568 'us': long(31536000000000),
569 'ns': long(31536000000000000)}
570
571 def __init__(self, json, orient, dtype=True, convert_axes=True,
572 convert_dates=True, keep_default_dates=False, numpy=False,
573 precise_float=False, date_unit=None):
574 self.json = json
575
576 if orient is None:
577 orient = self._default_orient
578
579 self.orient = orient
580 self.dtype = dtype
581
582 if orient == "split":
583 numpy = False
584
585 if date_unit is not None:
586 date_unit = date_unit.lower()
587 if date_unit not in self._STAMP_UNITS:
588 raise ValueError('date_unit must be one of {units}'
589 .format(units=self._STAMP_UNITS))
590 self.min_stamp = self._MIN_STAMPS[date_unit]
591 else:
592 self.min_stamp = self._MIN_STAMPS['s']
593
594 self.numpy = numpy
595 self.precise_float = precise_float
596 self.convert_axes = convert_axes
597 self.convert_dates = convert_dates
598 self.date_unit = date_unit
599 self.keep_default_dates = keep_default_dates
600 self.obj = None
601
602 def check_keys_split(self, decoded):
603 "checks that dict has only the appropriate keys for orient='split'"
604 bad_keys = set(decoded.keys()).difference(set(self._split_keys))
605 if bad_keys:
606 bad_keys = ", ".join(bad_keys)
607 raise ValueError(u("JSON data had unexpected key(s): {bad_keys}")
608 .format(bad_keys=pprint_thing(bad_keys)))
609
610 def parse(self):
611
612 # try numpy
613 numpy = self.numpy
614 if numpy:
615 self._parse_numpy()
616
617 else:
618 self._parse_no_numpy()
619
620 if self.obj is None:
621 return None
622 if self.convert_axes:
623 self._convert_axes()
624 self._try_convert_types()
625 return self.obj
626
627 def _convert_axes(self):
628 """ try to convert axes """
629 for axis in self.obj._AXIS_NUMBERS.keys():
630 new_axis, result = self._try_convert_data(
631 axis, self.obj._get_axis(axis), use_dtypes=False,
632 convert_dates=True)
633 if result:
634 setattr(self.obj, axis, new_axis)
635
636 def _try_convert_types(self):
637 raise AbstractMethodError(self)
638
639 def _try_convert_data(self, name, data, use_dtypes=True,
640 convert_dates=True):
641         """ try to parse an ndarray-like into a column by inferring dtype """
642
643 # don't try to coerce, unless a force conversion
644 if use_dtypes:
645 if self.dtype is False:
646 return data, False
647 elif self.dtype is True:
648 pass
649
650 else:
651
652 # dtype to force
653 dtype = (self.dtype.get(name)
654 if isinstance(self.dtype, dict) else self.dtype)
655 if dtype is not None:
656 try:
657 dtype = np.dtype(dtype)
658 return data.astype(dtype), True
659 except (TypeError, ValueError):
660 return data, False
661
662 if convert_dates:
663 new_data, result = self._try_convert_to_date(data)
664 if result:
665 return new_data, True
666
667 result = False
668
669 if data.dtype == 'object':
670
671 # try float
672 try:
673 data = data.astype('float64')
674 result = True
675 except (TypeError, ValueError):
676 pass
677
678 if data.dtype.kind == 'f':
679
680 if data.dtype != 'float64':
681
682 # coerce floats to 64
683 try:
684 data = data.astype('float64')
685 result = True
686 except (TypeError, ValueError):
687 pass
688
689         # don't coerce 0-len data
690 if len(data) and (data.dtype == 'float' or data.dtype == 'object'):
691
692 # coerce ints if we can
693 try:
694 new_data = data.astype('int64')
695 if (new_data == data).all():
696 data = new_data
697 result = True
698 except (TypeError, ValueError):
699 pass
700
701 # coerce ints to 64
702 if data.dtype == 'int':
703
704             # coerce ints to 64
705 try:
706 data = data.astype('int64')
707 result = True
708 except (TypeError, ValueError):
709 pass
710
711 return data, result
712
713 def _try_convert_to_date(self, data):
714         """ try to parse an ndarray-like into a date column
715         try to coerce object in epoch/iso formats and
716         integer/float in epoch formats, return a boolean if parsing
717 was successful """
718
719 # no conversion on empty
720 if not len(data):
721 return data, False
722
723 new_data = data
724 if new_data.dtype == 'object':
725 try:
726 new_data = data.astype('int64')
727 except (TypeError, ValueError, OverflowError):
728 pass
729
730 # ignore numbers that are out of range
731 if issubclass(new_data.dtype.type, np.number):
732 in_range = (isna(new_data.values) | (new_data > self.min_stamp) |
733 (new_data.values == iNaT))
734 if not in_range.all():
735 return data, False
736
737 date_units = (self.date_unit,) if self.date_unit else self._STAMP_UNITS
738 for date_unit in date_units:
739 try:
740 new_data = to_datetime(new_data, errors='raise',
741 unit=date_unit)
742 except ValueError:
743 continue
744 except Exception:
745 break
746 return new_data, True
747 return data, False
748
749 def _try_convert_dates(self):
750 raise AbstractMethodError(self)
751
752
753 class SeriesParser(Parser):
754 _default_orient = 'index'
755 _split_keys = ('name', 'index', 'data')
756
757 def _parse_no_numpy(self):
758
759 json = self.json
760 orient = self.orient
761 if orient == "split":
762 decoded = {str(k): v for k, v in compat.iteritems(
763 loads(json, precise_float=self.precise_float))}
764 self.check_keys_split(decoded)
765 self.obj = Series(dtype=None, **decoded)
766 else:
767 self.obj = Series(
768 loads(json, precise_float=self.precise_float), dtype=None)
769
770 def _parse_numpy(self):
771
772 json = self.json
773 orient = self.orient
774 if orient == "split":
775 decoded = loads(json, dtype=None, numpy=True,
776 precise_float=self.precise_float)
777 decoded = {str(k): v for k, v in compat.iteritems(decoded)}
778 self.check_keys_split(decoded)
779 self.obj = Series(**decoded)
780 elif orient == "columns" or orient == "index":
781 self.obj = Series(*loads(json, dtype=None, numpy=True,
782 labelled=True,
783 precise_float=self.precise_float))
784 else:
785 self.obj = Series(loads(json, dtype=None, numpy=True,
786 precise_float=self.precise_float))
787
788 def _try_convert_types(self):
789 if self.obj is None:
790 return
791 obj, result = self._try_convert_data(
792 'data', self.obj, convert_dates=self.convert_dates)
793 if result:
794 self.obj = obj
795
796
797 class FrameParser(Parser):
798 _default_orient = 'columns'
799 _split_keys = ('columns', 'index', 'data')
800
801 def _parse_numpy(self):
802
803 json = self.json
804 orient = self.orient
805
806 if orient == "columns":
807 args = loads(json, dtype=None, numpy=True, labelled=True,
808 precise_float=self.precise_float)
809 if len(args):
810 args = (args[0].T, args[2], args[1])
811 self.obj = DataFrame(*args)
812 elif orient == "split":
813 decoded = loads(json, dtype=None, numpy=True,
814 precise_float=self.precise_float)
815 decoded = {str(k): v for k, v in compat.iteritems(decoded)}
816 self.check_keys_split(decoded)
817 self.obj = DataFrame(**decoded)
818 elif orient == "values":
819 self.obj = DataFrame(loads(json, dtype=None, numpy=True,
820 precise_float=self.precise_float))
821 else:
822 self.obj = DataFrame(*loads(json, dtype=None, numpy=True,
823 labelled=True,
824 precise_float=self.precise_float))
825
826 def _parse_no_numpy(self):
827
828 json = self.json
829 orient = self.orient
830
831 if orient == "columns":
832 self.obj = DataFrame(
833 loads(json, precise_float=self.precise_float), dtype=None)
834 elif orient == "split":
835 decoded = {str(k): v for k, v in compat.iteritems(
836 loads(json, precise_float=self.precise_float))}
837 self.check_keys_split(decoded)
838 self.obj = DataFrame(dtype=None, **decoded)
839 elif orient == "index":
840 self.obj = DataFrame(
841 loads(json, precise_float=self.precise_float), dtype=None).T
842 else:
843 self.obj = DataFrame(
844 loads(json, precise_float=self.precise_float), dtype=None)
845
846 def _process_converter(self, f, filt=None):
847 """ take a conversion function and possibly recreate the frame """
848
849 if filt is None:
850 filt = lambda col, c: True
851
852 needs_new_obj = False
853 new_obj = dict()
854 for i, (col, c) in enumerate(self.obj.iteritems()):
855 if filt(col, c):
856 new_data, result = f(col, c)
857 if result:
858 c = new_data
859 needs_new_obj = True
860 new_obj[i] = c
861
862 if needs_new_obj:
863
864 # possibly handle dup columns
865 new_obj = DataFrame(new_obj, index=self.obj.index)
866 new_obj.columns = self.obj.columns
867 self.obj = new_obj
868
869 def _try_convert_types(self):
870 if self.obj is None:
871 return
872 if self.convert_dates:
873 self._try_convert_dates()
874
875 self._process_converter(
876 lambda col, c: self._try_convert_data(col, c, convert_dates=False))
877
878 def _try_convert_dates(self):
879 if self.obj is None:
880 return
881
882 # our columns to parse
883 convert_dates = self.convert_dates
884 if convert_dates is True:
885 convert_dates = []
886 convert_dates = set(convert_dates)
887
888 def is_ok(col):
889 """ return if this col is ok to try for a date parse """
890 if not isinstance(col, compat.string_types):
891 return False
892
893 col_lower = col.lower()
894 if (col_lower.endswith('_at') or
895 col_lower.endswith('_time') or
896 col_lower == 'modified' or
897 col_lower == 'date' or
898 col_lower == 'datetime' or
899 col_lower.startswith('timestamp')):
900 return True
901 return False
902
903 self._process_converter(
904 lambda col, c: self._try_convert_to_date(c),
905 lambda col, c: ((self.keep_default_dates and is_ok(col)) or
906 col in convert_dates))
907
[end of pandas/io/json/json.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
f2d8db1acccd73340988af9ad5874252fd5c3967
|
Roundtrip orient for JSON
Add a round-trippable json orient (maybe "roundtrip") to provide enough metadata to reconstruct a frame or series with 100% fidelity. As discussed in #9130
|
This could also support multi-level indexes and panels etc.
@jreback and I are currently using a schema that looks somewhat like this:
``` python
schema = {
'pandas': {
'type': 'DataFrame',
'version': pd.__version__,
'orient': 'records',
'date_unit': 'ms',
'values': df,
'dtypes': {
'str': [dtype.str for dtype in dtypes],
'characters': [dtype.char for dtype in dtypes],
'kind': [dtype.kind for dtype in dtypes]
},
'axes': {
'columns': {
'values': df.columns.values,
'dtype': df.columns.dtype.str,
'names': df.columns.names
},
'index': {
'values': df.index.values,
'dtype': df.index.dtype.str,
'names': df.index.names
}
}
}
}
```
@Komnomnomnom
the idea is to encompass enough metadata to really make a DataFrame round-trippable w/o passing anything, IOW, making it a fully-capable format. Of course it's a 'user-made' one, but that is true of lots of formats.
I suspect lots of people have already implemented this (painfully) by creating their own formats. This is basically trying to set a standard, in that pandas objects become JSON-serializable in a well-defined format.
This _can_ be done simply by using `pandas.io.json.dumps` with this format when writing. @cpcloud mentioned that it might be necessary to support some sniffing in the reader to de-serialize this format (e.g. look for the `pandas` tag, then use the metadata to set the next part of the serialization options).
The idea of the format is that it is pretty straightforward to extend to Series/Panel/Multi-Index.
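For concreteness, here is a minimal sketch (not part of the original thread) of the round trip this metadata is meant to enable, written against the `orient='table'` writer/reader that the patch below implements; the exact dtype behaviour depends on the pandas version:

``` python
import pandas as pd

# Small frame with mixed dtypes and a named index to exercise the metadata.
df = pd.DataFrame({'ints': [1, 2, 3],
                   'floats': [1.5, 2.5, 3.5],
                   'dates': pd.date_range('2018-01-01', periods=3)},
                  index=pd.Index(['a', 'b', 'c'], name='idx'))

payload = df.to_json(orient='table')             # schema + data in one JSON document
restored = pd.read_json(payload, orient='table')

print(restored.index.name)   # 'idx' -- the index name survives the trip
print(restored.dtypes)       # dtypes reconstructed from the embedded schema
```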
|
2018-01-02T21:13:19Z
|
<patch>
diff --git a/doc/source/io.rst b/doc/source/io.rst
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1648,7 +1648,7 @@ with optional parameters:
DataFrame
- default is ``columns``
- - allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``}
+ - allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``, ``table``}
The format of the JSON string
@@ -1732,6 +1732,9 @@ values, index and columns. Name is also included for ``Series``:
dfjo.to_json(orient="split")
sjo.to_json(orient="split")
+**Table oriented** serializes to the JSON `Table Schema`_, allowing for the
+preservation of metadata including but not limited to dtypes and index names.
+
.. note::
Any orient option that encodes to a JSON object will not preserve the ordering of
@@ -1833,7 +1836,7 @@ is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``
DataFrame
- default is ``columns``
- - allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``}
+ - allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``, ``table``}
The format of the JSON string
@@ -1846,6 +1849,8 @@ is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``
``index``; dict like {index -> {column -> value}}
``columns``; dict like {column -> {index -> value}}
``values``; just the values array
+ ``table``; adhering to the JSON `Table Schema`_
+
- ``dtype`` : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don't infer dtypes at all, default is True, apply only to the data
- ``convert_axes`` : boolean, try to convert the axes to the proper dtypes, default is True
@@ -2202,7 +2207,39 @@ A few notes on the generated table schema:
then ``level_<i>`` is used.
-_Table Schema: http://specs.frictionlessdata.io/json-table-schema/
+.. versionadded:: 0.23.0
+
+``read_json`` also accepts ``orient='table'`` as an argument. This allows for
+the preservation of metadata such as dtypes and index names in a
+round-trippable manner.
+
+ .. ipython:: python
+
+ df = pd.DataFrame({'foo': [1, 2, 3, 4],
+ 'bar': ['a', 'b', 'c', 'd'],
+ 'baz': pd.date_range('2018-01-01', freq='d', periods=4),
+ 'qux': pd.Categorical(['a', 'b', 'c', 'c'])
+ }, index=pd.Index(range(4), name='idx'))
+ df
+ df.dtypes
+
+ df.to_json('test.json', orient='table')
+ new_df = pd.read_json('test.json', orient='table')
+ new_df
+ new_df.dtypes
+
+Please note that the string `index` is not supported with the round trip
+format, as it is used by default in ``write_json`` to indicate a missing index
+name.
+
+.. ipython:: python
+
+ df.index.name = 'index'
+ df.to_json('test.json', orient='table')
+ new_df = pd.read_json('test.json', orient='table')
+ print(new_df.index.name)
+
+.. _Table Schema: http://specs.frictionlessdata.io/json-table-schema/
HTML
----
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -145,6 +145,37 @@ Current Behavior
s.rank(na_option='top')
+.. _whatsnew_0230.enhancements.round-trippable_json:
+
+JSON read/write round-trippable with ``orient='table'``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``DataFrame`` can now be written to and subsequently read back via JSON while preserving metadata through usage of the ``orient='table'`` argument (see :issue:`18912` and :issue:`9146`). Previously, none of the available ``orient`` values guaranteed the preservation of dtypes and index names, amongst other metadata.
+
+.. ipython:: python
+
+ df = pd.DataFrame({'foo': [1, 2, 3, 4],
+ 'bar': ['a', 'b', 'c', 'd'],
+ 'baz': pd.date_range('2018-01-01', freq='d', periods=4),
+ 'qux': pd.Categorical(['a', 'b', 'c', 'c'])
+ }, index=pd.Index(range(4), name='idx'))
+ df
+ df.dtypes
+ df.to_json('test.json', orient='table')
+ new_df = pd.read_json('test.json', orient='table')
+ new_df
+ new_df.dtypes
+
+Please note that the string `index` is not supported with the round trip format, as it is used by default in ``write_json`` to indicate a missing index name.
+
+.. ipython:: python
+
+ df.index.name = 'index'
+ df.to_json('test.json', orient='table')
+ new_df = pd.read_json('test.json', orient='table')
+ new_df
+ print(new_df.index.name)
+
.. _whatsnew_0230.enhancements.other:
Other Enhancements
diff --git a/pandas/io/json/json.py b/pandas/io/json/json.py
--- a/pandas/io/json/json.py
+++ b/pandas/io/json/json.py
@@ -16,7 +16,7 @@
from pandas.core.reshape.concat import concat
from pandas.io.formats.printing import pprint_thing
from .normalize import _convert_to_line_delimits
-from .table_schema import build_table_schema
+from .table_schema import build_table_schema, parse_table_schema
from pandas.core.dtypes.common import is_period_dtype
loads = json.loads
@@ -261,13 +261,16 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
* when ``typ == 'frame'``,
- allowed orients are ``{'split','records','index',
- 'columns','values'}``
+ 'columns','values', 'table'}``
- default is ``'columns'``
- The DataFrame index must be unique for orients ``'index'`` and
``'columns'``.
- The DataFrame columns must be unique for orients ``'index'``,
``'columns'``, and ``'records'``.
+ .. versionadded:: 0.23.0
+ 'table' as an allowed value for the ``orient`` argument
+
typ : type of object to recover (series or frame), default 'frame'
dtype : boolean or dict, default True
If True, infer dtypes, if a dict of column to dtype, then use those,
@@ -336,6 +339,15 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
-------
result : Series or DataFrame, depending on the value of `typ`.
+ Notes
+ -----
+ Specific to ``orient='table'``, if a ``DataFrame`` with a literal ``Index``
+ name of `index` gets written with ``write_json``, the subsequent read
+ operation will incorrectly set the ``Index`` name to ``None``. This is
+ because `index` is also used by ``write_json`` to denote a missing
+ ``Index`` name, and the subsequent ``read_json`` operation cannot
+ distinguish between the two.
+
See Also
--------
DataFrame.to_json
@@ -839,6 +851,9 @@ def _parse_no_numpy(self):
elif orient == "index":
self.obj = DataFrame(
loads(json, precise_float=self.precise_float), dtype=None).T
+ elif orient == 'table':
+ self.obj = parse_table_schema(json,
+ precise_float=self.precise_float)
else:
self.obj = DataFrame(
loads(json, precise_float=self.precise_float), dtype=None)
diff --git a/pandas/io/json/table_schema.py b/pandas/io/json/table_schema.py
--- a/pandas/io/json/table_schema.py
+++ b/pandas/io/json/table_schema.py
@@ -3,6 +3,9 @@
http://specs.frictionlessdata.io/json-table-schema/
"""
+import pandas._libs.json as json
+from pandas import DataFrame
+from pandas.api.types import CategoricalDtype
from pandas.core.common import _all_not_none
from pandas.core.dtypes.common import (
is_integer_dtype, is_timedelta64_dtype, is_numeric_dtype,
@@ -10,6 +13,8 @@
is_categorical_dtype, is_period_dtype, is_string_dtype
)
+loads = json.loads
+
def as_json_table_type(x):
"""
@@ -75,7 +80,7 @@ def set_default_names(data):
return data
-def make_field(arr, dtype=None):
+def convert_pandas_type_to_json_field(arr, dtype=None):
dtype = dtype or arr.dtype
if arr.name is None:
name = 'values'
@@ -103,6 +108,69 @@ def make_field(arr, dtype=None):
return field
+def convert_json_field_to_pandas_type(field):
+ """
+ Converts a JSON field descriptor into its corresponding NumPy / pandas type
+
+ Parameters
+ ----------
+ field
+ A JSON field descriptor
+
+ Returns
+ -------
+ dtype
+
+ Raises
+ -----
+ ValueError
+ If the type of the provided field is unknown or currently unsupported
+
+ Examples
+ --------
+ >>> convert_json_field_to_pandas_type({'name': 'an_int',
+ 'type': 'integer'})
+ 'int64'
+ >>> convert_json_field_to_pandas_type({'name': 'a_categorical',
+ 'type': 'any',
+ 'contraints': {'enum': [
+ 'a', 'b', 'c']},
+ 'ordered': True})
+ 'CategoricalDtype(categories=['a', 'b', 'c'], ordered=True)'
+ >>> convert_json_field_to_pandas_type({'name': 'a_datetime',
+ 'type': 'datetime'})
+ 'datetime64[ns]'
+ >>> convert_json_field_to_pandas_type({'name': 'a_datetime_with_tz',
+ 'type': 'datetime',
+ 'tz': 'US/Central'})
+ 'datetime64[ns, US/Central]'
+ """
+ typ = field['type']
+ if typ == 'string':
+ return 'object'
+ elif typ == 'integer':
+ return 'int64'
+ elif typ == 'number':
+ return 'float64'
+ elif typ == 'boolean':
+ return 'bool'
+ elif typ == 'duration':
+ return 'timedelta64'
+ elif typ == 'datetime':
+ if field.get('tz'):
+ return 'datetime64[ns, {tz}]'.format(tz=field['tz'])
+ else:
+ return 'datetime64[ns]'
+ elif typ == 'any':
+ if 'constraints' in field and 'ordered' in field:
+ return CategoricalDtype(categories=field['constraints']['enum'],
+ ordered=field['ordered'])
+ else:
+ return 'object'
+
+ raise ValueError("Unsupported or invalid field type: {}".format(typ))
+
+
def build_table_schema(data, index=True, primary_key=None, version=True):
"""
Create a Table schema from ``data``.
@@ -158,15 +226,15 @@ def build_table_schema(data, index=True, primary_key=None, version=True):
if index:
if data.index.nlevels > 1:
for level in data.index.levels:
- fields.append(make_field(level))
+ fields.append(convert_pandas_type_to_json_field(level))
else:
- fields.append(make_field(data.index))
+ fields.append(convert_pandas_type_to_json_field(data.index))
if data.ndim > 1:
for column, s in data.iteritems():
- fields.append(make_field(s))
+ fields.append(convert_pandas_type_to_json_field(s))
else:
- fields.append(make_field(data))
+ fields.append(convert_pandas_type_to_json_field(data))
schema['fields'] = fields
if index and data.index.is_unique and primary_key is None:
@@ -180,3 +248,65 @@ def build_table_schema(data, index=True, primary_key=None, version=True):
if version:
schema['pandas_version'] = '0.20.0'
return schema
+
+
+def parse_table_schema(json, precise_float):
+ """
+ Builds a DataFrame from a given schema
+
+ Parameters
+ ----------
+ json :
+ A JSON table schema
+ precise_float : boolean
+ Flag controlling precision when decoding string to double values, as
+ dictated by ``read_json``
+
+ Returns
+ -------
+ df : DataFrame
+
+ Raises
+ ------
+ NotImplementedError
+ If the JSON table schema contains either timezone or timedelta data
+
+ Notes
+ -----
+ Because ``write_json`` uses the string `index` to denote a name-less
+ ``Index``, this function sets the name of the returned ``DataFrame`` to
+ ``None`` when said string is encountered. Therefore, intentional usage
+ of `index` as the ``Index`` name is not supported.
+
+ See also
+ --------
+ build_table_schema : inverse function
+ pandas.read_json
+ """
+ table = loads(json, precise_float=precise_float)
+ col_order = [field['name'] for field in table['schema']['fields']]
+ df = DataFrame(table['data'])[col_order]
+
+ dtypes = {field['name']: convert_json_field_to_pandas_type(field)
+ for field in table['schema']['fields']}
+
+ # Cannot directly use as_type with timezone data on object; raise for now
+ if any(str(x).startswith('datetime64[ns, ') for x in dtypes.values()):
+ raise NotImplementedError('table="orient" can not yet read timezone '
+ 'data')
+
+ # No ISO constructor for Timedelta as of yet, so need to raise
+ if 'timedelta64' in dtypes.values():
+ raise NotImplementedError('table="orient" can not yet read '
+ 'ISO-formatted Timedelta data')
+
+ df = df.astype(dtypes)
+
+ df = df.set_index(table['schema']['primaryKey'])
+ if len(df.index.names) == 1 and df.index.name == 'index':
+ df.index.name = None
+ else:
+ if all(x.startswith('level_') for x in df.index.names):
+ df.index.names = [None] * len(df.index.names)
+
+ return df
</patch>
|
[]
|
[]
| |||
huggingface__transformers-18716
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LayoutLMv3 image preparation code snippet does not work with PDFs
### System Info
- `transformers` version: 4.20.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is not a bug per se, but I wasn't sure how else to file it. The official LayoutLMv3 Transformers documentation indicates that PDF files can be directly processed; however, they can't -- at least, not with the current code snippets.
For example, this [code snippet](https://huggingface.co/docs/transformers/model_doc/layoutlmv3#transformers.LayoutLMv3FeatureExtractor.__call__.example) has the lines:
```
from PIL import Image
image = Image.open("name_of_your_document - can be a png file, pdf, etc.").convert("RGB")
```
However, `PIL.Image` cannot open PDFs. In fact, the [Pillow documentation](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html?highlight=pdf#:~:text=.palm.-,PDF,-%23) indicates that PDFs are only writable.
Reproduction is trivial, but, for completeness:
1. Download this pdf: https://slicedinvoices.com/pdf/wordpress-pdf-invoice-plugin-sample.pdf
2. Install Pillow: `pip install pillow`
3. Run this code:
```python
from PIL import Image
image = Image.open(<path_to_invoice.pdf>).convert("RGB")
```
Expected error:
```
UnidentifiedImageError: cannot identify image file '/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf'
```
### Expected behavior
The documentation should provide a working solution for processing PDFs.
I did notice that the `__call__` implementation of the `LayoutLMv3FeatureExtractor` has an `images` argument that accepts numpy arrays and torch tensors, in addition to Image objects. So, I assume one or more of the following options is the correct workflow:
1. Read PDFs into a python object that can be converted to an PIL.Image type.
2. Read/transform PDFs into an array as expected by the feature extractor.
3. Convert PDFs to an image and proceed with PIL.Image
However, as I'm new to document intelligence and modeling PDFs, I'll have to do some digging to identify the right solution. So, it would be nice if the documentation was updated so that others won't have to do the same.
One work-around (or solution?) is to just convert the PDF to an image, e.g.:
```python
import io
from wand.image import Image as WImage
import PIL
local_path = "/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf"
img = WImage(filename=local_path, resolution=100) # bigger
image = PIL.Image.open(io.BytesIO(img.make_blob("png"))).convert("RGB")
```
It also [looks like](https://stackoverflow.com/questions/47599012/how-to-convert-a-wand-image-object-to-numpy-array-without-opencv) Wand supports exporting to Numpy `array`.
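A related option (not from this issue, just a sketch assuming the third-party `pdf2image` package and its poppler system dependency are installed) is to rasterize the PDF pages directly into `PIL.Image` objects or numpy arrays; the file name and DPI below are placeholders:

```python
import numpy as np
from pdf2image import convert_from_path  # pip install pdf2image; needs poppler installed

# Rasterize each PDF page into a PIL.Image.
pages = convert_from_path("wordpress-pdf-invoice-plugin-sample.pdf", dpi=200)

first_page = pages[0].convert("RGB")
as_array = np.array(first_page)  # (height, width, 3) uint8 array

print(first_page.size, as_array.shape)
```

Either the `PIL.Image` page or the numpy array can then be passed to the feature extractor in the same way as a PNG would be.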
</issue>
<code>
[start of README.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <b>English</b> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
59
60 These models can be applied on:
61
62 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages.
63 * 🖼️ Images, for tasks like image classification, object detection, and segmentation.
64 * 🗣️ Audio, for tasks like speech recognition and audio classification.
65
66 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
67
68 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
69
70 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
71
72 ## Online demos
73
74 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
75
76 Here are a few examples:
77
78 In Natural Language Processing:
79 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
80 - [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
81 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
82 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
83 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
84 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
85 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
86
87 In Computer Vision:
88 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224)
89 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50)
90 - [Image Segmentation with DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic)
91
92 In Audio:
93 - [Automatic Speech Recognition with Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
94 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
95
96 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
97
98 ## If you are looking for custom support from the Hugging Face team
99
100 <a target="_blank" href="https://huggingface.co/support">
101 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
102 </a><br>
103
104 ## Quick tour
105
106 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
107
108 ```python
109 >>> from transformers import pipeline
110
111 # Allocate a pipeline for sentiment-analysis
112 >>> classifier = pipeline('sentiment-analysis')
113 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
114 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
115 ```
116
117 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
118
119 Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
120
121 ``` python
122 >>> import requests
123 >>> from PIL import Image
124 >>> from transformers import pipeline
125
126 # Download an image with cute cats
127 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
128 >>> image_data = requests.get(url, stream=True).raw
129 >>> image = Image.open(image_data)
130
131 # Allocate a pipeline for object detection
132 >>> object_detector = pipeline('object_detection')
133 >>> object_detector(image)
134 [{'score': 0.9982201457023621,
135 'label': 'remote',
136 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
137 {'score': 0.9960021376609802,
138 'label': 'remote',
139 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
140 {'score': 0.9954745173454285,
141 'label': 'couch',
142 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
143 {'score': 0.9988006353378296,
144 'label': 'cat',
145 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
146 {'score': 0.9986783862113953,
147 'label': 'cat',
148 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
149 ```
150
151 Here we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the right, with the predictions displayed on the left:
152
153 <h3 align="center">
154 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
155 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
156 </h3>
157
158 You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
159
160 In addition to `pipeline`, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
161 ```python
162 >>> from transformers import AutoTokenizer, AutoModel
163
164 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
165 >>> model = AutoModel.from_pretrained("bert-base-uncased")
166
167 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
168 >>> outputs = model(**inputs)
169 ```
170
171 And here is the equivalent code for TensorFlow:
172 ```python
173 >>> from transformers import AutoTokenizer, TFAutoModel
174
175 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
176 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
177
178 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
179 >>> outputs = model(**inputs)
180 ```
181
182 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator.
183
184 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
185
186 ## Why should I use transformers?
187
188 1. Easy-to-use state-of-the-art models:
189 - High performance on natural language understanding & generation, computer vision, and audio tasks.
190 - Low barrier to entry for educators and practitioners.
191 - Few user-facing abstractions with just three classes to learn.
192 - A unified API for using all our pretrained models.
193
194 1. Lower compute costs, smaller carbon footprint:
195 - Researchers can share trained models instead of always retraining.
196 - Practitioners can reduce compute time and production costs.
197 - Dozens of architectures with over 60,000 pretrained models across all modalities.
198
199 1. Choose the right framework for every part of a model's lifetime:
200 - Train state-of-the-art models in 3 lines of code.
201 - Move a single model between TF2.0/PyTorch/JAX frameworks at will.
202 - Seamlessly pick the right framework for training, evaluation and production.
203
204 1. Easily customize a model or an example to your needs:
205 - We provide examples for each architecture to reproduce the results published by its original authors.
206 - Model internals are exposed as consistently as possible.
207 - Model files can be used independently of the library for quick experiments.
208
209 ## Why shouldn't I use transformers?
210
211 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
212 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly, [Accelerate](https://huggingface.co/docs/accelerate)).
213 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
214
215 ## Installation
216
217 ### With pip
218
219 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
220
221 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
222
223 First, create a virtual environment with the version of Python you're going to use and activate it.
224
225 Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
226 Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or the [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages for the specific install command for your platform.
227
228 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
229
230 ```bash
231 pip install transformers
232 ```
233
234 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
235
236 ### With conda
237
238 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`.
239
240 🤗 Transformers can be installed using conda as follows:
241
242 ```bash
243 conda install -c huggingface transformers
244 ```
245
246 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
247
248 ## Model architectures
249
250 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
251
252 Current number of checkpoints: 
253
254 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
255
256 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
257 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
258 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
259 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
260 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
261 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
262 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
263 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
264 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
265 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
266 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
267 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
268 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
269 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
270 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
271 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
272 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
273 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
274 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
275 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
276 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
277 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
278 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
279 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
280 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
281 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
282 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
283 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
284 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
285 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
286 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
287 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
288 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
289 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
290 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
291 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
292 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
293 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
294 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
295 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
296 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
297 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
298 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
299 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
300 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
301 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
302 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
303 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
304 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
305 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
306 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
307 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
308 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
309 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
310 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
311 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
312 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
313 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
314 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
315 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
316 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
317 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
318 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
319 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
320 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
321 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
322 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
323 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
324 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
325 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
326 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
327 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
328 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
329 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
330 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
331 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
332 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
333 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
334 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
335 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
336 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
337 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
338 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
339 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
340 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
341 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
342 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
343 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
344 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
345 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
346 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
347 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
348 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
349 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
350 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
351 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
352 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
353 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
354 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
355 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
356 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
357 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
358 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
359 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
360 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
361 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
362 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
363 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
364 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
365 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
366 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
367 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
368 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
369 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
370 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
371 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
372 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
373 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
374 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
375 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
376 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
377 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
378 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
379 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
380 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
381 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (from Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
382 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
383 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
384 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
385 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
386 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
387 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
388 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
389 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
390 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
391 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
392
393 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
394
395 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
396
397
398 ## Learn more
399
400 | Section | Description |
401 |-|-|
402 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
403 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
404 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
405 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
406 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
407 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
408 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
409
410 ## Citation
411
412 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
413 ```bibtex
414 @inproceedings{wolf-etal-2020-transformers,
415 title = "Transformers: State-of-the-Art Natural Language Processing",
416 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
417 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
418 month = oct,
419 year = "2020",
420 address = "Online",
421 publisher = "Association for Computational Linguistics",
422 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
423 pages = "38--45"
424 }
425 ```
426
[end of README.md]
[start of README_ko.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <p align="center">
18 <br>
19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20 <br>
21 <p>
22 <p align="center">
23 <a href="https://circleci.com/gh/huggingface/transformers">
24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25 </a>
26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28 </a>
29 <a href="https://huggingface.co/docs/transformers/index">
30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31 </a>
32 <a href="https://github.com/huggingface/transformers/releases">
33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34 </a>
35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37 </a>
38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40
41 <h4 align="center">
42 <p>
43 <a href="https://github.com/huggingface/transformers/">English</a> |
44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46 <b>한국어</b>
47 <p>
48 </h4>
49
50 <h3 align="center">
51 <p> State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
52 </h3>
53
54 <h3 align="center">
55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
56 </h3>
57
58 🤗 Transformers는 분류, 정보 추출, 질문 답변, 요약, 번역, 문장 생성 등을 100개 이상의 언어로 수행할 수 있는 수천개의 사전학습된 모델을 제공합니다. 우리의 목표는 모두가 최첨단의 NLP 기술을 쉽게 사용하는 것입니다.
59
60 🤗 Transformers는 이러한 사전학습 모델을 빠르게 다운로드해 특정 텍스트에 사용하고, 원하는 데이터로 fine-tuning해 커뮤니티나 우리의 [모델 허브](https://huggingface.co/models)에 공유할 수 있도록 API를 제공합니다. 또한, 모델 구조를 정의하는 각 파이썬 모듈은 완전히 독립적이여서 연구 실험을 위해 손쉽게 수정할 수 있습니다.
61
62 🤗 Transformers는 가장 유명한 3개의 딥러닝 라이브러리를 지원합니다. 이들은 서로 완벽히 연동됩니다 — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). 간단하게 이 라이브러리 중 하나로 모델을 학습하고, 또 다른 라이브러리로 추론을 위해 모델을 불러올 수 있습니다.
63
64 ## 온라인 데모
65
66 대부분의 모델을 [모델 허브](https://huggingface.co/models) 페이지에서 바로 테스트해볼 수 있습니다. 공개 및 비공개 모델을 위한 [비공개 모델 호스팅, 버전 관리, 추론 API](https://huggingface.co/pricing)도 제공합니다.
67
68 예시:
69 - [BERT로 마스킹된 단어 완성하기](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
70 - [Electra를 이용한 개체명 인식](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
71 - [GPT-2로 텍스트 생성하기](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
72 - [RoBERTa로 자연어 추론하기](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
73 - [BART를 이용한 요약](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
74 - [DistilBERT를 이용한 질문 답변](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
75 - [T5로 번역하기](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
76
77 **[Transformer와 글쓰기](https://transformer.huggingface.co)** 는 이 저장소의 텍스트 생성 능력에 관한 Hugging Face 팀의 공식 데모입니다.
78
79 ## Hugging Face 팀의 커스텀 지원을 원한다면
80
81 <a target="_blank" href="https://huggingface.co/support">
82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
83 </a><br>
84
85 ## 퀵 투어
86
87 원하는 텍스트에 바로 모델을 사용할 수 있도록, 우리는 `pipeline` API를 제공합니다. Pipeline은 사전학습 모델과 그 모델을 학습할 때 적용한 전처리 방식을 하나로 합칩니다. 다음은 긍정적인 텍스트와 부정적인 텍스트를 분류하기 위해 pipeline을 사용한 간단한 예시입니다:
88
89 ```python
90 >>> from transformers import pipeline
91
92 # Allocate a pipeline for sentiment-analysis
93 >>> classifier = pipeline('sentiment-analysis')
94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
96 ```
97
98 코드의 두번째 줄은 pipeline이 사용하는 사전학습 모델을 다운로드하고 캐시로 저장합니다. 세번째 줄에선 그 모델이 주어진 텍스트를 평가합니다. 여기서 모델은 99.97%의 확률로 텍스트가 긍정적이라고 평가했습니다.
99
100 많은 NLP 과제들을 `pipeline`으로 바로 수행할 수 있습니다. 예를 들어, 질문과 문맥이 주어지면 손쉽게 답변을 추출할 수 있습니다:
101
102 ```python
103 >>> from transformers import pipeline
104
105 # Allocate a pipeline for question-answering
106 >>> question_answerer = pipeline('question-answering')
107 >>> question_answerer({
108 ... 'question': 'What is the name of the repository ?',
109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
110 ... })
111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
112
113 ```
114
115 답변뿐만 아니라, 여기에 사용된 사전학습 모델은 확신도와 토크나이즈된 문장 속 답변의 시작점, 끝점까지 반환합니다. [이 튜토리얼](https://huggingface.co/docs/transformers/task_summary)에서 `pipeline` API가 지원하는 다양한 과제를 확인할 수 있습니다.
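
예를 들어, 텍스트 생성 과제에도 같은 방식으로 `pipeline`을 사용할 수 있습니다. 아래는 기본 모델을 그대로 사용하는 간단한 스케치로, 생성 결과는 실행할 때마다 달라질 수 있습니다:

```python
>>> from transformers import pipeline

# Allocate a pipeline for text-generation
>>> generator = pipeline('text-generation')
>>> generator('In this course, we will teach you how to', max_length=30)
```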
116
117 코드 3줄로 원하는 과제에 맞게 사전학습 모델을 다운로드 받고 사용할 수 있습니다. 다음은 PyTorch 버전입니다:
118 ```python
119 >>> from transformers import AutoTokenizer, AutoModel
120
121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
122 >>> model = AutoModel.from_pretrained("bert-base-uncased")
123
124 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
125 >>> outputs = model(**inputs)
126 ```
127 다음은 TensorFlow 버전입니다:
128 ```python
129 >>> from transformers import AutoTokenizer, TFAutoModel
130
131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
133
134 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
135 >>> outputs = model(**inputs)
136 ```
137
138 토크나이저는 사전학습 모델의 모든 전처리를 책임집니다. 그리고 (위의 예시처럼) 1개의 스트링이나 리스트도 처리할 수 있습니다. 토크나이저는 딕셔너리를 반환하는데, 이는 다운스트림 코드에 사용하거나 언패킹 연산자 ** 를 이용해 모델에 바로 전달할 수도 있습니다.
139
140 모델 자체는 일반적으로 사용되는 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)나 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)입니다. [이 튜토리얼](https://huggingface.co/transformers/training.html)은 이러한 모델을 표준적인 PyTorch나 TensorFlow 학습 과정에서 사용하는 방법, 또는 새로운 데이터로 fine-tune하기 위해 `Trainer` API를 사용하는 방법을 설명해줍니다.
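
예를 들어, 아래는 일반적인 PyTorch 학습 루프에서 모델을 한 스텝 학습시키는 최소한의 스케치입니다. 여기서 사용한 `AutoModelForSequenceClassification`, 레이블 값, 옵티마이저 설정은 설명을 위한 가정일 뿐이며, 실제 과제와 데이터에 맞게 바꿔야 합니다:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

>>> inputs = tokenizer("We are very happy to show you the 🤗 Transformers library.", return_tensors="pt")
>>> labels = torch.tensor([1])  # 설명을 위해 가정한 레이블
>>> outputs = model(**inputs, labels=labels)  # labels를 넘기면 loss가 함께 계산됩니다

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
>>> outputs.loss.backward()
>>> optimizer.step()
```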
141
142 ## 왜 transformers를 사용해야 할까요?
143
144 1. 손쉽게 사용할 수 있는 최첨단 모델:
145 - NLU와 NLG 과제에서 뛰어난 성능을 보입니다.
146     - 교육자와 실무자에게 진입 장벽이 낮습니다.
147 - 3개의 클래스만 배우면 바로 사용할 수 있습니다.
148 - 하나의 API로 모든 사전학습 모델을 사용할 수 있습니다.
149
150 1. 더 적은 계산 비용, 더 적은 탄소 발자국:
151 - 연구자들은 모델을 계속 다시 학습시키는 대신 학습된 모델을 공유할 수 있습니다.
152 - 실무자들은 학습에 필요한 시간과 비용을 절약할 수 있습니다.
153 - 수십개의 모델 구조, 2,000개 이상의 사전학습 모델, 100개 이상의 언어로 학습된 모델 등.
154
155 1. 모델의 각 생애주기에 적합한 프레임워크:
156 - 코드 3줄로 최첨단 모델을 학습하세요.
157 - 자유롭게 모델을 TF2.0나 PyTorch 프레임워크로 변환하세요.
158 - 학습, 평가, 공개 등 각 단계에 맞는 프레임워크를 원하는대로 선택하세요.
159
160 1. 필요한 대로 모델이나 예시를 커스터마이즈하세요:
161 - 우리는 저자가 공개한 결과를 재현하기 위해 각 모델 구조의 예시를 제공합니다.
162 - 모델 내부 구조는 가능한 일관적으로 공개되어 있습니다.
163 - 빠른 실험을 위해 모델 파일은 라이브러리와 독립적으로 사용될 수 있습니다.
164
165 ## 왜 transformers를 사용하지 말아야 할까요?
166
167 - 이 라이브러리는 신경망 블록을 만들기 위한 모듈이 아닙니다. 연구자들이 여러 파일을 살펴보지 않고 바로 각 모델을 사용할 수 있도록, 모델 파일 코드의 추상화 수준을 적정하게 유지했습니다.
168 - 학습 API는 모든 모델에 적용할 수 있도록 만들어지진 않았지만, 라이브러리가 제공하는 모델들에 적용할 수 있도록 최적화되었습니다. 일반적인 머신 러닝을 위해선, 다른 라이브러리를 사용하세요.
169 - 가능한 많은 사용 예시를 보여드리고 싶어서, [예시 폴더](https://github.com/huggingface/transformers/tree/main/examples)의 스크립트를 준비했습니다. 이 스크립트들을 수정 없이 특정한 문제에 바로 적용하지 못할 수 있습니다. 필요에 맞게 일부 코드를 수정해야 할 수 있습니다.
170
171 ## 설치
172
173 ### pip로 설치하기
174
175 이 저장소는 Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+에서 테스트 되었습니다.
176
177 [가상 환경](https://docs.python.org/3/library/venv.html)에 🤗 Transformers를 설치하세요. Python 가상 환경에 익숙하지 않다면, [사용자 가이드](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)를 확인하세요.
178
179 우선, 사용할 Python 버전으로 가상 환경을 만들고 실행하세요.
180
181 그 다음, Flax, PyTorch, TensorFlow 중 적어도 하나는 설치해야 합니다.
182 플랫폼에 맞는 설치 명령어를 확인하기 위해 [TensorFlow 설치 페이지](https://www.tensorflow.org/install/), [PyTorch 설치 페이지](https://pytorch.org/get-started/locally/#start-locally), [Flax 설치 페이지](https://github.com/google/flax#quick-install)를 확인하세요.
183
184 이들 중 적어도 하나가 설치되었다면, 🤗 Transformers는 다음과 같이 pip을 이용해 설치할 수 있습니다:
185
186 ```bash
187 pip install transformers
188 ```
189
190 예시들을 체험해보고 싶거나, 최최최첨단 코드를 원하거나, 새로운 버전이 나올 때까지 기다릴 수 없다면 [라이브러리를 소스에서 바로 설치](https://huggingface.co/docs/transformers/installation#installing-from-source)하셔야 합니다.
191
192 ### conda로 설치하기
193
194 Transformers 버전 v4.0.0부터, conda 채널이 생겼습니다: `huggingface`.
195
196 🤗 Transformers는 다음과 같이 conda로 설치할 수 있습니다:
197
198 ```bash
199 conda install -c huggingface transformers
200 ```
201
202 Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는 방법을 확인하세요.
203
204 ## 모델 구조
205
206 **🤗 Transformers가 제공하는 [모든 모델 체크포인트](https://huggingface.co/models)** 는 huggingface.co [모델 허브](https://huggingface.co)에 완벽히 연동되어 있습니다. [개인](https://huggingface.co/users)과 [기관](https://huggingface.co/organizations)이 모델 허브에 직접 업로드할 수 있습니다.
207
208 현재 사용 가능한 모델 체크포인트의 개수: 
209
210 🤗 Transformers는 다음 모델들을 제공합니다 (각 모델의 요약은 [여기](https://huggingface.co/docs/transformers/model_summary)서 확인하세요):
211
212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
216 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
218 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
219 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
220 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
221 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
224 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
225 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
226 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
227 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
228 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
229 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
230 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
231 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
232 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
233 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
234 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
235 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
236 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
237 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
238 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
239 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
240 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
241 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
242 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
243 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
244 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
245 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
246 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
247 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
248 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
249 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
250 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
251 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
252 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
253 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
254 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
255 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
256 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
257 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
258 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
259 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
260 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
261 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
262 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
263 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
264 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
265 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
266 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
267 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
268 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
269 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
270 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
271 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
272 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
273 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
274 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
275 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
276 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
277 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
278 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
279 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
280 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
281 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
282 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
283 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
284 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
285 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
286 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
287 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
288 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
289 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
290 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
291 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
292 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
293 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
294 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
295 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
296 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
297 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
298 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
299 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
300 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
301 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
302 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
303 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
304 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
305 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
306 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
307 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
308 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
309 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
310 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
311 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
312 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
313 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
314 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
315 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
316 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
317 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
318 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
319 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
320 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
321 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
322 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
323 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
324 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
325 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
326 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
327 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
328 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
329 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
330 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
331 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
332 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
333 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
334 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
335 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
336 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
337 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
338 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
339 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
340 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
341 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
342 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
343 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
344 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
345 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
346 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
347 1. Want to contribute a new model? We have a **detailed guide and templates** to help you add one; you can find them in the [`templates`](./templates) folder of this repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md), and contact the maintainers or open an issue to gather feedback before submitting your PR.
348
349 To check whether each model has an implementation in Flax, PyTorch or TensorFlow, or uses a tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
350
351 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
352
353 ## Learn more
354
355 | Section | Description |
356 |-|-|
357 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
358 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
359 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Preparing data for a model with the `Tokenizer` class |
360 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
361 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
362 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Uploading and sharing your fine-tuned models with the community |
363 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrating to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
364
365 ## Citation
366
367 If you would like to cite the 🤗 Transformers library, please cite this [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/):
368 ```bibtex
369 @inproceedings{wolf-etal-2020-transformers,
370 title = "Transformers: State-of-the-Art Natural Language Processing",
371 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
372 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
373 month = oct,
374 year = "2020",
375 address = "Online",
376 publisher = "Association for Computational Linguistics",
377 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
378 pages = "38--45"
379 }
380 ```
381
[end of README_ko.md]
[start of README_zh-hans.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多种语言; 使用 transformers 库。
20 - Use square quotes, e.g.,「引用」
21
22 Dictionary
23
24 Hugging Face: 抱抱脸
25 token: 词符(并用括号标注原英文)
26 tokenize: 词符化(并用括号标注原英文)
27 tokenizer: 词符化器(并用括号标注原英文)
28 transformer: transformer(不翻译)
29 pipeline: 流水线
30 API: API (不翻译)
31 inference: 推理
32 Trainer: 训练器。当作为类名出现时不翻译。
33 pretrained/pretrain: 预训练
34 finetune: 微调
35 community: 社区
36 example: 当特指仓库中 example 目录时翻译为「用例」
37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文
38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理
39 checkpoint: 检查点
40 -->
41
42 <p align="center">
43 <br>
44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
45 <br>
46 <p>
47 <p align="center">
48 <a href="https://circleci.com/gh/huggingface/transformers">
49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
50 </a>
51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
53 </a>
54 <a href="https://huggingface.co/docs/transformers/index">
55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
56 </a>
57 <a href="https://github.com/huggingface/transformers/releases">
58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
59 </a>
60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
62 </a>
63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
64 </p>
65
66 <h4 align="center">
67 <p>
68 <a href="https://github.com/huggingface/transformers/">English</a> |
69 <b>简体中文</b> |
70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
72 <p>
73 </h4>
74
75 <h3 align="center">
76     <p>State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
77 </h3>
78
79 <h3 align="center">
80 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
81 </h3>
82
83 🤗 Transformers provides thousands of pretrained models for text classification, information extraction, question answering, summarization, translation and text generation in more than 100 languages. Its aim is to make state-of-the-art NLP easy for everyone to use.
84
85 🤗 Transformers provides APIs to quickly download and use pretrained models on a given text, fine-tune them on your own datasets and then share them with the community through the [model hub](https://huggingface.co/models). At the same time, each Python module that defines an architecture is fully standalone, making it easy to modify and run quick research experiments.
86
87 🤗 Transformers is backed by the three most popular deep learning libraries, [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with seamless integration between them. You can train your model with one framework and then load it for inference with another.
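
As a rough sketch of that interoperability (the `./my-bert` directory name is just an arbitrary example, and `from_pt=True` asks the TensorFlow class to convert the saved PyTorch weights):

```python
>>> from transformers import AutoModel, TFAutoModel

>>> # Load (or fine-tune) a model in PyTorch, then save it to disk...
>>> AutoModel.from_pretrained("bert-base-uncased").save_pretrained("./my-bert")

>>> # ...and load the same weights back in TensorFlow for inference
>>> tf_model = TFAutoModel.from_pretrained("./my-bert", from_pt=True)
```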
88
89 ## Online demos
90
91 You can test most of the models on the [model hub](https://huggingface.co/models) directly on their pages. We also offer [private model hosting, versioning and an inference API](https://huggingface.co/pricing).
92
93 Here are a few examples:
94 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
95 - [Named entity recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
96 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
97 - [Natural language inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
98 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
99 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
100 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
101
102 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of text generation.
103
104 ## If you are looking for custom support from the Hugging Face team
105
106 <a target="_blank" href="https://huggingface.co/support">
107 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
108 </a><br>
109
110 ## Quick tour
111
112 To use a model on a given input right away, we provide the `pipeline` API. A pipeline groups together a pretrained model with the corresponding text preprocessing. Here is a quick example of using a pipeline to classify positive versus negative sentiment:
113
114 ```python
115 >>> from transformers import pipeline
116
117 # Use a pipeline for sentiment analysis
118 >>> classifier = pipeline('sentiment-analysis')
119 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
120 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
121 ```
122
123 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive", with a confidence of 99%.
124
125 Many NLP tasks have a pretrained pipeline ready to go, out of the box. For example, we can easily extract the answer to a question from a given text:
126
127 ``` python
128 >>> from transformers import pipeline
129
130 # Use a pipeline for question answering
131 >>> question_answerer = pipeline('question-answering')
132 >>> question_answerer({
133 ... 'question': 'What is the name of the repository ?',
134 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
135 ... })
136 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
137
138 ```
139
140 In addition to the answer, the pretrained model also returns the corresponding confidence score and the start and end positions of the answer in the tokenized text. You can learn more about the tasks supported by the pipeline API from [this tutorial](https://huggingface.co/docs/transformers/task_summary).
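
As a minimal sketch of another of those tasks through the same `pipeline` API (the `"text-generation"` task name is real; the prompt below is an arbitrary example and the default checkpoint is chosen by the pipeline itself):

```python
>>> from transformers import pipeline

>>> # Any supported task follows the same pattern, e.g. text generation
>>> generator = pipeline("text-generation")
>>> results = generator("In this course, we will teach you how to", max_length=30)
```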
141
142 Downloading and using any pretrained model on your own task is just as simple and takes only three lines of code. Here is the PyTorch version:
143 ```python
144 >>> from transformers import AutoTokenizer, AutoModel
145
146 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
147 >>> model = AutoModel.from_pretrained("bert-base-uncased")
148
149 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
150 >>> outputs = model(**inputs)
151 ```
152 And here is the equivalent TensorFlow code:
153 ```python
154 >>> from transformers import AutoTokenizer, TFAutoModel
155
156 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
157 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
158
159 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
160 >>> outputs = model(**inputs)
161 ```
162
163 The tokenizer provides the preprocessing for all the pretrained models and can be called directly on a single string (as in the examples above) or on a list. It outputs a dictionary (dict) that you can use in downstream code or pass straight to the model by unpacking it with the `**` operator.
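
For example, a minimal sketch (PyTorch, reusing the `bert-base-uncased` checkpoint from above) of calling the tokenizer on a list and unpacking the returned dict into the model; the `padding`/`truncation` flags simply give the batch a rectangular shape:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

>>> # A list of strings is tokenized into a single padded batch
>>> batch = tokenizer(["Hello world!", "Transformers is great."], padding=True, truncation=True, return_tensors="pt")
>>> outputs = model(**batch)  # the dict is unpacked straight into the model call
```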
164
165 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) and can be used as usual. [This tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
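
To make the `Trainer` part concrete, here is a rough sketch of fine-tuning a classification head on a tiny in-memory dataset; in real use you would typically pass a `datasets.Dataset`, and the two sentences, labels and `TrainingArguments` values below are arbitrary choices for illustration only:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

>>> # A tiny map-style dataset: each item is a dict of tensors the model accepts
>>> texts, labels = ["I love this!", "This is terrible."], [1, 0]
>>> enc = tokenizer(texts, padding=True, truncation=True)
>>> train_dataset = [
...     {"input_ids": torch.tensor(enc["input_ids"][i]),
...      "attention_mask": torch.tensor(enc["attention_mask"][i]),
...      "labels": torch.tensor(labels[i])}
...     for i in range(len(texts))
... ]

>>> training_args = TrainingArguments(output_dir="test-trainer", num_train_epochs=1, per_device_train_batch_size=2)
>>> trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
>>> trainer.train()
```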
166
167 ## Why should I use transformers?
168
169 1. Easy-to-use state-of-the-art models:
170     - High performance on NLU and NLG tasks.
171     - Low barrier to entry for educators and practitioners.
172     - High-level abstractions: only three classes to learn.
173     - A unified API for all our pretrained models.
174
175 1. Lower compute costs, smaller carbon footprint:
176     - Researchers can share already-trained models instead of training from scratch every time.
177     - Practitioners can reduce compute time and production costs.
178     - Dozens of model architectures, more than 2,000 pretrained models, support for over 100 languages.
179
180 1. Every part of a model's lifetime is covered:
181     - Train state-of-the-art models in 3 lines of code.
182     - Move a single model between deep learning frameworks at will.
183     - Seamlessly pick the best-suited framework for training, evaluation and production.
184
185 1. Easily customize a model or an example to your needs:
186     - We provide multiple examples for each architecture to reproduce the results of the original paper.
187     - Model internals are kept transparent and consistent.
188     - Model files can be used independently, making them easy to tweak for quick experiments.
189
190 ## Why shouldn't I use transformers?
191
192 - This library is not a modular toolbox of neural network building blocks. The code in the model files is deliberately left plain and lightly abstracted, so that researchers can quickly iterate on each model without getting lost in abstractions and file hops.
193 - The `Trainer` API is not compatible with just any model; it is optimized for the models in this library. If you are looking for a training loop for generic machine learning, please look elsewhere.
194 - Although we have done our best, the scripts in the [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are only examples. They will not necessarily work out of the box on your specific problem, and you may need to change a few lines of code to adapt them.
195
196 ## Installation
197
198 ### With pip
199
200 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
201
202 You can install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you are not yet familiar with Python virtual environments, check out this [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
203
204 First, create a virtual environment with the version of Python you plan to use, and activate it.
205
206 Then, you will need to install at least one of Flax, PyTorch or TensorFlow. Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) or the [Flax installation page](https://github.com/google/flax#quick-install) for instructions on your platform.
207
208 When one of those backends has been installed, 🤗 Transformers can be installed as follows:
209
210 ```bash
211 pip install transformers
212 ```
213
214 If you would like to try the examples, or need the latest development code before an official release, you have to [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
215
216 ### With conda
217
218 Since Transformers version 4.0.0, we have a conda channel: `huggingface`.
219
220 🤗 Transformers can be installed with conda as follows:
221
222 ```shell script
223 conda install -c huggingface transformers
224 ```
225
226 To install Flax, PyTorch or TensorFlow with conda, please refer to the instructions on their respective installation pages.
227
228 ## Model architectures
229
230 [**All the model checkpoints**](https://huggingface.co/models) supported by 🤗 Transformers are uploaded by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations), and are all seamlessly integrated with the huggingface.co [model hub](https://huggingface.co).
231
232 Current number of checkpoints: 
233
234 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
235
236 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。
237 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。
238 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。
239 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。
240 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。
241 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。
242 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
243 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。
244 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
245 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。
246 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
247 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。
248 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
249 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。
250 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。
251 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。
252 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。
253 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。
254 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。
255 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。
256 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。
257 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。
258 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。
259 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。
260 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。
261 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
262 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。
263 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。
264 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。
265 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。
266 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。
267 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。
268 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。
269 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (来自 NAVER) 伴随论文 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 由 Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 发布。
270 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。
271 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。
272 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。
273 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。
274 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。
275 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。
276 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。
277 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。
278 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。
279 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。
280 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy 发布。
281 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
282 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。
283 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 伴随论文 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。
284 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。
285 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
286 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。
287 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。
288 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。
289 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。
290 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。
291 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。
292 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
293 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。
294 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。
295 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) released 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。
296 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。
297 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。
298 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。
299 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。
300 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。
301 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
302 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。
303 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。
304 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
305 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。
306 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。
307 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。
308 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。
309 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。
310 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。
311 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。
312 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。
313 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。
314 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。
315 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。
316 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。
317 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。
318 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。
319 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。
320 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。
321 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。
322 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
323 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。
324 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。
325 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。
326 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。
327 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
328 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。
329 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
330 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。
331 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。
332 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。
333 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
334 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
335 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。
336 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。
337 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。
338 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。
339 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。
340 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (来自 Microsoft) 伴随论文 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 由 Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 发布。
341 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
342 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。
343 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。
344 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。
345 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
346 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。
347 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。
348 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
349 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。
350 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。
351 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。
352 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (来自 Multimedia Computing Group, Nanjing University) 伴随论文 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 由 Zhan Tong, Yibing Song, Jue Wang, Limin Wang 发布。
353 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。
354 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。
355 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。
356 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。
357 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。
358 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。
359 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。
360 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
361 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
362 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。
363 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。
364 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。
365 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。
366 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。
367 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。
368 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。
369 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。
370 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。
371 1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到它们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。
372
373 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。
374
375 这些实现均已于多个数据集测试(请参看用例脚本)并应于原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。
376
377
378 ## 了解更多
379
380 | 章节 | 描述 |
381 |-|-|
382 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 |
383 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 |
384 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 |
385 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 |
386 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 |
387 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 和社区上传和分享你微调的模型 |
388 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers |
389
390 ## 引用
391
392 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用:
393 ```bibtex
394 @inproceedings{wolf-etal-2020-transformers,
395 title = "Transformers: State-of-the-Art Natural Language Processing",
396 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
397 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
398 month = oct,
399 year = "2020",
400 address = "Online",
401 publisher = "Association for Computational Linguistics",
402 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
403 pages = "38--45"
404 }
405 ```
406
[end of README_zh-hans.md]
[start of README_zh-hant.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。
20 - Use square quotes, e.g.,「引用」
21 - Some of the terms in this file can be found at the National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
22
23 Dictionary
24
25 API: API (不翻譯)
26 add: 加入
27 checkpoint: 檢查點
28 code: 程式碼
29 community: 社群
30 confidence: 信賴度
31 dataset: 資料集
32 documentation: 文件
33 example: 基本翻譯為「範例」,或依語意翻為「例子」
34 finetune: 微調
35 Hugging Face: Hugging Face(不翻譯)
36 implementation: 實作
37 inference: 推論
38 library: 函式庫
39 module: 模組
40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理
41 online demos: 線上Demo
42 pipeline: pipeline(不翻譯)
43 pretrained/pretrain: 預訓練
44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文
45 repository: repository(不翻譯)
46 summary: 概覽
47 token-: token-(不翻譯)
48 Trainer: Trainer(不翻譯)
49 transformer: transformer(不翻譯)
50 tutorial: 教學
51 user: 使用者
52 -->
53
54 <p align="center">
55 <br>
56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
57 <br>
58 </p>
59 <p align="center">
60 <a href="https://circleci.com/gh/huggingface/transformers">
61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
62 </a>
63 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
65 </a>
66 <a href="https://huggingface.co/docs/transformers/index">
67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
68 </a>
69 <a href="https://github.com/huggingface/transformers/releases">
70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
71 </a>
72 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
74 </a>
75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
76 </p>
77
78 <h4 align="center">
79 <p>
80 <a href="https://github.com/huggingface/transformers/">English</a> |
81 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
82 <b>繁體中文</b> |
83 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a>
84     </p>
85 </h4>
86
87 <h3 align="center">
88 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p>
89 </h3>
90
91 <h3 align="center">
92 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
93 </h3>
94
95 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。
96
97 🤗 Transformers 提供了便於快速下載和使用的 API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。
98
99 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。
100
101 ## 線上Demo
102
103 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。
104
105 這裡是一些範例:
106 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
107 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
108 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
109 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
110 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
111 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
112 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
113
114 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。
115
116 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務
117
118 <a target="_blank" href="https://huggingface.co/support">
119 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
120 </a><br>
121
122 ## 快速上手
123
124 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子:
125
126 ```python
127 >>> from transformers import pipeline
128
129 # 使用情緒分析 pipeline
130 >>> classifier = pipeline('sentiment-analysis')
131 >>> classifier('We are very happy to introduce pipeline to the transformers repository.')
132 [{'label': 'POSITIVE', 'score': 0.9996980428695679}]
133 ```
134
135 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案「正面」 (positive) 具有 99.97% 的信賴度。
136
137 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案:
138
139 ``` python
140 >>> from transformers import pipeline
141
142 # 使用問答 pipeline
143 >>> question_answerer = pipeline('question-answering')
144 >>> question_answerer({
145 ... 'question': 'What is the name of the repository ?',
146 ... 'context': 'Pipeline has been included in the huggingface/transformers repository'
147 ... })
148 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
149
150 ```
151
152 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API 支援的任務。
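
As a small, hedged illustration of the paragraph above: a `pipeline` can also be pointed at a specific checkpoint from the model hub and applied to a list of inputs. The checkpoint name below is only an illustrative choice, not a recommendation.

```python
>>> from transformers import pipeline

# Any suitable checkpoint from the model hub can be passed explicitly;
# "distilbert-base-uncased-finetuned-sst-2-english" is used here purely as an example.
>>> classifier = pipeline('sentiment-analysis', model='distilbert-base-uncased-finetuned-sst-2-english')

# Pipelines accept a list of inputs and return one dict per input,
# each containing a 'label' and a 'score' field.
>>> classifier([
...     'We are very happy to introduce pipeline to the transformers repository.',
...     'This movie was a waste of time.',
... ])
```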
153
154 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例:
155 ```python
156 >>> from transformers import AutoTokenizer, AutoModel
157
158 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
159 >>> model = AutoModel.from_pretrained("bert-base-uncased")
160
161 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
162 >>> outputs = model(**inputs)
163 ```
164 這裡是對應的 TensorFlow 程式碼:
165 ```python
166 >>> from transformers import AutoTokenizer, TFAutoModel
167
168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
169 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
170
171 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
172 >>> outputs = model(**inputs)
173 ```
174
175 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。
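
A minimal sketch of that behaviour, reusing the `bert-base-uncased` checkpoint from the examples above (the two sentences are placeholder inputs):

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# A list of sentences is padded/truncated to a common length and returned as a
# dict of tensors; the dict can be unpacked straight into the model with **.
>>> batch = tokenizer(
...     ["Hello world!", "Transformers provides thousands of pretrained models."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> outputs = model(**batch)
```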
176
177 模型本身是一個常規的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。
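
Below is a minimal, hedged sketch of such a "regular training loop" in plain PyTorch; the two labelled sentences are placeholder data and the hyperparameters are arbitrary, so treat it as an outline rather than a recipe.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# A sequence-classification head is added on top of the pretrained encoder.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Placeholder training data, for illustration only.
texts = ["I love this library!", "This is terrible."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for _ in range(3):  # a few toy passes over the same tiny batch
    outputs = model(**batch, labels=labels)  # the model returns a loss when labels are supplied
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The `Trainer` API mentioned above wraps these same steps (batching, optimization, logging) behind a single `train()` call.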
178
179 ## 為什麼要用 transformers?
180
181 1. 便於使用的先進模型:
182 - NLU 和 NLG 上性能卓越
183 - 對教學和實作友好且低門檻
184 - 高度抽象,使用者只須學習 3 個類別
185     - 對所有模型使用的制式化 API
186
187 1. 更低的運算成本,更少的碳排放:
188 - 研究人員可以分享預訓練的模型而非從頭開始訓練
189 - 工程師可以減少計算時間以及生產成本
190 - 數十種模型架構、兩千多個預訓練模型、100多種語言支援
191
192 1. 對於模型生命週期的每一個部分都面面俱到:
193 - 訓練先進的模型,只需 3 行程式碼
194 - 模型可以在不同深度學習框架之間任意轉換
195 - 為訓練、評估和生產選擇最適合的框架,並完美銜接
196
197 1. 為你的需求輕鬆客製化專屬模型和範例:
198 - 我們為每種模型架構提供了多個範例來重現原論文結果
199 - 一致的模型內部架構
200 - 模型檔案可單獨使用,便於修改和快速實驗
201
202 ## 什麼情況下我不該用 transformers?
203
204 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。
205 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。
206 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/main/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。
207
208 ## 安裝
209
210 ### 使用 pip
211
212 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。
213
214 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。
215
216 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。
217
218 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。
219
220 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝:
221
222 ```bash
223 pip install transformers
224 ```
225
226 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。
227
228 ### 使用 conda
229
230 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。
231
232 🤗 Transformers 可以藉由 conda 依此安裝:
233
234 ```bash
235 conda install -c huggingface transformers
236 ```
237
238 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。
239
240 ## 模型架構
241
242 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。
243
244 目前的檢查點數量: 
245
246 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)):
247
248 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
249 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
250 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
251 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
252 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
253 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
254 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
255 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
256 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
257 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
258 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
259 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
260 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
261 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
262 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
263 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
264 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
265 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
266 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
267 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
268 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
269 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
270 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
271 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
272 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
273 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
274 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
275 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
276 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
277 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
278 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
279 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
280 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
281 1. **[Donut](https://huggingface.co/docs/transformers/main/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
282 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
283 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
284 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
285 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
286 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
287 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
288 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
289 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
290 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
291 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
292 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
293 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
294 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
295 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
296 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
297 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
298 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
299 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
300 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
301 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
302 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
303 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
304 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
305 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
306 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
307 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
308 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
309 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
310 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
311 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
312 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
313 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov
314 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
315 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
316 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
317 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
318 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
319 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
320 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
321 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
322 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
323 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
324 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
325 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
326 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
327 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
328 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
329 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
330 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
331 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
332 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
333 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
334 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
335 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
336 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
337 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
338 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
339 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
340 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
341 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
342 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
343 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
344 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
345 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
346 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
347 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
348 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
349 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
350 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
351 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
352 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/main/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
353 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
354 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
355 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
356 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
357 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
358 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
359 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
360 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
361 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
362 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
363 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
364 1. **[VideoMAE](https://huggingface.co/docs/transformers/main/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
365 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
366 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
367 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
368 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
369 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
370 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
371 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
372 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
373 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
374 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
375 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
376 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
377 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
378 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
379 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
380 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
381 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
382 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) What You Need: Efficient Transformers via Kernelized Hashing](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
383 1. Want to contribute a new model? We have **detailed guides and templates** to walk you through the process of adding a new model. You can find them in the [`templates`](./templates) directory. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
384
385 To check whether a model already has a Flax, PyTorch or TensorFlow implementation, or whether it has an associated tokenizer in the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
386
387 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on the implementations in [this section](https://huggingface.co/docs/transformers/examples) of the examples documentation.
388
389
390 ## Learn more
391
392 | Section | Description |
393 |-|-|
394 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
395 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
396 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
397 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
398 | [Quick tour: Fine-tuning and example scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for a wide range of tasks |
399 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
400 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrating to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
401
402 ## Citation
403
404 The [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) for this library has been formally published. If you use the 🤗 Transformers library, you can cite:
405 ```bibtex
406 @inproceedings{wolf-etal-2020-transformers,
407 title = "Transformers: State-of-the-Art Natural Language Processing",
408 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
409 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
410 month = oct,
411 year = "2020",
412 address = "Online",
413 publisher = "Association for Computational Linguistics",
414 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
415 pages = "38--45"
416 }
417 ```
418
[end of README_zh-hant.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
huggingface/transformers
|
0f257a87749e0a72bda260c6f319a45dae1e7c4d
|
LayoutLMv3 image preparation code snippet does not work with PDFs
### System Info
- `transformers` version: 4.20.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is not a bug per se, but I wasn't sure how else to file it. The official LayoutLMv3 Transformers documentation indicates that PDF files can be directly processed; however, they can't -- at least, not with the current code snippets.
For example, this [code snippet](https://huggingface.co/docs/transformers/model_doc/layoutlmv3#transformers.LayoutLMv3FeatureExtractor.__call__.example) has the lines:
```
from PIL import Image
image = Image.open("name_of_your_document - can be a png file, pdf, etc.").convert("RGB")
```
However, `PIL.Image` cannot open PDFs. In fact, the [Pillow documentation](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html?highlight=pdf#:~:text=.palm.-,PDF,-%23) indicates that PDFs are only writable.
Reproduction is trivial, but, for completeness:
1. Download this pdf: https://slicedinvoices.com/pdf/wordpress-pdf-invoice-plugin-sample.pdf
2. Install Pillow: `pip install pillow`
3. Run this code:
```python
from PIL import Image
image = Image.open(<path_to_invoice.pdf>).convert("RGB")
```
Expected error:
```
UnidentifiedImageError: cannot identify image file '/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf'
```
### Expected behavior
The documentation should provide a working solution for processing PDFs.
I did notice that the `__call__` implementation of the `LayoutLMv3FeatureExtractor` has an `images` argument that accepts numpy arrays and torch tensors, in addition to Image objects. So, I assume one or more of the following options is the correct workflow:
1. Read PDFs into a python object that can be converted to an PIL.Image type.
2. Read/transform PDFs into an array as expected by the feature extractor.
3. Convert PDFs to an image and proceed with PIL.Image
However, as I'm new to document intelligence and modeling PDFs, I'll have to do some digging to identify the right solution. So, it would be nice if the documentation was updated so that others won't have to do the same.
One work-around (or solution?) is to just convert the PDF to an image, e.g.:
```python
import io
from wand.image import Image as WImage
import PIL
local_path = "/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf"
img = WImage(filename=local_path, resolution=100) # bigger
image = PIL.Image.open(io.BytesIO(img.make_blob("png"))).convert("RGB")
```
It also [looks like](https://stackoverflow.com/questions/47599012/how-to-convert-a-wand-image-object-to-numpy-array-without-opencv) Wand supports exporting to Numpy `array`.
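For reference, a minimal sketch of option 3 (convert the PDF to images first) using the `pdf2image` package instead of Wand. This is only an illustrative assumption, not an officially recommended path, and it requires `pdf2image` plus its poppler dependency to be installed:
```python
# Hypothetical sketch: render each PDF page to a PIL image before passing it on.
from pdf2image import convert_from_path

pages = convert_from_path("wordpress-pdf-invoice-plugin-sample.pdf", dpi=100)
images = [page.convert("RGB") for page in pages]  # list of PIL.Image objects, one per page
```
Either this or the Wand snippet above yields PIL images (or arrays) of the kind the feature extractor already accepts.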
|
cc @NielsRogge
Yes, feel free to improve the docs as was done for LayoutLMv2 in #15293
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
|
2022-08-22T13:04:57Z
|
<patch>
diff --git a/src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py b/src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py
--- a/src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py
+++ b/src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py
@@ -168,7 +168,8 @@ def __call__(
>>> from transformers import LayoutLMv2FeatureExtractor
>>> from PIL import Image
- >>> image = Image.open("name_of_your_document - can be a png file, pdf, etc.").convert("RGB")
+ >>> # Document can be a png, jpg, etc. PDFs must be converted to images.
+ >>> image = Image.open(name_of_your_document).convert("RGB")
>>> # option 1: with apply_ocr=True (default)
>>> feature_extractor = LayoutLMv2FeatureExtractor()
diff --git a/src/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py b/src/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py
--- a/src/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py
+++ b/src/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py
@@ -179,7 +179,8 @@ def __call__(
>>> from transformers import LayoutLMv3FeatureExtractor
>>> from PIL import Image
- >>> image = Image.open("name_of_your_document - can be a png file, pdf, etc.").convert("RGB")
+ >>> # Document can be a png, jpg, etc. PDFs must be converted to images.
+ >>> image = Image.open(name_of_your_document).convert("RGB")
>>> # option 1: with apply_ocr=True (default)
>>> feature_extractor = LayoutLMv3FeatureExtractor()
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-17619
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TST: Travis 3.6 Job Failing Across Multiple PR's
https://travis-ci.org/pandas-dev/pandas/jobs/278263195
https://travis-ci.org/pandas-dev/pandas/jobs/278232682
These are a couple of examples. Seems like `numpy` is not compatible for some strange reason.
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td>
13 </tr>
14 <td></td>
15 <td><img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /></td>
16 </tr>
17 <tr>
18 <td>Package Status</td>
19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
20 </tr>
21 <tr>
22 <td>License</td>
23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td>
24 </tr>
25 <tr>
26 <td>Build Status</td>
27 <td>
28 <a href="https://travis-ci.org/pandas-dev/pandas">
29 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td></td>
35 <td>
36 <a href="https://circleci.com/gh/pandas-dev/pandas">
37 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td></td>
43 <td>
44 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
45 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td>Coverage</td>
51 <td><img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /></td>
52 </tr>
53 <tr>
54 <td>Conda</td>
55 <td>
56 <a href="https://pandas.pydata.org">
57 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda default downloads" />
58 </a>
59 </td>
60 </tr>
61 <tr>
62 <td>Conda-forge</td>
63 <td>
64 <a href="https://pandas.pydata.org">
65 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
66 </a>
67 </td>
68 </tr>
69 <tr>
70 <td>PyPI</td>
71 <td>
72 <a href="https://pypi.python.org/pypi/pandas/">
73 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" />
74 </a>
75 </td>
76 </tr>
77 </table>
78
79 [](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
80
81 ## What is it
82
83 **pandas** is a Python package providing fast, flexible, and expressive data
84 structures designed to make working with "relational" or "labeled" data both
85 easy and intuitive. It aims to be the fundamental high-level building block for
86 doing practical, **real world** data analysis in Python. Additionally, it has
87 the broader goal of becoming **the most powerful and flexible open source data
88 analysis / manipulation tool available in any language**. It is already well on
89 its way toward this goal.
90
91 ## Main Features
92 Here are just a few of the things that pandas does well:
93
94 - Easy handling of [**missing data**][missing-data] (represented as
95 `NaN`) in floating point as well as non-floating point data
96 - Size mutability: columns can be [**inserted and
97 deleted**][insertion-deletion] from DataFrame and higher dimensional
98 objects
99 - Automatic and explicit [**data alignment**][alignment]: objects can
100 be explicitly aligned to a set of labels, or the user can simply
101 ignore the labels and let `Series`, `DataFrame`, etc. automatically
102 align the data for you in computations
103 - Powerful, flexible [**group by**][groupby] functionality to perform
104 split-apply-combine operations on data sets, for both aggregating
105 and transforming data
106 - Make it [**easy to convert**][conversion] ragged,
107 differently-indexed data in other Python and NumPy data structures
108 into DataFrame objects
109 - Intelligent label-based [**slicing**][slicing], [**fancy
110 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
111 large data sets
112 - Intuitive [**merging**][merging] and [**joining**][joining] data
113 sets
114 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
115 data sets
116 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
117 labels per tick)
118 - Robust IO tools for loading data from [**flat files**][flat-files]
119 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
120 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
121 - [**Time series**][timeseries]-specific functionality: date range
122 generation and frequency conversion, moving window statistics,
123 moving window linear regressions, date shifting and lagging, etc.
124
125
126 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
127 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
128 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
129 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
130 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
131 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
132 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
133 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
134 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
135 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
136 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
137 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
138 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
139 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
140 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
141 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
142 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
143 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
144
145 ## Where to get it
146 The source code is currently hosted on GitHub at:
147 https://github.com/pandas-dev/pandas
148
149 Binary installers for the latest released version are available at the [Python
150 package index](https://pypi.python.org/pypi/pandas) and on conda.
151
152 ```sh
153 # conda
154 conda install pandas
155 ```
156
157 ```sh
158 # or PyPI
159 pip install pandas
160 ```
161
162 ## Dependencies
163 - [NumPy](http://www.numpy.org): 1.7.0 or higher
164 - [python-dateutil](https://labix.org/python-dateutil): 1.5 or higher
165 - [pytz](https://pythonhosted.org/pytz)
166 - Needed for time zone support with ``pandas.date_range``
167
168 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
169 for recommended and optional dependencies.
170
171 ## Installation from sources
172 To install pandas from source you need Cython in addition to the normal
173 dependencies above. Cython can be installed from pypi:
174
175 ```sh
176 pip install cython
177 ```
178
179 In the `pandas` directory (same one where you found this file after
180 cloning the git repo), execute:
181
182 ```sh
183 python setup.py install
184 ```
185
186 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
187
188 ```sh
189 python setup.py develop
190 ```
191
192 Alternatively, you can use `pip` if you want all the dependencies pulled
193 in automatically (the `-e` option is for installing it in [development
194 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
195
196 ```sh
197 pip install -e .
198 ```
199
200 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
201
202 ## License
203 [BSD 3](LICENSE)
204
205 ## Documentation
206 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
207
208 The Sphinx documentation should provide a good starting point for learning how
209 to use the library. Expect the docs to continue to expand as time goes on.
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [Difficulty Novice](https://github.com/pandas-dev/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22) where you could start out.
229
230 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
231
232 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
233
[end of README.md]
[start of doc/make.py]
1 #!/usr/bin/env python
2
3 """
4 Python script for building documentation.
5
6 To build the docs you must have all optional dependencies for pandas
7 installed. See the installation instructions for a list of these.
8
9 <del>Note: currently latex builds do not work because of table formats that are not
10 supported in the latex generation.</del>
11
12 2014-01-30: Latex has some issues but 'latex_forced' works ok for 0.13.0-400 or so
13
14 Usage
15 -----
16 python make.py clean
17 python make.py html
18 """
19 from __future__ import print_function
20
21 import io
22 import glob # noqa
23 import os
24 import shutil
25 import sys
26 from contextlib import contextmanager
27
28 import sphinx # noqa
29 import argparse
30 import jinja2 # noqa
31
32 os.environ['PYTHONPATH'] = '..'
33
34 SPHINX_BUILD = 'sphinxbuild'
35
36
37 def _process_user(user):
38 if user is None or user is False:
39 user = ''
40 else:
41 user = user + '@'
42 return user
43
44
45 def upload_dev(user=None):
46 'push a copy to the pydata dev directory'
47 user = _process_user(user)
48 if os.system('cd build/html; rsync -avz . {0}pandas.pydata.org'
49 ':/usr/share/nginx/pandas/pandas-docs/dev/ -essh'.format(user)):
50 raise SystemExit('Upload to Pydata Dev failed')
51
52
53 def upload_dev_pdf(user=None):
54 'push a copy to the pydata dev directory'
55 user = _process_user(user)
56 if os.system('cd build/latex; scp pandas.pdf {0}pandas.pydata.org'
57 ':/usr/share/nginx/pandas/pandas-docs/dev/'.format(user)):
58 raise SystemExit('PDF upload to Pydata Dev failed')
59
60
61 def upload_stable(user=None):
62 'push a copy to the pydata stable directory'
63 user = _process_user(user)
64 if os.system('cd build/html; rsync -avz . {0}pandas.pydata.org'
65 ':/usr/share/nginx/pandas/pandas-docs/stable/ -essh'.format(user)):
66 raise SystemExit('Upload to stable failed')
67
68
69 def upload_stable_pdf(user=None):
70 'push a copy to the pydata dev directory'
71 user = _process_user(user)
72 if os.system('cd build/latex; scp pandas.pdf {0}pandas.pydata.org'
73 ':/usr/share/nginx/pandas/pandas-docs/stable/'.format(user)):
74 raise SystemExit('PDF upload to stable failed')
75
76
77 def upload_prev(ver, doc_root='./', user=None):
78 'push a copy of older release to appropriate version directory'
79 user = _process_user(user)
80 local_dir = doc_root + 'build/html'
81 remote_dir = '/usr/share/nginx/pandas/pandas-docs/version/%s/' % ver
82 cmd = 'cd %s; rsync -avz . %spandas.pydata.org:%s -essh'
83 cmd = cmd % (local_dir, user, remote_dir)
84 print(cmd)
85 if os.system(cmd):
86 raise SystemExit(
87 'Upload to %s from %s failed' % (remote_dir, local_dir))
88
89 local_dir = doc_root + 'build/latex'
90 pdf_cmd = 'cd %s; scp pandas.pdf %spandas.pydata.org:%s'
91 pdf_cmd = pdf_cmd % (local_dir, user, remote_dir)
92 if os.system(pdf_cmd):
93 raise SystemExit('Upload PDF to %s from %s failed' % (ver, doc_root))
94
95 def build_pandas():
96 os.chdir('..')
97 os.system('python setup.py clean')
98 os.system('python setup.py build_ext --inplace')
99 os.chdir('doc')
100
101 def build_prev(ver, user=None):  # `user` accepted (unused) to match the call in main()
102 if os.system('git checkout v%s' % ver) != 1:
103 os.chdir('..')
104 os.system('python setup.py clean')
105 os.system('python setup.py build_ext --inplace')
106 os.chdir('doc')
107 os.system('python make.py clean')
108 os.system('python make.py html')
109 os.system('python make.py latex')
110 os.system('git checkout master')
111
112
113 def clean():
114 if os.path.exists('build'):
115 shutil.rmtree('build')
116
117 if os.path.exists('source/generated'):
118 shutil.rmtree('source/generated')
119
120
121 @contextmanager
122 def maybe_exclude_notebooks():
123 """
124 Skip building the notebooks if pandoc is not installed.
125 This assumes that nbsphinx is installed.
126 """
127 base = os.path.dirname(__file__)
128 notebooks = [os.path.join(base, 'source', nb)
129 for nb in ['style.ipynb']]
130 contents = {}
131
132 def _remove_notebooks():
133 for nb in notebooks:
134 with open(nb, 'rt') as f:
135 contents[nb] = f.read()
136 os.remove(nb)
137
138 # Skip notebook conversion if
139 # 1. nbconvert isn't installed, or
140 # 2. nbconvert is installed, but pandoc isn't
141 try:
142 import nbconvert
143 except ImportError:
144 print("Warning: nbconvert not installed. Skipping notebooks.")
145 _remove_notebooks()
146 else:
147 try:
148 nbconvert.utils.pandoc.get_pandoc_version()
149 except nbconvert.utils.pandoc.PandocMissing:
150 print("Warning: Pandoc is not installed. Skipping notebooks.")
151 _remove_notebooks()
152
153 yield
154 for nb, content in contents.items():
155 with open(nb, 'wt') as f:
156 f.write(content)
157
158
159 def html():
160 check_build()
161
162 with maybe_exclude_notebooks():
163 if os.system('sphinx-build -P -b html -d build/doctrees '
164 'source build/html'):
165 raise SystemExit("Building HTML failed.")
166 try:
167 # remove stale file
168 os.remove('build/html/pandas.zip')
169 except:
170 pass
171
172
173 def zip_html():
174 try:
175 print("\nZipping up HTML docs...")
176 # just in case the wonky build box doesn't have zip
177 # don't fail this.
178 os.system('cd build; rm -f html/pandas.zip; zip html/pandas.zip -r -q html/* ')
179 print("\n")
180 except:
181 pass
182
183 def latex():
184 check_build()
185 if sys.platform != 'win32':
186 # LaTeX format.
187 if os.system('sphinx-build -j 2 -b latex -d build/doctrees '
188 'source build/latex'):
189 raise SystemExit("Building LaTeX failed.")
190 # Produce pdf.
191
192 os.chdir('build/latex')
193
194 # Call the makefile produced by sphinx...
195 if os.system('make'):
196 print("Rendering LaTeX failed.")
197 print("You may still be able to get a usable PDF file by going into 'build/latex'")
198 print("and executing 'pdflatex pandas.tex' for the requisite number of passes.")
199 print("Or using the 'latex_forced' target")
200 raise SystemExit
201
202 os.chdir('../..')
203 else:
204 print('latex build has not been tested on windows')
205
206 def latex_forced():
207 check_build()
208 if sys.platform != 'win32':
209 # LaTeX format.
210 if os.system('sphinx-build -j 2 -b latex -d build/doctrees '
211 'source build/latex'):
212 raise SystemExit("Building LaTeX failed.")
213 # Produce pdf.
214
215 os.chdir('build/latex')
216
217 # Manually call pdflatex, 3 passes should ensure latex fixes up
218 # all the required cross-references and such.
219 os.system('pdflatex -interaction=nonstopmode pandas.tex')
220 os.system('pdflatex -interaction=nonstopmode pandas.tex')
221 os.system('pdflatex -interaction=nonstopmode pandas.tex')
222 raise SystemExit("You should check the file 'build/latex/pandas.pdf' for problems.")
223
224 os.chdir('../..')
225 else:
226 print('latex build has not been tested on windows')
227
228
229 def check_build():
230 build_dirs = [
231 'build', 'build/doctrees', 'build/html',
232 'build/latex', 'build/plots', 'build/_static',
233 'build/_templates']
234 for d in build_dirs:
235 try:
236 os.mkdir(d)
237 except OSError:
238 pass
239
240
241 def all():
242 # clean()
243 html()
244
245
246 def auto_dev_build(debug=False):
247 msg = ''
248 try:
249 step = 'clean'
250 clean()
251 step = 'html'
252 html()
253 step = 'upload dev'
254 upload_dev()
255 if not debug:
256 sendmail(step)
257
258 step = 'latex'
259 latex()
260 step = 'upload pdf'
261 upload_dev_pdf()
262 if not debug:
263 sendmail(step)
264 except (Exception, SystemExit) as inst:
265 msg = str(inst) + '\n'
266 sendmail(step, '[ERROR] ' + msg)
267
268
269 def sendmail(step=None, err_msg=None):
270 from_name, to_name = _get_config()
271
272 if step is None:
273 step = ''
274
275 if err_msg is None or '[ERROR]' not in err_msg:
276 msgstr = 'Daily docs %s completed successfully' % step
277 subject = "DOC: %s successful" % step
278 else:
279 msgstr = err_msg
280 subject = "DOC: %s failed" % step
281
282 import smtplib
283 from email.MIMEText import MIMEText
284 msg = MIMEText(msgstr)
285 msg['Subject'] = subject
286 msg['From'] = from_name
287 msg['To'] = to_name
288
289 server_str, port, login, pwd = _get_credentials()
290 server = smtplib.SMTP(server_str, port)
291 server.ehlo()
292 server.starttls()
293 server.ehlo()
294
295 server.login(login, pwd)
296 try:
297 server.sendmail(from_name, to_name, msg.as_string())
298 finally:
299 server.close()
300
301
302 def _get_dir(subdir=None):
303 import getpass
304 USERNAME = getpass.getuser()
305 if sys.platform == 'darwin':
306 HOME = '/Users/%s' % USERNAME
307 else:
308 HOME = '/home/%s' % USERNAME
309
310 if subdir is None:
311 subdir = '/code/scripts/config'
312 conf_dir = '%s/%s' % (HOME, subdir)
313 return conf_dir
314
315
316 def _get_credentials():
317 tmp_dir = _get_dir()
318 cred = '%s/credentials' % tmp_dir
319 with open(cred, 'r') as fh:
320 server, port, un, domain = fh.read().split(',')
321 port = int(port)
322 login = un + '@' + domain + '.com'
323
324 import base64
325 with open('%s/cron_email_pwd' % tmp_dir, 'r') as fh:
326 pwd = base64.b64decode(fh.read())
327
328 return server, port, login, pwd
329
330
331 def _get_config():
332 tmp_dir = _get_dir()
333 with open('%s/addresses' % tmp_dir, 'r') as fh:
334 from_name, to_name = fh.read().split(',')
335 return from_name, to_name
336
337 funcd = {
338 'html': html,
339 'zip_html': zip_html,
340 'upload_dev': upload_dev,
341 'upload_stable': upload_stable,
342 'upload_dev_pdf': upload_dev_pdf,
343 'upload_stable_pdf': upload_stable_pdf,
344 'latex': latex,
345 'latex_forced': latex_forced,
346 'clean': clean,
347 'auto_dev': auto_dev_build,
348 'auto_debug': lambda: auto_dev_build(True),
349 'build_pandas': build_pandas,
350 'all': all,
351 }
352
353 small_docs = False
354
355 # current_dir = os.getcwd()
356 # os.chdir(os.path.dirname(os.path.join(current_dir, __file__)))
357
358 import argparse
359 argparser = argparse.ArgumentParser(description="""
360 pandas documentation builder
361 """.strip())
362
363 # argparser.add_argument('-arg_name', '--arg_name',
364 # metavar='label for arg help',
365 # type=str|etc,
366 # nargs='N|*|?|+|argparse.REMAINDER',
367 # required=False,
368 # #choices='abc',
369 # help='help string',
370 # action='store|store_true')
371
372 # args = argparser.parse_args()
373
374 #print args.accumulate(args.integers)
375
376 def generate_index(api=True, single=False, **kwds):
377 from jinja2 import Template
378 with open("source/index.rst.template") as f:
379 t = Template(f.read())
380
381 with open("source/index.rst","w") as f:
382 f.write(t.render(api=api,single=single,**kwds))
383
384 import argparse
385 argparser = argparse.ArgumentParser(description="pandas documentation builder",
386 epilog="Targets : %s" % funcd.keys())
387
388 argparser.add_argument('--no-api',
389 default=False,
390                        help='Omit api and autosummary',
391 action='store_true')
392 argparser.add_argument('--single',
393 metavar='FILENAME',
394 type=str,
395 default=False,
396 help='filename of section to compile, e.g. "indexing"')
397 argparser.add_argument('--user',
398 type=str,
399 default=False,
400 help='Username to connect to the pydata server')
401
402 def main():
403 args, unknown = argparser.parse_known_args()
404 sys.argv = [sys.argv[0]] + unknown
405 if args.single:
406 args.single = os.path.basename(args.single).split(".rst")[0]
407
408 if 'clean' in unknown:
409 args.single=False
410
411 generate_index(api=not args.no_api and not args.single, single=args.single)
412
413 if len(sys.argv) > 2:
414 ftype = sys.argv[1]
415 ver = sys.argv[2]
416
417 if ftype == 'build_previous':
418 build_prev(ver, user=args.user)
419 if ftype == 'upload_previous':
420 upload_prev(ver, user=args.user)
421 elif len(sys.argv) == 2:
422 for arg in sys.argv[1:]:
423 func = funcd.get(arg)
424 if func is None:
425 raise SystemExit('Do not know how to handle %s; valid args are %s' % (
426 arg, list(funcd.keys())))
427 if args.user:
428 func(user=args.user)
429 else:
430 func()
431 else:
432 small_docs = False
433 all()
434 # os.chdir(current_dir)
435
436 if __name__ == '__main__':
437 import sys
438 sys.exit(main())
439
[end of doc/make.py]
[start of pandas/io/gbq.py]
1 """ Google BigQuery support """
2
3
4 def _try_import():
5 # since pandas is a dependency of pandas-gbq
6 # we need to import on first use
7 try:
8 import pandas_gbq
9 except ImportError:
10
11 # give a nice error message
12 raise ImportError("Load data from Google BigQuery\n"
13 "\n"
14 "the pandas-gbq package is not installed\n"
15 "see the docs: https://pandas-gbq.readthedocs.io\n"
16 "\n"
17 "you can install via pip or conda:\n"
18 "pip install pandas-gbq\n"
19 "conda install pandas-gbq -c conda-forge\n")
20
21 return pandas_gbq
22
23
24 def read_gbq(query, project_id=None, index_col=None, col_order=None,
25 reauth=False, verbose=True, private_key=None, dialect='legacy',
26 **kwargs):
27 r"""Load data from Google BigQuery.
28
29 The main method a user calls to execute a Query in Google BigQuery
30 and read results into a pandas DataFrame.
31
32 Google BigQuery API Client Library v2 for Python is used.
33 Documentation is available `here
34 <https://developers.google.com/api-client-library/python/apis/bigquery/v2>`__
35
36 Authentication to the Google BigQuery service is via OAuth 2.0.
37
38 - If "private_key" is not provided:
39
40 By default "application default credentials" are used.
41
42 If default application credentials are not found or are restrictive,
43 user account credentials are used. In this case, you will be asked to
44 grant permissions for product name 'pandas GBQ'.
45
46 - If "private_key" is provided:
47
48 Service account credentials will be used to authenticate.
49
50 Parameters
51 ----------
52 query : str
53 SQL-Like Query to return data values
54 project_id : str
55 Google BigQuery Account project ID.
56 index_col : str (optional)
57 Name of result column to use for index in results DataFrame
58 col_order : list(str) (optional)
59 List of BigQuery column names in the desired order for results
60 DataFrame
61 reauth : boolean (default False)
62 Force Google BigQuery to reauthenticate the user. This is useful
63 if multiple accounts are used.
64 verbose : boolean (default True)
65 Verbose output
66 private_key : str (optional)
67 Service account private key in JSON format. Can be file path
68 or string contents. This is useful for remote server
69 authentication (eg. jupyter iPython notebook on remote host)
70
71 dialect : {'legacy', 'standard'}, default 'legacy'
72 'legacy' : Use BigQuery's legacy SQL dialect.
73 'standard' : Use BigQuery's standard SQL (beta), which is
74 compliant with the SQL 2011 standard. For more information
75 see `BigQuery SQL Reference
76 <https://cloud.google.com/bigquery/sql-reference/>`__
77
78 **kwargs : Arbitrary keyword arguments
79 configuration (dict): query config parameters for job processing.
80 For example:
81
82 configuration = {'query': {'useQueryCache': False}}
83
84 For more information see `BigQuery SQL Reference
85 <https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query>`__
86
87 Returns
88 -------
89 df: DataFrame
90 DataFrame representing results of query
91
92 """
93 pandas_gbq = _try_import()
94 return pandas_gbq.read_gbq(
95 query, project_id=project_id,
96 index_col=index_col, col_order=col_order,
97 reauth=reauth, verbose=verbose,
98 private_key=private_key,
99 dialect=dialect,
100 **kwargs)
101
102
103 def to_gbq(dataframe, destination_table, project_id, chunksize=10000,
104 verbose=True, reauth=False, if_exists='fail', private_key=None):
105 pandas_gbq = _try_import()
106 pandas_gbq.to_gbq(dataframe, destination_table, project_id,
107 chunksize=chunksize,
108 verbose=verbose, reauth=reauth,
109 if_exists=if_exists, private_key=private_key)
110
[end of pandas/io/gbq.py]
[start of scripts/merge-pr.py]
1 #!/usr/bin/env python
2
3 #
4 # Licensed to the Apache Software Foundation (ASF) under one or more
5 # contributor license agreements. See the NOTICE file distributed with
6 # this work for additional information regarding copyright ownership.
7 # The ASF licenses this file to You under the Apache License, Version 2.0
8 # (the "License"); you may not use this file except in compliance with
9 # the License. You may obtain a copy of the License at
10 #
11 # http://www.apache.org/licenses/LICENSE-2.0
12 #
13 # Unless required by applicable law or agreed to in writing, software
14 # distributed under the License is distributed on an "AS IS" BASIS,
15 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
16 # See the License for the specific language governing permissions and
17 # limitations under the License.
18 #
19
20 # Utility for creating well-formed pull request merges and pushing them to
21 # Apache.
22 # usage: ./apache-pr-merge.py (see config env vars below)
23 #
24 # Lightly modified from version of this script in incubator-parquet-format
25
26 from __future__ import print_function
27
28 from subprocess import check_output
29 from requests.auth import HTTPBasicAuth
30 import requests
31
32 import os
33 import six
34 import sys
35 import textwrap
36
37 from six.moves import input
38
39 PANDAS_HOME = '.'
40 PROJECT_NAME = 'pandas'
41 print("PANDAS_HOME = " + PANDAS_HOME)
42
43 # Remote name with the PR
44 PR_REMOTE_NAME = os.environ.get("PR_REMOTE_NAME", "upstream")
45
46 # Remote name where results pushed
47 PUSH_REMOTE_NAME = os.environ.get("PUSH_REMOTE_NAME", "upstream")
48
49 GITHUB_BASE = "https://github.com/pandas-dev/" + PROJECT_NAME + "/pull"
50 GITHUB_API_BASE = "https://api.github.com/repos/pandas-dev/" + PROJECT_NAME
51
52 # Prefix added to temporary branches
53 BRANCH_PREFIX = "PR_TOOL"
54
55 os.chdir(PANDAS_HOME)
56
57 auth_required = False
58
59 if auth_required:
60 GITHUB_USERNAME = os.environ['GITHUB_USER']
61 import getpass
62 GITHUB_PASSWORD = getpass.getpass('Enter github.com password for %s:'
63 % GITHUB_USERNAME)
64
65 def get_json_auth(url):
66 auth = HTTPBasicAuth(GITHUB_USERNAME, GITHUB_PASSWORD)
67 req = requests.get(url, auth=auth)
68 return req.json()
69
70 get_json = get_json_auth
71 else:
72 def get_json_no_auth(url):
73 req = requests.get(url)
74 return req.json()
75
76 get_json = get_json_no_auth
77
78
79 def fail(msg):
80 print(msg)
81 clean_up()
82 sys.exit(-1)
83
84
85 def run_cmd(cmd):
86 if isinstance(cmd, six.string_types):
87 cmd = cmd.split(' ')
88
89 output = check_output(cmd)
90
91 if isinstance(output, six.binary_type):
92 output = output.decode('utf-8')
93 return output
94
95
96 def continue_maybe(prompt):
97 result = input("\n%s (y/n): " % prompt)
98 if result.lower() != "y":
99 fail("Okay, exiting")
100
101
102 def continue_maybe2(prompt):
103 result = input("\n%s (y/n): " % prompt)
104 if result.lower() != "y":
105 return False
106 else:
107 return True
108
109
110 original_head = run_cmd("git rev-parse HEAD")[:8]
111
112
113 def clean_up():
114 print("Restoring head pointer to %s" % original_head)
115 run_cmd("git checkout %s" % original_head)
116
117 branches = run_cmd("git branch").replace(" ", "").split("\n")
118
119 for branch in [b for b in branches if b.startswith(BRANCH_PREFIX)]:
120 print("Deleting local branch %s" % branch)
121 run_cmd("git branch -D %s" % branch)
122
123
124 # Merge the requested PR and return the merge hash
125 def merge_pr(pr_num, target_ref):
126
127 pr_branch_name = "%s_MERGE_PR_%s" % (BRANCH_PREFIX, pr_num)
128 target_branch_name = "%s_MERGE_PR_%s_%s" % (BRANCH_PREFIX, pr_num,
129 target_ref.upper())
130 run_cmd("git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num,
131 pr_branch_name))
132 run_cmd("git fetch %s %s:%s" % (PUSH_REMOTE_NAME, target_ref,
133 target_branch_name))
134 run_cmd("git checkout %s" % target_branch_name)
135
136 had_conflicts = False
137 try:
138 run_cmd(['git', 'merge', pr_branch_name, '--squash'])
139 except Exception as e:
140 msg = ("Error merging: %s\nWould you like to manually fix-up "
141 "this merge?" % e)
142 continue_maybe(msg)
143 msg = ("Okay, please fix any conflicts and 'git add' "
144 "conflicting files... Finished?")
145 continue_maybe(msg)
146 had_conflicts = True
147
148 commit_authors = run_cmd(['git', 'log', 'HEAD..%s' % pr_branch_name,
149 '--pretty=format:%an <%ae>']).split("\n")
150 distinct_authors = sorted(set(commit_authors),
151 key=lambda x: commit_authors.count(x),
152 reverse=True)
153 primary_author = distinct_authors[0]
154 commits = run_cmd(['git', 'log', 'HEAD..%s' % pr_branch_name,
155 '--pretty=format:%h [%an] %s']).split("\n\n")
156
157 merge_message_flags = []
158
159 merge_message_flags += ["-m", title]
160 if body is not None:
161 merge_message_flags += ["-m", '\n'.join(textwrap.wrap(body))]
162
163 authors = "\n".join(["Author: %s" % a for a in distinct_authors])
164
165 merge_message_flags += ["-m", authors]
166
167 if had_conflicts:
168 committer_name = run_cmd("git config --get user.name").strip()
169 committer_email = run_cmd("git config --get user.email").strip()
170 message = ("This patch had conflicts when merged, "
171 "resolved by\nCommitter: %s <%s>"
172 % (committer_name, committer_email))
173 merge_message_flags += ["-m", message]
174
175     # The string "Closes #%s" is required for GitHub to correctly close
176 # the PR
177 merge_message_flags += [
178 "-m",
179 "Closes #%s from %s and squashes the following commits:"
180 % (pr_num, pr_repo_desc)]
181 for c in commits:
182 merge_message_flags += ["-m", c]
183
184 run_cmd(['git', 'commit', '--author="%s"' % primary_author] +
185 merge_message_flags)
186
187 continue_maybe("Merge complete (local ref %s). Push to %s?" % (
188 target_branch_name, PUSH_REMOTE_NAME))
189
190 try:
191 run_cmd('git push %s %s:%s' % (PUSH_REMOTE_NAME, target_branch_name,
192 target_ref))
193 except Exception as e:
194 clean_up()
195 fail("Exception while pushing: %s" % e)
196
197 merge_hash = run_cmd("git rev-parse %s" % target_branch_name)[:8]
198 clean_up()
199 print("Pull request #%s merged!" % pr_num)
200 print("Merge hash: %s" % merge_hash)
201 return merge_hash
202
203
204 def update_pr(pr_num, user_login, base_ref):
205
206 pr_branch_name = "%s_MERGE_PR_%s" % (BRANCH_PREFIX, pr_num)
207
208 run_cmd("git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num,
209 pr_branch_name))
210 run_cmd("git checkout %s" % pr_branch_name)
211
212 continue_maybe("Update ready (local ref %s)? Push to %s/%s?" % (
213 pr_branch_name, user_login, base_ref))
214
215 push_user_remote = "https://github.com/%s/pandas.git" % user_login
216
217 try:
218 run_cmd('git push %s %s:%s' % (push_user_remote, pr_branch_name,
219 base_ref))
220 except Exception as e:
221
222 if continue_maybe2("Force push?"):
223 try:
224 run_cmd(
225 'git push -f %s %s:%s' % (push_user_remote, pr_branch_name,
226 base_ref))
227 except Exception as e:
228 fail("Exception while pushing: %s" % e)
229 clean_up()
230 else:
231 fail("Exception while pushing: %s" % e)
232 clean_up()
233
234 clean_up()
235 print("Pull request #%s updated!" % pr_num)
236
237
238 def cherry_pick(pr_num, merge_hash, default_branch):
239 pick_ref = input("Enter a branch name [%s]: " % default_branch)
240 if pick_ref == "":
241 pick_ref = default_branch
242
243 pick_branch_name = "%s_PICK_PR_%s_%s" % (BRANCH_PREFIX, pr_num,
244 pick_ref.upper())
245
246 run_cmd("git fetch %s %s:%s" % (PUSH_REMOTE_NAME, pick_ref,
247 pick_branch_name))
248 run_cmd("git checkout %s" % pick_branch_name)
249 run_cmd("git cherry-pick -sx %s" % merge_hash)
250
251 continue_maybe("Pick complete (local ref %s). Push to %s?" % (
252 pick_branch_name, PUSH_REMOTE_NAME))
253
254 try:
255 run_cmd('git push %s %s:%s' % (PUSH_REMOTE_NAME, pick_branch_name,
256 pick_ref))
257 except Exception as e:
258 clean_up()
259 fail("Exception while pushing: %s" % e)
260
261 pick_hash = run_cmd("git rev-parse %s" % pick_branch_name)[:8]
262 clean_up()
263
264 print("Pull request #%s picked into %s!" % (pr_num, pick_ref))
265 print("Pick hash: %s" % pick_hash)
266 return pick_ref
267
268
269 def fix_version_from_branch(branch, versions):
270 # Note: Assumes this is a sorted (newest->oldest) list of un-released
271 # versions
272 if branch == "master":
273 return versions[0]
274 else:
275 branch_ver = branch.replace("branch-", "")
276 return filter(lambda x: x.name.startswith(branch_ver), versions)[-1]
277
278 pr_num = input("Which pull request would you like to merge? (e.g. 34): ")
279 pr = get_json("%s/pulls/%s" % (GITHUB_API_BASE, pr_num))
280
281 url = pr["url"]
282 title = pr["title"]
283 body = pr["body"]
284 target_ref = pr["base"]["ref"]
285 user_login = pr["user"]["login"]
286 base_ref = pr["head"]["ref"]
287 pr_repo_desc = "%s/%s" % (user_login, base_ref)
288
289 if pr["merged"] is True:
290 print("Pull request {0} has already been merged, please backport manually"
291 .format(pr_num))
292 sys.exit(0)
293
294 if not bool(pr["mergeable"]):
295 msg = ("Pull request {0} is not mergeable in its current form.\n"
296 "Continue? (experts only!)".format(pr_num))
297 continue_maybe(msg)
298
299 print("\n=== Pull Request #%s ===" % pr_num)
300 print("title\t%s\nsource\t%s\ntarget\t%s\nurl\t%s"
301 % (title, pr_repo_desc, target_ref, url))
302
303
304
305 merged_refs = [target_ref]
306
307 print("\nProceed with updating or merging pull request #%s?" % pr_num)
308 update = input("Update PR and push to remote (r), merge locally (l), "
309 "or do nothing (n) ?")
310 update = update.lower()
311
312 if update == 'r':
313 merge_hash = update_pr(pr_num, user_login, base_ref)
314 elif update == 'l':
315 merge_hash = merge_pr(pr_num, target_ref)
316
[end of scripts/merge-pr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
fedf92287d0216d54fe756829362e1d10a3fdc58
|
TST: Travis 3.6 Job Failing Across Multiple PR's
https://travis-ci.org/pandas-dev/pandas/jobs/278263195
https://travis-ci.org/pandas-dev/pandas/jobs/278232682
These are a couple of examples. Seems like `numpy` is not compatible for some strange reason.
|
conda-forge appears to have updated the numpy build; I think other deps need updating (i.e. anything that depends on numpy)
you can check their recently merged PR list to see
Hmm...`numpy` hasn't made a new release since July...that's pretty late to update on `conda`.
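A quick way to confirm which numpy package conda actually resolved in the failing environment (a hedged sketch, assuming `conda` is available on the PATH; the column layout may vary slightly between conda versions):
```python
# Sketch: print the installed numpy entry (name, version, build string, channel)
# to check whether it matches the openblas builds used by the other dependencies.
import subprocess

print(subprocess.check_output(["conda", "list", "numpy"]).decode())
```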
|
2017-09-21T19:22:11Z
|
<patch>
diff --git a/ci/requirements-3.6.build b/ci/requirements-3.6.build
--- a/ci/requirements-3.6.build
+++ b/ci/requirements-3.6.build
@@ -2,5 +2,7 @@ python=3.6*
python-dateutil
pytz
nomkl
-numpy
cython
+
+# pin numpy that is built for all our deps
+numpy=1.13.1=py36_blas_openblas_201
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-29356
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Selecting "subsets" of a `MultiIndex` `DataFrame` sometimes changes `dtypes`
#### Code Sample, a copy-pastable example if possible
Definition of data and columns:
<details>
```python
import numpy as np
import pandas as pd
from numpy import nan
data = [['n', 1, 0, False, 2, 1, False, 0, 0, False, 2, 0, False, 0, 1, False, 1, 1, False, 'o',
1521734085.289453, 'p', 3233, 1521734085.289494]]
columns = [('a', 'd', 'i', nan, nan),
('a', 'd', 'j', 0.0, 'k'),
('a', 'd', 'j', 0.0, 'l'),
('a', 'd', 'j', 0.0, 'm'),
('a', 'd', 'j', 1.0, 'k'),
('a', 'd', 'j', 1.0, 'l'),
('a', 'd', 'j', 1.0, 'm'),
('a', 'd', 'j', 2.0, 'k'),
('a', 'd', 'j', 2.0, 'l'),
('a', 'd', 'j', 2.0, 'm'),
('a', 'd', 'j', 3.0, 'k'),
('a', 'd', 'j', 3.0, 'l'),
('a', 'd', 'j', 3.0, 'm'),
('a', 'd', 'j', 4.0, 'k'),
('a', 'd', 'j', 4.0, 'l'),
('a', 'd', 'j', 4.0, 'm'),
('a', 'd', 'j', 5.0, 'k'),
('a', 'd', 'j', 5.0, 'l'),
('a', 'd', 'j', 5.0, 'm'),
('b', 'f', nan, nan, nan),
('b', 'h', nan, nan, nan),
('c', 'e', nan, nan, nan),
('c', 'g', nan, nan, nan),
('c', 'h', nan, nan, nan)]
```
</details>
```python
pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).dtypes.a.d.i
# object
pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).a.d.i.dtypes
# float64
```
this causes for example:
```python
pd.DataFrame(np.array(data), columns=pd.MultiIndex.from_tuples(columns)).a.d.i
# "n", dtype: object
pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).a.d.i
# nan, dtype: float32
```
#### Problem description
I think the example is self-explanatory.
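For completeness, the two access paths from the report combined into one runnable block (a minimal sketch reusing the `data`/`columns` defined above; the inline comments repeat the dtypes reported in this issue):
```python
import pandas as pd

df = pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns))

reported = df.dtypes.a.d.i   # object   -- dtype listed by DataFrame.dtypes
selected = df.a.d.i.dtypes   # float64  -- dtype of the subset actually selected
print(reported)
print(selected)              # ideally these two would agree
```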
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.12.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-119-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.utf8
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.23.0.dev0+38.g6552718
pytest: 2.8.7
pip: 9.0.1
setuptools: 20.7.0
Cython: 0.23.4
numpy: 1.14.2
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 5.5.0
sphinx: 1.3.6
patsy: 0.4.1
dateutil: 2.7.2
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.2
openpyxl: 2.3.0
xlrd: 0.9.4
xlwt: 0.7.5
xlsxwriter: 0.7.3
lxml: 3.5.0
bs4: 4.4.1
html5lib: 0.9999999
sqlalchemy: 1.0.11
pymysql: None
psycopg2: 2.6.1 (dt dec mx pq3 ext lo64)
jinja2: 2.8
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org): 1.13.3 or higher
168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
172 for recommended and optional dependencies.
173
174 ## Installation from sources
175 To install pandas from source you need Cython in addition to the normal
176 dependencies above. Cython can be installed from pypi:
177
178 ```sh
179 pip install cython
180 ```
181
182 In the `pandas` directory (same one where you found this file after
183 cloning the git repo), execute:
184
185 ```sh
186 python setup.py install
187 ```
188
189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
190
191
192 ```sh
193 python -m pip install -e . --no-build-isolation --no-use-pep517
194 ```
195
196 If you have `make`, you can also use `make develop` to run the same command.
197
198 or alternatively
199
200 ```sh
201 python setup.py develop
202 ```
203
204 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
205
206 ## License
207 [BSD 3](LICENSE)
208
209 ## Documentation
210 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
211
212 ## Background
213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
214 has been under active development since then.
215
216 ## Getting Help
217
218 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
219 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
220
221 ## Discussion and Development
222 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
223
224 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
225
226 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
227
228 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
229
230 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
231
232 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
233
234 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
235
236 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
237
238 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
239
[end of README.md]
[start of pandas/core/reshape/melt.py]
1 import re
2
3 import numpy as np
4
5 from pandas.util._decorators import Appender
6
7 from pandas.core.dtypes.common import is_extension_type, is_list_like
8 from pandas.core.dtypes.concat import concat_compat
9 from pandas.core.dtypes.generic import ABCMultiIndex
10 from pandas.core.dtypes.missing import notna
11
12 from pandas.core.arrays import Categorical
13 from pandas.core.frame import _shared_docs
14 from pandas.core.indexes.base import Index
15 from pandas.core.reshape.concat import concat
16 from pandas.core.tools.numeric import to_numeric
17
18
19 @Appender(
20 _shared_docs["melt"]
21 % dict(caller="pd.melt(df, ", versionadded="", other="DataFrame.melt")
22 )
23 def melt(
24 frame,
25 id_vars=None,
26 value_vars=None,
27 var_name=None,
28 value_name="value",
29 col_level=None,
30 ):
31 # TODO: what about the existing index?
32 # If multiindex, gather names of columns on all level for checking presence
33 # of `id_vars` and `value_vars`
34 if isinstance(frame.columns, ABCMultiIndex):
35 cols = [x for c in frame.columns for x in c]
36 else:
37 cols = list(frame.columns)
38 if id_vars is not None:
39 if not is_list_like(id_vars):
40 id_vars = [id_vars]
41 elif isinstance(frame.columns, ABCMultiIndex) and not isinstance(id_vars, list):
42 raise ValueError(
43 "id_vars must be a list of tuples when columns are a MultiIndex"
44 )
45 else:
46 # Check that `id_vars` are in frame
47 id_vars = list(id_vars)
48 missing = Index(np.ravel(id_vars)).difference(cols)
49 if not missing.empty:
50 raise KeyError(
51 "The following 'id_vars' are not present"
52 " in the DataFrame: {missing}"
53 "".format(missing=list(missing))
54 )
55 else:
56 id_vars = []
57
58 if value_vars is not None:
59 if not is_list_like(value_vars):
60 value_vars = [value_vars]
61 elif isinstance(frame.columns, ABCMultiIndex) and not isinstance(
62 value_vars, list
63 ):
64 raise ValueError(
65 "value_vars must be a list of tuples when columns are a MultiIndex"
66 )
67 else:
68 value_vars = list(value_vars)
69 # Check that `value_vars` are in frame
70 missing = Index(np.ravel(value_vars)).difference(cols)
71 if not missing.empty:
72 raise KeyError(
73 "The following 'value_vars' are not present in"
74 " the DataFrame: {missing}"
75 "".format(missing=list(missing))
76 )
77 frame = frame.loc[:, id_vars + value_vars]
78 else:
79 frame = frame.copy()
80
81 if col_level is not None: # allow list or other?
82 # frame is a copy
83 frame.columns = frame.columns.get_level_values(col_level)
84
85 if var_name is None:
86 if isinstance(frame.columns, ABCMultiIndex):
87 if len(frame.columns.names) == len(set(frame.columns.names)):
88 var_name = frame.columns.names
89 else:
90 var_name = [
91 "variable_{i}".format(i=i) for i in range(len(frame.columns.names))
92 ]
93 else:
94 var_name = [
95 frame.columns.name if frame.columns.name is not None else "variable"
96 ]
97 if isinstance(var_name, str):
98 var_name = [var_name]
99
100 N, K = frame.shape
101 K -= len(id_vars)
102
103 mdata = {}
104 for col in id_vars:
105 id_data = frame.pop(col)
106 if is_extension_type(id_data):
107 id_data = concat([id_data] * K, ignore_index=True)
108 else:
109 id_data = np.tile(id_data.values, K)
110 mdata[col] = id_data
111
112 mcolumns = id_vars + var_name + [value_name]
113
114 mdata[value_name] = frame.values.ravel("F")
115 for i, col in enumerate(var_name):
116 # asanyarray will keep the columns as an Index
117 mdata[col] = np.asanyarray(frame.columns._get_level_values(i)).repeat(N)
118
119 return frame._constructor(mdata, columns=mcolumns)
120
121
122 def lreshape(data, groups, dropna=True, label=None):
123 """
124 Reshape long-format data to wide. Generalized inverse of DataFrame.pivot
125
126 Parameters
127 ----------
128 data : DataFrame
129 groups : dict
130 {new_name : list_of_columns}
131 dropna : boolean, default True
132
133 Examples
134 --------
135 >>> data = pd.DataFrame({'hr1': [514, 573], 'hr2': [545, 526],
136 ... 'team': ['Red Sox', 'Yankees'],
137 ... 'year1': [2007, 2007], 'year2': [2008, 2008]})
138 >>> data
139 hr1 hr2 team year1 year2
140 0 514 545 Red Sox 2007 2008
141 1 573 526 Yankees 2007 2008
142
143 >>> pd.lreshape(data, {'year': ['year1', 'year2'], 'hr': ['hr1', 'hr2']})
144 team year hr
145 0 Red Sox 2007 514
146 1 Yankees 2007 573
147 2 Red Sox 2008 545
148 3 Yankees 2008 526
149
150 Returns
151 -------
152 reshaped : DataFrame
153 """
154 if isinstance(groups, dict):
155 keys = list(groups.keys())
156 values = list(groups.values())
157 else:
158 keys, values = zip(*groups)
159
160 all_cols = list(set.union(*[set(x) for x in values]))
161 id_cols = list(data.columns.difference(all_cols))
162
163 K = len(values[0])
164
165 for seq in values:
166 if len(seq) != K:
167 raise ValueError("All column lists must be same length")
168
169 mdata = {}
170 pivot_cols = []
171
172 for target, names in zip(keys, values):
173 to_concat = [data[col].values for col in names]
174
175 mdata[target] = concat_compat(to_concat)
176 pivot_cols.append(target)
177
178 for col in id_cols:
179 mdata[col] = np.tile(data[col].values, K)
180
181 if dropna:
182 mask = np.ones(len(mdata[pivot_cols[0]]), dtype=bool)
183 for c in pivot_cols:
184 mask &= notna(mdata[c])
185 if not mask.all():
186 mdata = {k: v[mask] for k, v in mdata.items()}
187
188 return data._constructor(mdata, columns=id_cols + pivot_cols)
189
190
191 def wide_to_long(df, stubnames, i, j, sep: str = "", suffix: str = r"\d+"):
192 r"""
193 Wide panel to long format. Less flexible but more user-friendly than melt.
194
195 With stubnames ['A', 'B'], this function expects to find one or more
196 group of columns with format
197 A-suffix1, A-suffix2,..., B-suffix1, B-suffix2,...
198 You specify what you want to call this suffix in the resulting long format
199 with `j` (for example `j='year'`)
200
 201     Each row of these wide variables is assumed to be uniquely identified by
202 `i` (can be a single column name or a list of column names)
203
204 All remaining variables in the data frame are left intact.
205
206 Parameters
207 ----------
208 df : DataFrame
209 The wide-format DataFrame
210 stubnames : str or list-like
211 The stub name(s). The wide format variables are assumed to
212 start with the stub names.
213 i : str or list-like
214 Column(s) to use as id variable(s)
215 j : str
216 The name of the sub-observation variable. What you wish to name your
217 suffix in the long format.
218 sep : str, default ""
219 A character indicating the separation of the variable names
220 in the wide format, to be stripped from the names in the long format.
221 For example, if your column names are A-suffix1, A-suffix2, you
222 can strip the hyphen by specifying `sep='-'`
223 suffix : str, default '\\d+'
224 A regular expression capturing the wanted suffixes. '\\d+' captures
225 numeric suffixes. Suffixes with no numbers could be specified with the
226 negated character class '\\D+'. You can also further disambiguate
227 suffixes, for example, if your wide variables are of the form
228 A-one, B-two,.., and you have an unrelated column A-rating, you can
229 ignore the last one by specifying `suffix='(!?one|two)'`
230
231 .. versionchanged:: 0.23.0
232 When all suffixes are numeric, they are cast to int64/float64.
233
234 Returns
235 -------
236 DataFrame
237 A DataFrame that contains each stub name as a variable, with new index
238 (i, j).
239
240 Notes
241 -----
242 All extra variables are left untouched. This simply uses
243 `pandas.melt` under the hood, but is hard-coded to "do the right thing"
244 in a typical case.
245
246 Examples
247 --------
248 >>> np.random.seed(123)
249 >>> df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"},
250 ... "A1980" : {0 : "d", 1 : "e", 2 : "f"},
251 ... "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7},
252 ... "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1},
253 ... "X" : dict(zip(range(3), np.random.randn(3)))
254 ... })
255 >>> df["id"] = df.index
256 >>> df
257 A1970 A1980 B1970 B1980 X id
258 0 a d 2.5 3.2 -1.085631 0
259 1 b e 1.2 1.3 0.997345 1
260 2 c f 0.7 0.1 0.282978 2
261 >>> pd.wide_to_long(df, ["A", "B"], i="id", j="year")
262 ... # doctest: +NORMALIZE_WHITESPACE
263 X A B
264 id year
265 0 1970 -1.085631 a 2.5
266 1 1970 0.997345 b 1.2
267 2 1970 0.282978 c 0.7
268 0 1980 -1.085631 d 3.2
269 1 1980 0.997345 e 1.3
270 2 1980 0.282978 f 0.1
271
272 With multiple id columns
273
274 >>> df = pd.DataFrame({
275 ... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
276 ... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
277 ... 'ht1': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
278 ... 'ht2': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
279 ... })
280 >>> df
281 famid birth ht1 ht2
282 0 1 1 2.8 3.4
283 1 1 2 2.9 3.8
284 2 1 3 2.2 2.9
285 3 2 1 2.0 3.2
286 4 2 2 1.8 2.8
287 5 2 3 1.9 2.4
288 6 3 1 2.2 3.3
289 7 3 2 2.3 3.4
290 8 3 3 2.1 2.9
291 >>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age')
292 >>> l
293 ... # doctest: +NORMALIZE_WHITESPACE
294 ht
295 famid birth age
296 1 1 1 2.8
297 2 3.4
298 2 1 2.9
299 2 3.8
300 3 1 2.2
301 2 2.9
302 2 1 1 2.0
303 2 3.2
304 2 1 1.8
305 2 2.8
306 3 1 1.9
307 2 2.4
308 3 1 1 2.2
309 2 3.3
310 2 1 2.3
311 2 3.4
312 3 1 2.1
313 2 2.9
314
315 Going from long back to wide just takes some creative use of `unstack`
316
317 >>> w = l.unstack()
318 >>> w.columns = w.columns.map('{0[0]}{0[1]}'.format)
319 >>> w.reset_index()
320 famid birth ht1 ht2
321 0 1 1 2.8 3.4
322 1 1 2 2.9 3.8
323 2 1 3 2.2 2.9
324 3 2 1 2.0 3.2
325 4 2 2 1.8 2.8
326 5 2 3 1.9 2.4
327 6 3 1 2.2 3.3
328 7 3 2 2.3 3.4
329 8 3 3 2.1 2.9
330
331 Less wieldy column names are also handled
332
333 >>> np.random.seed(0)
334 >>> df = pd.DataFrame({'A(weekly)-2010': np.random.rand(3),
335 ... 'A(weekly)-2011': np.random.rand(3),
336 ... 'B(weekly)-2010': np.random.rand(3),
337 ... 'B(weekly)-2011': np.random.rand(3),
338 ... 'X' : np.random.randint(3, size=3)})
339 >>> df['id'] = df.index
340 >>> df # doctest: +NORMALIZE_WHITESPACE, +ELLIPSIS
341 A(weekly)-2010 A(weekly)-2011 B(weekly)-2010 B(weekly)-2011 X id
342 0 0.548814 0.544883 0.437587 0.383442 0 0
343 1 0.715189 0.423655 0.891773 0.791725 1 1
344 2 0.602763 0.645894 0.963663 0.528895 1 2
345
346 >>> pd.wide_to_long(df, ['A(weekly)', 'B(weekly)'], i='id',
347 ... j='year', sep='-')
348 ... # doctest: +NORMALIZE_WHITESPACE
349 X A(weekly) B(weekly)
350 id year
351 0 2010 0 0.548814 0.437587
352 1 2010 1 0.715189 0.891773
353 2 2010 1 0.602763 0.963663
354 0 2011 0 0.544883 0.383442
355 1 2011 1 0.423655 0.791725
356 2 2011 1 0.645894 0.528895
357
358 If we have many columns, we could also use a regex to find our
359 stubnames and pass that list on to wide_to_long
360
361 >>> stubnames = sorted(
362 ... set([match[0] for match in df.columns.str.findall(
363 ... r'[A-B]\(.*\)').values if match != [] ])
364 ... )
365 >>> list(stubnames)
366 ['A(weekly)', 'B(weekly)']
367
368 All of the above examples have integers as suffixes. It is possible to
369 have non-integers as suffixes.
370
371 >>> df = pd.DataFrame({
372 ... 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
373 ... 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
374 ... 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
375 ... 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
376 ... })
377 >>> df
378 famid birth ht_one ht_two
379 0 1 1 2.8 3.4
380 1 1 2 2.9 3.8
381 2 1 3 2.2 2.9
382 3 2 1 2.0 3.2
383 4 2 2 1.8 2.8
384 5 2 3 1.9 2.4
385 6 3 1 2.2 3.3
386 7 3 2 2.3 3.4
387 8 3 3 2.1 2.9
388
389 >>> l = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age',
390 ... sep='_', suffix='\w+')
391 >>> l
392 ... # doctest: +NORMALIZE_WHITESPACE
393 ht
394 famid birth age
395 1 1 one 2.8
396 two 3.4
397 2 one 2.9
398 two 3.8
399 3 one 2.2
400 two 2.9
401 2 1 one 2.0
402 two 3.2
403 2 one 1.8
404 two 2.8
405 3 one 1.9
406 two 2.4
407 3 1 one 2.2
408 two 3.3
409 2 one 2.3
410 two 3.4
411 3 one 2.1
412 two 2.9
413 """
414
415 def get_var_names(df, stub, sep, suffix):
416 regex = r"^{stub}{sep}{suffix}$".format(
417 stub=re.escape(stub), sep=re.escape(sep), suffix=suffix
418 )
419 pattern = re.compile(regex)
420 return [col for col in df.columns if pattern.match(col)]
421
422 def melt_stub(df, stub, i, j, value_vars, sep: str):
423 newdf = melt(
424 df,
425 id_vars=i,
426 value_vars=value_vars,
427 value_name=stub.rstrip(sep),
428 var_name=j,
429 )
430 newdf[j] = Categorical(newdf[j])
431 newdf[j] = newdf[j].str.replace(re.escape(stub + sep), "")
432
433 # GH17627 Cast numerics suffixes to int/float
434 newdf[j] = to_numeric(newdf[j], errors="ignore")
435
436 return newdf.set_index(i + [j])
437
438 if not is_list_like(stubnames):
439 stubnames = [stubnames]
440 else:
441 stubnames = list(stubnames)
442
443 if any(col in stubnames for col in df.columns):
444 raise ValueError("stubname can't be identical to a column name")
445
446 if not is_list_like(i):
447 i = [i]
448 else:
449 i = list(i)
450
451 if df[i].duplicated().any():
452 raise ValueError("the id variables need to uniquely identify each row")
453
454 value_vars = [get_var_names(df, stub, sep, suffix) for stub in stubnames]
455
456 value_vars_flattened = [e for sublist in value_vars for e in sublist]
457 id_vars = list(set(df.columns.tolist()).difference(value_vars_flattened))
458
459 _melted = [melt_stub(df, s, i, j, v, sep) for s, v in zip(stubnames, value_vars)]
460 melted = _melted[0].join(_melted[1:], how="outer")
461
462 if len(i) == 1:
463 new = df[id_vars].set_index(i).join(melted)
464 return new
465
466 new = df[id_vars].merge(melted.reset_index(), on=i).set_index(i + [j])
467
468 return new
469
[end of pandas/core/reshape/melt.py]
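As a quick illustration of the `melt` function defined above, here is a small hedged usage example; the toy DataFrame is made up for the example and is not part of the repository.

```python
import pandas as pd

# Wide table: one id column and two value columns.
df = pd.DataFrame({"team": ["Red Sox", "Yankees"],
                   "hr1": [514, 573],
                   "hr2": [545, 526]})

long = pd.melt(df, id_vars=["team"], value_vars=["hr1", "hr2"],
               var_name="period", value_name="hr")
print(long)
#       team period   hr
# 0  Red Sox    hr1  514
# 1  Yankees    hr1  573
# 2  Red Sox    hr2  545
# 3  Yankees    hr2  526
```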
[start of pandas/util/_print_versions.py]
1 import codecs
2 import json
3 import locale
4 import os
5 import platform
6 import struct
7 import subprocess
8 import sys
9
10 from pandas.compat._optional import VERSIONS, _get_version, import_optional_dependency
11
12
13 def get_sys_info():
14 "Returns system information as a dict"
15
16 blob = []
17
18 # get full commit hash
19 commit = None
20 if os.path.isdir(".git") and os.path.isdir("pandas"):
21 try:
22 pipe = subprocess.Popen(
23 'git log --format="%H" -n 1'.split(" "),
24 stdout=subprocess.PIPE,
25 stderr=subprocess.PIPE,
26 )
27 so, serr = pipe.communicate()
28 except (OSError, ValueError):
29 pass
30 else:
31 if pipe.returncode == 0:
32 commit = so
33 try:
34 commit = so.decode("utf-8")
35 except ValueError:
36 pass
37 commit = commit.strip().strip('"')
38
39 blob.append(("commit", commit))
40
41 try:
42 (sysname, nodename, release, version, machine, processor) = platform.uname()
43 blob.extend(
44 [
45 ("python", ".".join(map(str, sys.version_info))),
46 ("python-bits", struct.calcsize("P") * 8),
47 ("OS", "{sysname}".format(sysname=sysname)),
48 ("OS-release", "{release}".format(release=release)),
49 # ("Version", "{version}".format(version=version)),
50 ("machine", "{machine}".format(machine=machine)),
51 ("processor", "{processor}".format(processor=processor)),
52 ("byteorder", "{byteorder}".format(byteorder=sys.byteorder)),
53 ("LC_ALL", "{lc}".format(lc=os.environ.get("LC_ALL", "None"))),
54 ("LANG", "{lang}".format(lang=os.environ.get("LANG", "None"))),
55 ("LOCALE", ".".join(map(str, locale.getlocale()))),
56 ]
57 )
58 except (KeyError, ValueError):
59 pass
60
61 return blob
62
63
64 def show_versions(as_json=False):
65 sys_info = get_sys_info()
66 deps = [
67 "pandas",
68 # required
69 "numpy",
70 "pytz",
71 "dateutil",
72 # install / build,
73 "pip",
74 "setuptools",
75 "Cython",
76 # test
77 "pytest",
78 "hypothesis",
79 # docs
80 "sphinx",
81 # Other, need a min version
82 "blosc",
83 "feather",
84 "xlsxwriter",
85 "lxml.etree",
86 "html5lib",
87 "pymysql",
88 "psycopg2",
89 "jinja2",
90 # Other, not imported.
91 "IPython",
92 "pandas_datareader",
93 ]
94
95 deps.extend(list(VERSIONS))
96 deps_blob = []
97
98 for modname in deps:
99 mod = import_optional_dependency(
100 modname, raise_on_missing=False, on_version="ignore"
101 )
102 if mod:
103 ver = _get_version(mod)
104 else:
105 ver = None
106 deps_blob.append((modname, ver))
107
108 if as_json:
109 j = dict(system=dict(sys_info), dependencies=dict(deps_blob))
110
111 if as_json is True:
112 print(j)
113 else:
114 with codecs.open(as_json, "wb", encoding="utf8") as f:
115 json.dump(j, f, indent=2)
116
117 else:
118 maxlen = max(len(x) for x in deps)
119 tpl = "{{k:<{maxlen}}}: {{stat}}".format(maxlen=maxlen)
120 print("\nINSTALLED VERSIONS")
121 print("------------------")
122 for k, stat in sys_info:
123 print(tpl.format(k=k, stat=stat))
124 print("")
125 for k, stat in deps_blob:
126 print(tpl.format(k=k, stat=stat))
127
128
129 def main():
130 from optparse import OptionParser
131
132 parser = OptionParser()
133 parser.add_option(
134 "-j",
135 "--json",
136 metavar="FILE",
137 nargs=1,
138 help="Save output as JSON into file, pass in '-' to output to stdout",
139 )
140
141 (options, args) = parser.parse_args()
142
143 if options.json == "-":
144 options.json = True
145
146 show_versions(as_json=options.json)
147
148 return 0
149
150
151 if __name__ == "__main__":
152 sys.exit(main())
153
[end of pandas/util/_print_versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
a908ccbc17f668bbaa127d22cdbfc903553233bc
|
Selecting "subsets" of a `MultiIndex` `DataFrame` sometimes changes `dtypes`
#### Code Sample, a copy-pastable example if possible
Definition of data and columns:
<details>
```python
from numpy import nan
data = [['n', 1, 0, False, 2, 1, False, 0, 0, False, 2, 0, False, 0, 1, False, 1, 1, False, 'o',
1521734085.289453, 'p', 3233, 1521734085.289494]]
columns = [('a', 'd', 'i', nan, nan),
('a', 'd', 'j', 0.0, 'k'),
('a', 'd', 'j', 0.0, 'l'),
('a', 'd', 'j', 0.0, 'm'),
('a', 'd', 'j', 1.0, 'k'),
('a', 'd', 'j', 1.0, 'l'),
('a', 'd', 'j', 1.0, 'm'),
('a', 'd', 'j', 2.0, 'k'),
('a', 'd', 'j', 2.0, 'l'),
('a', 'd', 'j', 2.0, 'm'),
('a', 'd', 'j', 3.0, 'k'),
('a', 'd', 'j', 3.0, 'l'),
('a', 'd', 'j', 3.0, 'm'),
('a', 'd', 'j', 4.0, 'k'),
('a', 'd', 'j', 4.0, 'l'),
('a', 'd', 'j', 4.0, 'm'),
('a', 'd', 'j', 5.0, 'k'),
('a', 'd', 'j', 5.0, 'l'),
('a', 'd', 'j', 5.0, 'm'),
('b', 'f', nan, nan, nan),
('b', 'h', nan, nan, nan),
('c', 'e', nan, nan, nan),
('c', 'g', nan, nan, nan),
('c', 'h', nan, nan, nan)]
```
</details>
```python
pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).dtypes.a.d.i
# object
pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).a.d.i.dtypes
# float64
```
this causes for example:
```python
pd.DataFrame(np.array(data), columns=pd.MultiIndex.from_tuples(columns)).a.d.i
# "n", dtype: object
pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).a.d.i
# nan, dtype: float32
```
#### Problem description
I think the example is self-explanatory
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.12.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-119-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.utf8
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.23.0.dev0+38.g6552718
pytest: 2.8.7
pip: 9.0.1
setuptools: 20.7.0
Cython: 0.23.4
numpy: 1.14.2
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 5.5.0
sphinx: 1.3.6
patsy: 0.4.1
dateutil: 2.7.2
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.2
openpyxl: 2.3.0
xlrd: 0.9.4
xlwt: 0.7.5
xlsxwriter: 0.7.3
lxml: 3.5.0
bs4: 4.4.1
html5lib: 0.9999999
sqlalchemy: 1.0.11
pymysql: None
psycopg2: 2.6.1 (dt dec mx pq3 ext lo64)
jinja2: 2.8
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
|
This looks fixed on master; could use a test:
```
In [110]: pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).dtypes.a.d.i
...:
Out[110]:
NaN NaN object
dtype: object
In [111]: pd.DataFrame(data, columns=pd.MultiIndex.from_tuples(columns)).a.d.i.dtypes
...:
Out[111]:
NaN NaN object
dtype: object
```
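A minimal sketch of the kind of regression test the comment above asks for, using a reduced three-level version of the columns from the issue; the test name and location are assumptions, not the actual test merged into pandas.

```python
import numpy as np
import pandas as pd


def test_multiindex_subset_preserves_object_dtype():
    # One object column under ('a', 'd', 'i') next to numeric columns,
    # mirroring (in reduced form) the frame from the issue.
    columns = pd.MultiIndex.from_tuples(
        [("a", "d", "i"), ("a", "d", "j"), ("a", "d", "k")]
    )
    df = pd.DataFrame([["n", 1, 0.5]], columns=columns)

    # The dtype reported for the full frame and for the selected subset
    # should agree instead of silently becoming float64.
    assert df[("a", "d", "i")].dtype == np.dtype(object)
    assert df["a"]["d"]["i"].dtype == np.dtype(object)
```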
|
2019-11-02T15:11:47Z
|
<patch>
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-609
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change backend status from AVAILABLE to OPERATIONAL
### What is the expected enhancement?
The new name is more accurate.
</issue>
<code>
[start of README.md]
1 # Quantum Information Science Kit (QISKit)
2
3 [](https://pypi.python.org/pypi/qiskit)
4 [](https://travis-ci.org/QISKit/qiskit-core)
5
6 The Quantum Information Science Kit (**QISKit** for short) is a software development kit (SDK) for
7 working with [OpenQASM](https://github.com/QISKit/qiskit-openqasm) and the
8 [IBM Q Experience (QX)](https://quantumexperience.ng.bluemix.net/).
9
10 Use **QISKit** to create quantum computing programs, compile them, and execute them on one of
11 several backends (online Real quantum processors, online simulators, and local simulators). For
12 the online backends, QISKit uses our [python API client](https://github.com/QISKit/qiskit-api-py)
13 to connect to the IBM Q Experience.
14
15 **We use GitHub issues for tracking requests and bugs. Please see the**
16 [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) **for
17 questions and discussion.**
18
19 **If you'd like to contribute to QISKit, please take a look at our**
20 [contribution guidelines](.github/CONTRIBUTING.rst).
21
22 Links to Sections:
23
24 * [Installation](#installation)
25 * [Creating your first Quantum Program](#creating-your-first-quantum-program)
26 * [More Information](#more-information)
27 * [Authors](#authors-alphabetical)
28 * [License](#license)
29
30 ## Installation
31
32 ### Dependencies
33
34 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using QISKit. In
35 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended
36 for interacting with the tutorials.
37 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads)
38 python distribution, as it comes with all of these dependencies pre-installed.
39
40 In addition, a basic understanding of quantum information is very helpful when interacting with
41 QISKit. If you're new to quantum, start with our
42 [User Guides](https://github.com/QISKit/ibmqx-user-guides)!
43
44 ### Installation
45
46 We encourage installing QISKit via the PIP tool (a python package manager):
47
48 ```
49 pip install qiskit
50 ```
51
52 PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version.
53
54 PIP package comes with prebuilt binaries for these platforms:
55
56 * Linux x86_64
57 * Darwin
58 * Win64
59
60 If your platform is not in the list, PIP will try to build from the sources at installation time. It will require to have CMake 3.5 or higher pre-installed and at least one of the [build environments supported by CMake](https://cmake.org/cmake/help/v3.5/manual/cmake-generators.7.html).
61
62 If PIP doesn't succeed in building during the installation, don't worry: you will still have QISKit installed at the end, but you probably won't be able to take advantage of some of the high-performance components. In any case, we always provide a pure-Python, not-so-fast alternative as a fallback.
63
64
65 #### Setup your environment
66
67 We recommend using python virtual environments to improve your experience. Refer to our
68 [Environment Setup documentation](doc/install.rst#3.1-Setup-the-environment) for more information.
69
70 ## Creating your first Quantum Program
71
72 Now that the SDK is installed, it's time to begin working with QISKit.
73
74 We are ready to try out a quantum circuit example, which runs via the local simulator.
75
76 This is a simple example that makes an entangled state.
77
78 ```python
79 # Import the QISKit SDK
80 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
81 from qiskit import available_backends, execute
82
83 # Create a Quantum Register with 2 qubits.
84 q = QuantumRegister(2)
85 # Create a Classical Register with 2 bits.
86 c = ClassicalRegister(2)
87 # Create a Quantum Circuit
88 qc = QuantumCircuit(q, c)
89
90 # Add a H gate on qubit 0, putting this qubit in superposition.
91 qc.h(q[0])
92 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
93 # the qubits in a Bell state.
94 qc.cx(q[0], q[1])
95 # Add a Measure gate to see the state.
96 qc.measure(q, c)
97
98 # See a list of available local simulators
99 print("Local backends: ", available_backends({'local': True}))
100
101 # Compile and run the Quantum circuit on a simulator backend
102 job_sim = execute(qc, "local_qasm_simulator")
103 sim_result = job_sim.result()
104
105 # Show the results
106 print("simulation: ", sim_result)
107 print(sim_result.get_counts(qc))
108 ```
109
110 In this case, the output will be:
111
112 ```
113 COMPLETED
114 {'counts': {'00': 512, '11': 512}}
115 ```
116
117 This script is available [here](examples/python/hello_quantum.py), where we also show how to
118 run the same program on a real quantum computer.
119
120 ### Executing your code on a real Quantum chip
121
122 You can also use QISKit to execute your code on a
123 [real quantum chip](https://github.com/QISKit/ibmqx-backend-information).
124 In order to do so, you need to configure the SDK for using the credentials in
125 your IBM Q Experience account:
126
127
128 #### Configure your API token and QX credentials
129
130
131 1. Create an _[IBM Q Experience](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
132 2. Get an API token from the IBM Q Experience website under _My Account > Advanced > API Token_. This API token allows you to execute your programs with the IBM Q Experience backends. See: [Example](doc/example_real_backend.rst).
133 3. We are going to create a new file called `Qconfig.py` and insert the API token into it. This file must have these contents:
134
135 ```python
136 APItoken = 'MY_API_TOKEN'
137
138 config = {
139 'url': 'https://quantumexperience.ng.bluemix.net/api',
140 # The following should only be needed for IBM Q Network users.
141 'hub': 'MY_HUB',
142 'group': 'MY_GROUP',
143 'project': 'MY_PROJECT'
144 }
145 ```
146
147 4. Substitute `MY_API_TOKEN` with your real API Token extracted in step 2.
148
149 5. If you have access to the IBM Q Network features, you also need to setup the
150 values for your hub, group, and project. You can do so by filling the
151 `config` variable with the values you can find on your IBM Q account
152 page.
153
154 Once the `Qconfig.py` file is set up, you have to move it under the same directory/folder where your program/tutorial resides, so it can be imported and be used to authenticate with the `register()` function. For example:
155
156 ```python
157 from qiskit import register
158 import Qconfig
159
160 register(Qconfig.APItoken, Qconfig.config["url"],
161 hub=Qconfig.config["hub"],
162 group=Qconfig.config["group"],
163 project=Qconfig.config["project"])
164 ```
165
166 For more details on this and more information see
167 [our QISKit documentation](https://www.qiskit.org/documentation/).
168
169
170 ### Next Steps
171
172 Now you're set up and ready to check out some of the other examples from our
173 [Tutorial](https://github.com/QISKit/qiskit-tutorial) repository. Start with the
174 [index tutorial](https://github.com/QISKit/qiskit-tutorial/blob/master/index.ipynb) and then go to
175 the [‘Getting Started’ example](https://github.com/QISKit/qiskit-tutorial/blob/master/reference/tools/getting_started.ipynb).
176 If you already have [Jupyter Notebooks installed](https://jupyter.readthedocs.io/en/latest/install.html),
177 you can copy and modify the notebooks to create your own experiments.
178
179 To install the tutorials as part of the QISKit SDK, see the following
180 [installation details](doc/install.rst#Install-Jupyter-based-tutorials). Complete SDK
181 documentation can be found in the [*doc* directory](doc/qiskit.rst) and in
182 [the official QISKit site](https://www.qiskit.org/documentation).
183
184 ## More Information
185
186 For more information on how to use QISKit, tutorial examples, and other helpful links, take a look
187 at these resources:
188
189 * **[User Guides](https://github.com/QISKit/ibmqx-user-guides)**,
190 a good starting place for learning about quantum information and computing
191 * **[Tutorials](https://github.com/QISKit/qiskit-tutorial)**,
192 for example notebooks, start with the [index](https://github.com/QISKit/qiskit-tutorial/blob/master/index.ipynb) and [‘Getting Started’ Jupyter notebook](https://github.com/QISKit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb)
193 * **[OpenQASM](https://github.com/QISKit/openqasm)**,
194 for additional information and examples of QASM code
195 * **[IBM Quantum Experience Composer](https://quantumexperience.ng.bluemix.net/qx/editor)**,
196 a GUI for interacting with real and simulated quantum computers
197 * **[QISkit Python API](https://github.com/QISKit/qiskit-api-py)**, an API to use the IBM Quantum
198 Experience in Python
199
200 QISKit was originally developed by researchers and developers on the
201 [IBM-Q](http://www.research.ibm.com/ibm-q/) Team at [IBM Research](http://www.research.ibm.com/),
202 with the aim of offering a high level development kit to work with quantum computers.
203
204 Visit the [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) for
205 questions and discussions on QISKit and quantum computing more broadly. If you'd like to
206 contribute to QISKit, please take a look at our [contribution guidelines](.github/CONTRIBUTING.rst).
207
208 ## Multilanguage guide
209
210 * **[Korean Translation](doc/ko/README.md)** - basic guide line written in Korean.
211 * **[Chinese Translation](doc/zh/README.md)** - basic guide line written in Chinese.
212
213 ## Authors (alphabetical)
214
215 QISKit was originally authored by
216 Luciano Bello, Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Juan Gomez,
217 Ali Javadi-Abhari, Paco Martin, Diego Moreda, Jesus Perez, Erick Winston and Chris Wood.
218
219 And continues to grow with the help and work of [many people](CONTRIBUTORS.md) who contribute
220 to the project at different levels.
221
[end of README.md]
[start of qiskit/_result.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 # pylint: disable=no-else-return
9
10 """Module for working with Results."""
11
12 import copy
13 import numpy
14 from ._qiskiterror import QISKitError
15 from ._quantumcircuit import QuantumCircuit
16
17
18 class Result(object):
19 """ Result Class.
20
21 Class internal properties.
22
23 Methods to process the quantum program after it has been run
24
25 Internal::
26
27 result = {
28 "job_id": --job-id (string),
29 #This string links the result with the job that computes it,
30 #it should be issued by the backend it is run on.
31 "status": --status (string),
32 "result":
33 [
34 {
35 "data":
36 { #### DATA CAN BE A DIFFERENT DICTIONARY FOR EACH BACKEND ####
37 "counts": {'00000': XXXX, '00001': XXXXX},
38 "time" : xx.xxxxxxxx
39 },
40 "status": --status (string)--
41 },
42 ...
43 ]
44 }
45 """
46
47 def __init__(self, qobj_result):
48 self._result = qobj_result
49
50 def __str__(self):
51 """Get the status of the run.
52
53 Returns:
54 string: the status of the results.
55 """
56 return self._result['status']
57
58 def __getitem__(self, i):
59 return self._result['result'][i]
60
61 def __len__(self):
62 return len(self._result['result'])
63
64 def __iadd__(self, other):
65 """Append a Result object to current Result object.
66
67 Arg:
68 other (Result): a Result object to append.
69 Returns:
70 Result: The current object with appended results.
71 Raises:
72 QISKitError: if the Results cannot be combined.
73 """
74 # todo: reevaluate if moving equality to Backend themselves (part of
75 # a bigger problem - backend instances will not persist between
76 # sessions)
77 this_backend = self._result.get('backend_name')
78 other_backend = other._result.get('backend_name')
79 if this_backend == other_backend:
80 self._result['result'] += other._result['result']
81 return self
82 else:
83 raise QISKitError('Result objects from different backends cannot be combined.')
84
85 def __add__(self, other):
86 """Combine Result objects.
87
88 Arg:
89 other (Result): a Result object to combine.
90 Returns:
91 Result: A new Result object consisting of combined objects.
92 """
93 ret = copy.deepcopy(self)
94 ret += other
95 return ret
96
97 def _is_error(self):
98 return self._result['status'] == 'ERROR'
99
100 def get_status(self):
101 """Return whole result status."""
102 return self._result['status']
103
104 def circuit_statuses(self):
105 """Return statuses of all circuits
106
107 Returns:
108 list(str): List of status result strings.
109 """
110 return [circuit_result['status']
111 for circuit_result in self._result['result']]
112
113 def get_circuit_status(self, icircuit):
114 """Return the status of circuit at index icircuit.
115
116 Args:
117 icircuit (int): index of circuit
118 Returns:
119 string: the status of the circuit.
120 """
121 return self._result['result'][icircuit]['status']
122
123 def get_job_id(self):
124 """Return the job id assigned by the api if this is a remote job.
125
126 Returns:
127 string: a string containing the job id.
128 """
129 return self._result['job_id']
130
131 def get_ran_qasm(self, name):
132 """Get the ran qasm for the named circuit and backend.
133
134 Args:
135 name (str): the name of the quantum circuit.
136
137 Returns:
138 string: A text version of the qasm file that has been run.
139 Raises:
140 QISKitError: if the circuit was not found.
141 """
142 try:
143 for exp_result in self._result['result']:
144 if exp_result.get('name') == name:
145 return exp_result['compiled_circuit_qasm']
146 except KeyError:
147 pass
148 raise QISKitError('No qasm for circuit "{0}"'.format(name))
149
150 def get_data(self, circuit=None):
151 """Get the data of circuit name.
152
153 The data format will depend on the backend. For a real device it
154 will be for the form::
155
156 "counts": {'00000': XXXX, '00001': XXXX},
157 "time" : xx.xxxxxxxx
158
159 for the qasm simulators of 1 shot::
160
161 'statevector': array([ XXX, ..., XXX]),
162 'classical_state': 0
163
164 for the qasm simulators of n shots::
165
166 'counts': {'0000': XXXX, '1001': XXXX}
167
168 for the unitary simulators::
169
170 'unitary': np.array([[ XX + XXj
171 ...
172 XX + XX]
173 ...
174 [ XX + XXj
175 ...
176 XX + XXj]]
177
178 Args:
179 circuit (str or QuantumCircuit or None): reference to a quantum circuit
180 If None and there is only one circuit available, returns
181 that one.
182
183 Returns:
184 dict: A dictionary of data for the different backends.
185
186 Raises:
187 QISKitError: if there is no data for the circuit, or an unhandled
188 error occurred while fetching the data.
189 Exception: if a handled error occurred while fetching the data.
190 """
191 if self._is_error():
192 exception = self._result['result']
193 if isinstance(exception, BaseException):
194 raise exception
195 else:
196 raise QISKitError(str(exception))
197 if isinstance(circuit, QuantumCircuit):
198 circuit = circuit.name
199
200 if circuit is None:
201 if len(self._result['result']) == 1:
202 return self._result['result'][0]['data']
203 else:
204 raise QISKitError("You have to select a circuit when there is more than"
205 "one available")
206
207 if not isinstance(circuit, str):
208 circuit = str(circuit)
209 try:
210 for circuit_result in self._result['result']:
211 if circuit_result.get('name') == circuit:
212 return circuit_result['data']
213 except (KeyError, TypeError):
214 pass
215 raise QISKitError('No data for circuit "{0}"'.format(circuit))
216
217 def get_counts(self, circuit=None):
218 """Get the histogram data of circuit name.
219
 220         The data from a qasm circuit is a dictionary of the format
221 {'00000': XXXX, '00001': XXXXX}.
222
223 Args:
224 circuit (str or QuantumCircuit or None): reference to a quantum circuit
225 If None and there is only one circuit available, returns
226 that one.
227
228 Returns:
229 Dictionary: Counts {'00000': XXXX, '00001': XXXXX}.
230
231 Raises:
232 QISKitError: if there are no counts for the circuit.
233 """
234 try:
235 return self.get_data(circuit)['counts']
236 except KeyError:
237 raise QISKitError('No counts for circuit "{0}"'.format(circuit))
238
239 def get_statevector(self, circuit=None):
240 """Get the final statevector of circuit name.
241
242 The data is a list of complex numbers
243 [1.+0.j, 0.+0.j].
244
245 Args:
246 circuit (str or QuantumCircuit or None): reference to a quantum circuit
247 If None and there is only one circuit available, returns
248 that one.
249
250 Returns:
251 list[complex]: list of 2^n_qubits complex amplitudes.
252
253 Raises:
254 QISKitError: if there is no statevector for the circuit.
255 """
256 try:
257 return self.get_data(circuit)['statevector']
258 except KeyError:
259 raise QISKitError('No statevector for circuit "{0}"'.format(circuit))
260
261 def get_unitary(self, circuit=None):
262 """Get the final unitary of circuit name.
263
264 The data is a matrix of complex numbers
265 [[1.+0.j, 0.+0.j], .. ].
266
267 Args:
268 circuit (str or QuantumCircuit or None): reference to a quantum circuit
269 If None and there is only one circuit available, returns
270 that one.
271
272 Returns:
273 list[list[complex]]: list of 2^n_qubits x 2^n_qubits complex amplitudes.
274
275 Raises:
276 QISKitError: if there is no unitary for the circuit.
277 """
278 try:
279 return self.get_data(circuit)['unitary']
280 except KeyError:
281 raise QISKitError('No unitary for circuit "{0}"'.format(circuit))
282
283 def get_snapshots(self, circuit=None):
284 """Get snapshots recorded during the run.
285
286 The data is a dictionary:
287 where keys are requested snapshot slots.
288 and values are a dictionary of the snapshots themselves.
289
290 Args:
291 circuit (str or QuantumCircuit or None): reference to a quantum circuit
292 If None and there is only one circuit available, returns
293 that one.
294
295 Returns:
296 dict[slot: dict[str: array]]: list of 2^n_qubits complex amplitudes.
297
298 Raises:
299 QISKitError: if there are no snapshots for the circuit.
300 """
301 try:
302 return self.get_data(circuit)['snapshots']
303 except KeyError:
304 raise QISKitError('No snapshots for circuit "{0}"'.format(circuit))
305
306 def get_snapshot(self, slot=None, circuit=None):
307 """Get snapshot at a specific slot.
308
309 Args:
310 slot (str): snapshot slot to retrieve. If None and there is only one
311 slot, return that one.
312 circuit (str or QuantumCircuit or None): reference to a quantum circuit
313 If None and there is only one circuit available, returns
314 that one.
315
316 Returns:
317 dict[slot: dict[str: array]]: list of 2^n_qubits complex amplitudes.
318
319 Raises:
320 QISKitError: if there is no snapshot at all, or in this slot
321 """
322 try:
323 snapshots_dict = self.get_snapshots(circuit)
324
325 if slot is None:
326 slots = list(snapshots_dict.keys())
327 if len(slots) == 1:
328 slot = slots[0]
329 else:
330 raise QISKitError("You have to select a slot when there"
331 "is more than one available")
332 snapshot_dict = snapshots_dict[slot]
333
334 snapshot_types = list(snapshot_dict.keys())
335 if len(snapshot_types) == 1:
336 snapshot_list = snapshot_dict[snapshot_types[0]]
337 if len(snapshot_list) == 1:
338 return snapshot_list[0]
339 else:
340 return snapshot_list
341 else:
342 return snapshot_dict
343 except KeyError:
344 raise QISKitError('No snapshot at slot {0} for '
345 'circuit "{1}"'.format(slot, circuit))
346
347 def get_names(self):
348 """Get the circuit names of the results.
349
350 Returns:
351 List: A list of circuit names.
352 """
353 return [c.get('name') for c in self._result['result']]
354
355 def average_data(self, name, observable):
 356         """Compute the mean value of a diagonal observable.
357
358 Takes in an observable in dictionary format and then
359 calculates the sum_i value(i) P(i) where value(i) is the value of
360 the observable for state i.
361
362 Args:
363 name (str): the name of the quantum circuit
364 observable (dict): The observable to be averaged over. As an example
365 ZZ on qubits equals {"00": 1, "11": 1, "01": -1, "10": -1}
366
367 Returns:
368 Double: Average of the observable
369 """
370 counts = self.get_counts(name)
371 temp = 0
372 tot = sum(counts.values())
373 for key in counts:
374 if key in observable:
375 temp += counts[key] * observable[key] / tot
376 return temp
377
378 def get_qubitpol_vs_xval(self, nqubits, xvals_dict=None):
379 """Compute the polarization of each qubit for all circuits and pull out each circuits
380 xval into an array. Assumes that each circuit has the same number of qubits and that
381 all qubits are measured.
382
383 Args:
384 nqubits (int): number of qubits
385 xvals_dict (dict): xvals for each circuit {'circuitname1': xval1,...}. If this
386 is none then the xvals list is just left as an array of zeros
387
388 Returns:
 389             qubit_pol: mxn double array where m is the number of circuits, n the number of qubits
390 xvals: mx1 array of the circuit xvals
391 """
392 ncircuits = len(self._result['result'])
393 # Is this the best way to get the number of qubits?
394 qubitpol = numpy.zeros([ncircuits, nqubits], dtype=float)
395 xvals = numpy.zeros([ncircuits], dtype=float)
396
397 # build Z operators for each qubit
398 z_dicts = []
399 for qubit_ind in range(nqubits):
400 z_dicts.append(dict())
401 for qubit_state in range(2**nqubits):
402 new_key = ("{0:0"+"{:d}".format(nqubits) + "b}").format(qubit_state)
403 z_dicts[-1][new_key] = -1
404 if new_key[nqubits-qubit_ind-1] == '1':
405 z_dicts[-1][new_key] = 1
406
 407         # go through each circuit and, for each qubit, apply the operators using "average_data"
408 for circuit_ind in range(ncircuits):
409 circuit_name = self._result['result'][circuit_ind]['name']
410 if xvals_dict:
411 xvals[circuit_ind] = xvals_dict[circuit_name]
412 for qubit_ind in range(nqubits):
413 qubitpol[circuit_ind, qubit_ind] = self.average_data(
414 circuit_name, z_dicts[qubit_ind])
415
416 return qubitpol, xvals
417
[end of qiskit/_result.py]
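To make the `average_data` formula above concrete (sum_i value(i) * P(i)), here is a small worked sketch with hypothetical counts; the numbers are invented for illustration and do not come from a real backend run.

```python
# Hypothetical two-qubit counts and the ZZ observable from the docstring:
# ZZ on two qubits is {"00": 1, "11": 1, "01": -1, "10": -1}.
counts = {"00": 512, "11": 480, "01": 20, "10": 12}
observable = {"00": 1, "11": 1, "01": -1, "10": -1}

shots = sum(counts.values())  # 1024
average = sum(counts[k] * observable[k] for k in counts if k in observable) / shots
print(average)  # (512 + 480 - 20 - 12) / 1024 = 0.9375
```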
[start of qiskit/backends/basebackend.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """This module implements the abstract base class for backend modules.
9
10 To create add-on backend modules subclass the Backend class in this module.
11 Doing so requires that the required backend interface is implemented.
12 """
13
14 from abc import ABC, abstractmethod
15 from qiskit._qiskiterror import QISKitError
16
17
18 class BaseBackend(ABC):
19 """Base class for backends."""
20
21 @abstractmethod
22 def __init__(self, configuration):
23 """Base class for backends.
24
25 This method should initialize the module and its configuration, and
26 raise an exception if a component of the module is
27 not available.
28
29 Args:
30 configuration (dict): configuration dictionary
31
32 Raises:
33 FileNotFoundError if backend executable is not available.
34 QISKitError: if there is no name in the configuration
35 """
36 if 'name' not in configuration:
37 raise QISKitError('backend does not have a name.')
38 self._configuration = configuration
39
40 @abstractmethod
41 def run(self, qobj):
 42         """Run a Qobj on the backend."""
43 pass
44
45 @property
46 def configuration(self):
47 """Return backend configuration"""
48 return self._configuration
49
50 @property
51 def calibration(self):
52 """Return backend calibration"""
53 return {}
54
55 @property
56 def parameters(self):
57 """Return backend parameters"""
58 return {}
59
60 @property
61 def status(self):
62 """Return backend status"""
63 return {'name': self.name, 'available': True}
64
65 @property
66 def name(self):
67 """Return backend name"""
68 return self._configuration['name']
69
70 def __str__(self):
71 return self.name
72
[end of qiskit/backends/basebackend.py]
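Since the issue above asks to rename the backend status flag from AVAILABLE to OPERATIONAL, here is a minimal, self-contained sketch of what the renamed `status` dict could look like; the key name and the toy backend name are assumptions for illustration, not the merged change.

```python
class _StatusSketch:
    """Toy stand-in for BaseBackend, only to show the renamed status key."""

    name = "local_qasm_simulator"  # hypothetical backend name

    @property
    def status(self):
        # 'available' -> 'operational', as the issue requests
        return {'name': self.name, 'operational': True}


print(_StatusSketch().status)  # {'name': 'local_qasm_simulator', 'operational': True}
```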
[start of qiskit/backends/ibmq/ibmqjob.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """IBMQJob module
9
10 This module is used for creating asynchronous job objects for the
11 IBM Q Experience.
12 """
13
14 from concurrent import futures
15 import time
16 import logging
17 import pprint
18 import json
19 import datetime
20 import numpy
21
22 from qiskit.transpiler import transpile
23 from qiskit.backends import BaseJob
24 from qiskit.backends.jobstatus import JobStatus
25 from qiskit._qiskiterror import QISKitError
26 from qiskit._result import Result
27 from qiskit._resulterror import ResultError
28
29 logger = logging.getLogger(__name__)
30
31
32 class IBMQJob(BaseJob):
33 """IBM Q Job class
34
35 Attributes:
36 _executor (futures.Executor): executor to handle asynchronous jobs
37 _final_states (list(JobStatus)): terminal states of async jobs
38 """
39 _executor = futures.ThreadPoolExecutor()
40 _final_states = [
41 JobStatus.DONE,
42 JobStatus.CANCELLED,
43 JobStatus.ERROR
44 ]
45
46 def __init__(self, qobj, api, is_device):
47 """IBMQJob init function.
48
49 Args:
50 qobj (dict): job description
51 api (IBMQuantumExperience): IBM Q API
52 is_device (bool): whether backend is a real device # TODO: remove this after Qobj
53 """
54 super().__init__()
55 self._qobj = qobj
56 self._api = api
57 self._id = None # this must be before creating the future
58 self._backend_name = self._qobj.get('config').get('backend_name')
59 self._status = JobStatus.INITIALIZING
60 self._future_submit = self._executor.submit(self._submit)
61 self._status_msg = 'Job is initializing. Please, wait a moment.'
62 self._queue_position = None
63 self._cancelled = False
64 self._exception = None
65 self._is_device = is_device
66 self.creation_date = datetime.datetime.utcnow().replace(
67 tzinfo=datetime.timezone.utc).isoformat()
68
69 @classmethod
70 def from_api(cls, job_info, api, is_device):
71 """Instantiates job using information returned from
72 IBMQuantumExperience about a particular job.
73
74 Args:
75 job_info (dict): This is the information about a job returned from
76 the API. It has the simplified structure:
77
78 {'backend': {'id', 'backend id string',
79 'name', 'ibmqx4'},
80 'id': 'job id string',
81 'qasms': [{'executionId': 'id string',
82 'qasm': 'qasm string'},
83 ]
84 'status': 'status string',
85 'seed': '1',
86 'shots': 1024,
87 'status': 'status string',
88 'usedCredits': 3,
89 'creationDate': '2018-06-13T04:31:13.175Z'
90 'userId': 'user id'}
91 api (IBMQuantumExperience): IBM Q API
92 is_device (bool): whether backend is a real device # TODO: remove this after Qobj
93
94 Returns:
95 IBMQJob: an instance of this class
96 """
97 job_instance = cls.__new__(cls)
98 job_instance._status = JobStatus.QUEUED
99 job_instance._backend_name = job_info.get('backend').get('name')
100 job_instance._api = api
101 job_instance._id = job_info.get('id')
102 job_instance._exception = None # needs to be before status call below
103 job_instance._status_msg = None
104 job_instance._queue_position = None
105 job_instance._cancelled = False
106 job_instance._is_device = is_device
107 job_instance.creation_date = job_info.get('creationDate')
108 return job_instance
109
110 def result(self, timeout=None, wait=5):
111 """Return the result from the job.
112
113 Args:
114 timeout (int): number of seconds to wait for job
115 wait (int): time between queries to IBM Q server
116
117 Returns:
118 Result: Result object
119
120 Raises:
121 IBMQJobError: exception raised during job initialization
122 """
123 # pylint: disable=arguments-differ
124 while self._status == JobStatus.INITIALIZING:
125 if self._future_submit.exception():
126 raise IBMQJobError('error submitting job: {}'.format(
127 repr(self._future_submit.exception())))
128 time.sleep(0.1)
129 try:
130 this_result = self._wait_for_job(timeout=timeout, wait=wait)
131 except TimeoutError as err:
132 # A timeout error retrieving the results does not imply the job
133 # is failing. The job can be still running.
134 return Result({'id': self._id, 'status': 'ERROR',
135 'result': str(err)})
136
137 if self._is_device and self.done:
138 _reorder_bits(this_result)
139
140 if self._status not in self._final_states:
141 if this_result.get_status() == 'ERROR':
142 self._status = JobStatus.ERROR
143 else:
144 self._status = JobStatus.DONE
145 return this_result
146
147 def cancel(self):
148 """Attempt to cancel job. Currently this is only possible on
149 commercial systems.
150 Returns:
151 bool: True if job can be cancelled, else False.
152
153 Raises:
154 IBMQJobError: if server returned error
155 """
156 if self._is_commercial:
157 hub = self._api.config['hub']
158 group = self._api.config['group']
159 project = self._api.config['project']
160 response = self._api.cancel_job(self._id, hub, group, project)
161 if 'error' in response:
162 err_msg = response.get('error', '')
163 error = IBMQJobError('Error cancelling job: %s' % err_msg)
164 self._exception = error
165 raise error
166 else:
167 self._cancelled = True
168 return True
169 else:
170 self._cancelled = False
171 return False
172
173 @property
174 def status(self):
175 self._update_status()
176 stats = {
177 'job_id': self._id,
178 'status': self._status,
179 'status_msg': self._status_msg
180 }
181 if self._queue_position:
182 stats['queue_position'] = self._queue_position
183 # Reset once consumed to allow _update_status to regenerate the
184 # value if needed.
185 self._queue_position = None
186 return stats
187
188 def _update_status(self):
189 """Query the API to update the status."""
190 if (self._status in self._final_states or
191 self._status == JobStatus.INITIALIZING):
192 return None
193
194 try:
195 api_job = self._api.get_job(self.id)
196 if 'status' not in api_job:
197 raise QISKitError('get_job didn\'t return status: %s' %
198 pprint.pformat(api_job))
199 # pylint: disable=broad-except
200 except Exception as err:
201 self._status = JobStatus.ERROR
202 self._exception = err
203 self._status_msg = '{}'.format(err)
204 return None
205
206 if api_job['status'] == 'RUNNING':
207 self._status = JobStatus.RUNNING
208 self._status_msg = self._status.value
209 queued, queue_position = self._is_job_queued(api_job)
210 if queued:
211 self._status = JobStatus.QUEUED
212 self._status_msg = self._status.value
213 if queue_position:
214 self._queue_position = queue_position
215
216 elif api_job['status'] == 'COMPLETED':
217 self._status = JobStatus.DONE
218 self._status_msg = self._status.value
219
220 elif api_job['status'] == 'CANCELLED':
221 self._status = JobStatus.CANCELLED
222 self._status_msg = self._status.value
223 self._cancelled = True
224
225 elif 'ERROR' in api_job['status']:
226 # ERROR_CREATING_JOB or ERROR_RUNNING_JOB
227 self._status = JobStatus.ERROR
228 self._status_msg = api_job['status']
229
230 elif self.exception or self._future_submit.exception():
231 self._status = JobStatus.ERROR
232 if self._future_submit.exception():
233 self._exception = self._future_submit.exception()
234 self._status_msg = str(self.exception)
235
236 else:
237 self._status = JobStatus.ERROR
238 self._exception = IBMQJobError(
239 'Unrecognized result: \n{}'.format(pprint.pformat(api_job)))
240 self._status_msg = '{}'.format(self._exception)
241
242 return api_job
243
244 def _is_job_queued(self, api_job):
245 is_queued, position = False, None
246 if 'infoQueue' in api_job:
247 if 'status' in api_job['infoQueue']:
248 queue_status = api_job['infoQueue']['status']
249 is_queued = queue_status == 'PENDING_IN_QUEUE'
250 if 'position' in api_job['infoQueue']:
251 position = api_job['infoQueue']['position']
252 return is_queued, position
253
254 @property
255 def queued(self):
256 """
257 Returns whether job is queued.
258
259 Returns:
260 bool: True if job is queued, else False.
261
262 Raises:
263 QISKitError: couldn't get job status from server
264 """
265 return self.status['status'] == JobStatus.QUEUED
266
267 @property
268 def running(self):
269 """
270 Returns whether job is actively running
271
272 Returns:
273 bool: True if job is running, else False.
274
275 Raises:
276 QISKitError: couldn't get job status from server
277 """
278 return self.status['status'] == JobStatus.RUNNING
279
280 @property
281 def done(self):
282 """
283 Returns True if job successfully finished running.
284
285 Note: behavior is slightly different from Future objects, which would
286 also return True if successfully cancelled.
287 """
288 return self.status['status'] == JobStatus.DONE
289
290 @property
291 def cancelled(self):
292 return self._cancelled
293
294 @property
295 def exception(self):
296 """
297 Return Exception object previously raised by job else None
298
299 Returns:
300 Exception: exception raised by job
301 """
302 if isinstance(self._exception, Exception):
303 self._status_msg = str(self._exception)
304 return self._exception
305
306 @property
307 def _is_commercial(self):
308 config = self._api.config
309 # this check may give false positives so should probably be improved
310 return config.get('hub') and config.get('group') and config.get('project')
311
312 @property
313 def id(self):
314 """
315 Return backend determined id (also available in status method).
316 """
317 # pylint: disable=invalid-name
318 while self._id is None and self._status not in self._final_states:
319 if self._future_submit.exception():
320 self._status = JobStatus.ERROR
321 self._exception = self._future_submit.exception()
322 # job is initializing and hasn't gotten an id yet.
323 time.sleep(0.1)
324 return self._id
325
326 @property
327 def backend_name(self):
328 """
329 Return backend name used for this job
330 """
331 return self._backend_name
332
333 def _submit(self):
334 """Submit job to IBM Q.
335
336 Returns:
337 dict: submission info including job id from server
338
339 Raises:
340 QISKitError: The backend name in the job doesn't match this backend.
341 ResultError: If the API reported an error with the submitted job.
342 RegisterSizeError: If the requested register size exceeded device
343 capability.
344 """
345 qobj = self._qobj
346 api_jobs = []
347 for circuit in qobj['circuits']:
348 job = {}
349 if (('compiled_circuit_qasm' not in circuit) or
350 (circuit['compiled_circuit_qasm'] is None)):
351 compiled_circuit = transpile(circuit['circuit'])
352 circuit['compiled_circuit_qasm'] = compiled_circuit.qasm(qeflag=True)
353 if isinstance(circuit['compiled_circuit_qasm'], bytes):
354 job['qasm'] = circuit['compiled_circuit_qasm'].decode()
355 else:
356 job['qasm'] = circuit['compiled_circuit_qasm']
357 if 'name' in circuit:
358 job['name'] = circuit['name']
359 # convert numpy types for json serialization
360 compiled_circuit = json.loads(
361 json.dumps(circuit['compiled_circuit'],
362 default=_numpy_type_converter))
363 job['metadata'] = {'compiled_circuit': compiled_circuit}
364 api_jobs.append(job)
365 seed0 = qobj['circuits'][0]['config']['seed']
366 hpc = None
367 if 'hpc' in qobj['config']:
368 try:
369 # Use CamelCase when passing the hpc parameters to the API.
370 hpc = {
371 'multiShotOptimization':
372 qobj['config']['hpc']['multi_shot_optimization'],
373 'ompNumThreads':
374 qobj['config']['hpc']['omp_num_threads']
375 }
376 except (KeyError, TypeError):
377 hpc = None
378 backend_name = qobj['config']['backend_name']
379 if backend_name != self._backend_name:
380 raise QISKitError("inconsistent qobj backend "
381 "name ({0} != {1})".format(backend_name,
382 self._backend_name))
383 submit_info = {}
384 try:
385 submit_info = self._api.run_job(api_jobs, backend=backend_name,
386 shots=qobj['config']['shots'],
387 max_credits=qobj['config']['max_credits'],
388 seed=seed0,
389 hpc=hpc)
390 # pylint: disable=broad-except
391 except Exception as err:
392 self._status = JobStatus.ERROR
393 self._status_msg = str(err)
394 self._exception = err
395 return None
396 if 'error' in submit_info:
397 self._status = JobStatus.ERROR
398 self._status_msg = str(submit_info['error'])
399 self._exception = IBMQJobError(self._status_msg)
400 return submit_info
401 self._id = submit_info.get('id')
402 self.creation_date = submit_info.get('creationDate')
403 self._status = JobStatus.QUEUED
404 return submit_info
405
406 def _wait_for_job(self, timeout=60, wait=5):
407 """Wait until all circuits of a qobj that ran online are 'COMPLETED'.
408
409 Args:
410 timeout (float or None): seconds to wait for job. If None, wait
411 indefinitely.
412 wait (float): seconds between queries
413
414 Returns:
415 Result: A result object.
416
417 Raises:
418 QISKitError: job didn't return status or reported error in status
419 TimeoutError: if the job does not return results before a
420 specified timeout.
421 """
422 start_time = time.time()
423 api_result = self._update_status()
424 while self._status not in self._final_states:
425 elapsed_time = time.time() - start_time
426 if timeout is not None and elapsed_time >= timeout:
427 raise TimeoutError('QISKit timed out')
428 logger.info('status = %s (%d seconds)', api_result['status'],
429 elapsed_time)
430
431 if 'status' not in api_result:
432 self._exception = QISKitError("get_job didn't return status: %s" %
433 (pprint.pformat(api_result)))
434 raise QISKitError("get_job didn't return status: %s" %
435 (pprint.pformat(api_result)))
436
437 if (api_result['status'] == 'ERROR_CREATING_JOB' or
438 api_result['status'] == 'ERROR_RUNNING_JOB'):
439 job_result = {'id': self._id, 'status': 'ERROR',
440 'result': api_result['status']}
441 return Result(job_result)
442
443 time.sleep(wait)
444 api_result = self._update_status()
445
446 if self.cancelled:
447 job_result = {'id': self._id, 'status': 'CANCELLED',
448 'result': 'job cancelled'}
449 return Result(job_result)
450
451 elif self.exception:
452 job_result = {'id': self._id, 'status': 'ERROR',
453 'result': str(self.exception)}
454 return Result(job_result)
455
456 if api_result is None:
457 api_result = self._api.get_job(self._id)
458
459 job_result_list = []
460 for circuit_result in api_result['qasms']:
461 this_result = {'data': circuit_result['data'],
462 'name': circuit_result.get('name'),
463 'compiled_circuit_qasm': circuit_result.get('qasm'),
464 'status': circuit_result['status']}
465 if 'metadata' in circuit_result:
466 this_result['metadata'] = circuit_result['metadata']
467 job_result_list.append(this_result)
468 job_result = {'id': self._id,
469 'status': api_result['status'],
470 'used_credits': api_result.get('usedCredits'),
471 'result': job_result_list}
472 job_result['backend_name'] = self.backend_name
473 return Result(job_result)
474
475
476 class IBMQJobError(QISKitError):
477 """class for IBM Q Job errors"""
478 pass
479
480
481 def _reorder_bits(result):
482 """Temporary fix for ibmq backends.
483 For every circuit that was run, get reordering information from the qobj
484 and apply the reordering to the result."""
485 for circuit_result in result._result['result']:
486 if 'metadata' in circuit_result:
487 circ = circuit_result['metadata'].get('compiled_circuit')
488 else:
489 logger.warning('result object missing metadata for reordering'
490 ' bits: bits may be out of order')
491 return
492 # device_qubit -> device_clbit (how it should have been)
493 measure_dict = {op['qubits'][0]: op['clbits'][0]
494 for op in circ['operations']
495 if op['name'] == 'measure'}
496 counts_dict_new = {}
497 for item in circuit_result['data']['counts'].items():
498 # fix clbit ordering to what it should have been
499 bits = list(item[0])
500 bits.reverse() # lsb in 0th position
501 count = item[1]
502 reordered_bits = list('x' * len(bits))
503 for device_clbit, bit in enumerate(bits):
504 if device_clbit in measure_dict:
505 correct_device_clbit = measure_dict[device_clbit]
506 reordered_bits[correct_device_clbit] = bit
507 reordered_bits.reverse()
508
509 # only keep the clbits specified by circuit, not everything on device
510 num_clbits = circ['header']['number_of_clbits']
511 compact_key = reordered_bits[-num_clbits:]
512 compact_key = "".join([b if b != 'x' else '0'
513 for b in compact_key])
514
515 # insert spaces to signify different classical registers
516 cregs = circ['header']['clbit_labels']
517 if sum([creg[1] for creg in cregs]) != num_clbits:
518 raise ResultError("creg sizes don't add up in result header.")
519 creg_begin_pos = []
520 creg_end_pos = []
521 acc = 0
522 for creg in reversed(cregs):
523 creg_size = creg[1]
524 creg_begin_pos.append(acc)
525 creg_end_pos.append(acc + creg_size)
526 acc += creg_size
527 compact_key = " ".join([compact_key[creg_begin_pos[i]:creg_end_pos[i]]
528 for i in range(len(cregs))])
529
530 # marginalize over unwanted measured qubits
531 if compact_key not in counts_dict_new:
532 counts_dict_new[compact_key] = count
533 else:
534 counts_dict_new[compact_key] += count
535
536 circuit_result['data']['counts'] = counts_dict_new
537
538
539 def _numpy_type_converter(obj):
540 if isinstance(obj, numpy.integer):
541 return int(obj)
542 elif isinstance(obj, numpy.floating): # pylint: disable=no-member
543 return float(obj)
544 elif isinstance(obj, numpy.ndarray):
545 return obj.tolist()
546 return obj
547
[end of qiskit/backends/ibmq/ibmqjob.py]
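An `IBMQJob` is normally obtained from an IBMQ backend rather than constructed directly; once you have one, the polling pattern implied by the API above looks roughly like this sketch (the `job` variable is assumed to come from an IBMQ backend):

```python
import time

from qiskit.backends.jobstatus import JobStatus

# 'job' is assumed to be an IBMQJob returned by an IBMQ backend.
while job.status['status'] not in (JobStatus.DONE, JobStatus.CANCELLED, JobStatus.ERROR):
    time.sleep(5)

# result() also polls internally (via _wait_for_job), so in practice this
# single call is usually sufficient on its own.
result = job.result(timeout=300, wait=5)
print(result.get_status())
```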
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
61392d7a42f2ff26b5d55cebedf8b0e8422d44fc
|
Change backend status from AVAILABLE to OPERATIONAL
### What is the expected enhancement?
The new name is more accurate.
|
2018-06-29T00:11:32Z
|
<patch>
diff --git a/qiskit/_quantumprogram.py b/qiskit/_quantumprogram.py
--- a/qiskit/_quantumprogram.py
+++ b/qiskit/_quantumprogram.py
@@ -822,7 +822,7 @@ def get_backend_status(self, backend):
backend (str): The backend to check
Returns:
- dict: {'available': True}
+ dict: {'operational': True}
Raises:
ConnectionError: if the API call failed.
diff --git a/qiskit/_util.py b/qiskit/_util.py
--- a/qiskit/_util.py
+++ b/qiskit/_util.py
@@ -11,6 +11,7 @@
import re
import sys
import warnings
+from collections import UserDict
API_NAME = 'IBMQuantumExperience'
logger = logging.getLogger(__name__)
@@ -99,7 +100,11 @@ def _enable_deprecation_warnings():
# Instead of using warnings.simple_filter() directly, the internal
# _add_filter() function is used for being able to match against the
# module.
- warnings._add_filter(*deprecation_filter, append=False)
+ try:
+ warnings._add_filter(*deprecation_filter, append=False)
+ except AttributeError:
+ # ._add_filter is internal and not available in some Python versions.
+ pass
def _camel_case_to_snake_case(identifier):
@@ -118,3 +123,20 @@ def _camel_case_to_snake_case(identifier):
_check_python_version()
_check_ibmqx_version()
_enable_deprecation_warnings()
+
+
+class AvailableToOperationalDict(UserDict):
+ """
+ TEMPORARY class for transitioning from `status['available']` to
+ `status['operational']`.
+
+ FIXME: Remove this class as soon as the API is updated, please.
+ """
+ def __getitem__(self, key):
+ if key == 'available':
+ warnings.warn(
+ "status['available'] has been renamed to status['operational'] "
+ " since 0.5.5. Please use status['operational'] accordingly.",
+ DeprecationWarning)
+
+ return super(AvailableToOperationalDict, self).__getitem__(key)
diff --git a/qiskit/backends/basebackend.py b/qiskit/backends/basebackend.py
--- a/qiskit/backends/basebackend.py
+++ b/qiskit/backends/basebackend.py
@@ -10,9 +10,10 @@
To create add-on backend modules subclass the Backend class in this module.
Doing so requires that the required backend interface is implemented.
"""
-
from abc import ABC, abstractmethod
+
from qiskit._qiskiterror import QISKitError
+from qiskit._util import AvailableToOperationalDict
class BaseBackend(ABC):
@@ -60,7 +61,8 @@ def parameters(self):
@property
def status(self):
"""Return backend status"""
- return {'name': self.name, 'available': True}
+ return AvailableToOperationalDict(
+ {'name': self.name, 'operational': True, 'pending_jobs': 0})
@property
def name(self):
diff --git a/qiskit/backends/ibmq/ibmqbackend.py b/qiskit/backends/ibmq/ibmqbackend.py
--- a/qiskit/backends/ibmq/ibmqbackend.py
+++ b/qiskit/backends/ibmq/ibmqbackend.py
@@ -12,7 +12,7 @@
import logging
from qiskit import QISKitError
-from qiskit._util import _camel_case_to_snake_case
+from qiskit._util import _camel_case_to_snake_case, AvailableToOperationalDict
from qiskit.backends import BaseBackend
from qiskit.backends.ibmq.ibmqjob import IBMQJob
from qiskit.backends import JobStatus
@@ -137,10 +137,15 @@ def status(self):
if status['name'] == 'ibmqx_hpc_qasm_simulator':
status['available'] = True
+ # FIXME: this needs to be replaced at the API level - eventually
+ # it will.
+ if 'available' in status:
+ status['operational'] = status['available']
+ del status['available']
except Exception as ex:
raise LookupError(
"Couldn't get backend status: {0}".format(ex))
- return status
+ return AvailableToOperationalDict(status)
def jobs(self, limit=50, skip=0, status=None, db_filter=None):
"""Attempt to get the jobs submitted to the backend.
diff --git a/qiskit/wrapper/_wrapper.py b/qiskit/wrapper/_wrapper.py
--- a/qiskit/wrapper/_wrapper.py
+++ b/qiskit/wrapper/_wrapper.py
@@ -158,7 +158,7 @@ def least_busy(names):
"""
backends = [get_backend(name) for name in names]
try:
- return min([b for b in backends if b.status['available'] and 'pending_jobs' in b.status],
+ return min([b for b in backends if b.status['operational'] and 'pending_jobs' in b.status],
key=lambda b: b.status['pending_jobs']).name
except (ValueError, TypeError):
raise QISKitError("Can only find least_busy backend from a non-empty list.")
diff --git a/qiskit/wrapper/defaultqiskitprovider.py b/qiskit/wrapper/defaultqiskitprovider.py
--- a/qiskit/wrapper/defaultqiskitprovider.py
+++ b/qiskit/wrapper/defaultqiskitprovider.py
@@ -46,7 +46,7 @@ def available_backends(self, filters=None):
each will either pass through, or be filtered out.
1) dict: {'criteria': value}
the criteria can be over backend's `configuration` or `status`
- e.g. {'local': False, 'simulator': False, 'available': True}
+ e.g. {'local': False, 'simulator': False, 'operational': True}
2) callable: BaseBackend -> bool
e.g. lambda x: x.configuration['n_qubits'] > 5
@@ -65,7 +65,7 @@ def available_backends(self, filters=None):
if filters is not None:
if isinstance(filters, dict):
# exact match filter:
- # e.g. {'n_qubits': 5, 'available': True}
+ # e.g. {'n_qubits': 5, 'operational': True}
for key, value in filters.items():
backends = [instance for instance in backends
if instance.configuration.get(key) == value
</patch>
|
[]
|
[]
| ||||
wagtail__wagtail-272
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Related Links validation error
1. Start creating a new page
2. Click "+" on related links
3. Delete the empty related link
4. Fill in all fields except one that is required (for a page)
5. Save as Draft
Validation now reports two errors (the omitted required field and the empty related link).
After fixing the omitted field and deleting the related link, saving succeeds.
</issue>
<code>
[start of README.rst]
1 .. image:: https://travis-ci.org/torchbox/wagtail.png?branch=master
2 :target: https://travis-ci.org/torchbox/wagtail
3
4 .. image:: https://coveralls.io/repos/torchbox/wagtail/badge.png?branch=master
5 :target: https://coveralls.io/r/torchbox/wagtail?branch=master
6
7 .. image:: https://pypip.in/v/wagtail/badge.png?zxcv
8 :target: https://crate.io/packages/wagtail/
9
10 Wagtail CMS
11 ===========
12
13 .. image:: http://i.imgur.com/4pbWQ35.png
14
15 Wagtail is a Django content management system built originally for the `Royal College of Art <http://www.rca.ac.uk/>`_ and focused on flexibility and user experience. Its features include:
16
17 * A fast, attractive editor interface
18 * Complete control over design with standard Django templates
19 * Configure content types through standard Django models
20 * Tightly integrated search (with an `Elasticsearch <http://www.elasticsearch.org/>`_ backend for production)
21 * Strong document and image management
22 * Wide support for embedded content
23 * Simple, configurable permissions
24 * Support for tree-based content organisation
25 * Optional preview->submit->approve workflow
26 * Fast out of the box. `Varnish <https://www.varnish-cache.org/>`_-friendly if you need it
27 * Tests! But not enough; we're working hard to improve this
28
29 Find out more at `wagtail.io <http://wagtail.io/>`_.
30
31 Got a question? Ask it on our `Google Group <https://groups.google.com/forum/#!forum/wagtail>`_.
32
33 Getting started
34 ~~~~~~~~~~~~~~~
35 * To get you up and running quickly, we've provided a demonstration site with all the configuration in place, at `github.com/torchbox/wagtaildemo <https://github.com/torchbox/wagtaildemo/>`_; see the `README <https://github.com/torchbox/wagtaildemo/blob/master/README.md>`_ for installation instructions.
36 * See the `Getting Started <http://wagtail.readthedocs.org/en/latest/gettingstarted.html#getting-started>`_ docs for installation (with the demo app) on a fresh Debian/Ubuntu box with production-ready dependencies, on OS X and on a Vagrant box.
37 * `Serafeim Papastefanos <https://github.com/spapas>`_ has written a `tutorial <http://spapas.github.io/2014/02/13/wagtail-tutorial/>`_ with all the steps to build a simple Wagtail site from scratch.
38
39 Documentation
40 ~~~~~~~~~~~~~
41 Available at `wagtail.readthedocs.org <http://wagtail.readthedocs.org/>`_. and always being updated.
42
43 Compatibility
44 ~~~~~~~~~~~~~
45 Wagtail supports Django 1.6.2+ on Python 2.6 and 2.7. Django 1.7 and Python 3 support are in progress.
46
47 Contributing
48 ~~~~~~~~~~~~
49 If you're a Python or Django developer, fork the repo and get stuck in! Send us a useful pull request and we'll post you a `t-shirt <https://twitter.com/WagtailCMS/status/432166799464210432/photo/1>`_. Our immediate priorities are better docs, more tests, internationalisation and localisation.
50
51
[end of README.rst]
[start of wagtail/wagtailcore/models.py]
1 import sys
2 import os
3 from StringIO import StringIO
4 from urlparse import urlparse
5
6 from modelcluster.models import ClusterableModel
7
8 from django.db import models, connection, transaction
9 from django.db.models import get_model, Q
10 from django.http import Http404
11 from django.core.cache import cache
12 from django.core.handlers.wsgi import WSGIRequest
13 from django.core.handlers.base import BaseHandler
14 from django.contrib.contenttypes.models import ContentType
15 from django.contrib.auth.models import Group
16 from django.conf import settings
17 from django.template.response import TemplateResponse
18 from django.utils.translation import ugettext_lazy as _
19
20 from treebeard.mp_tree import MP_Node
21
22 from wagtail.wagtailcore.util import camelcase_to_underscore
23 from wagtail.wagtailcore.query import PageQuerySet
24
25 from wagtail.wagtailsearch import Indexed, get_search_backend
26
27
28 class SiteManager(models.Manager):
29 def get_by_natural_key(self, hostname):
30 return self.get(hostname=hostname)
31
32
33 class Site(models.Model):
34 hostname = models.CharField(max_length=255, unique=True, db_index=True)
35 port = models.IntegerField(default=80, help_text=_("Set this to something other than 80 if you need a specific port number to appear in URLs (e.g. development on port 8000). Does not affect request handling (so port forwarding still works)."))
36 root_page = models.ForeignKey('Page', related_name='sites_rooted_here')
37 is_default_site = models.BooleanField(default=False, help_text=_("If true, this site will handle requests for all other hostnames that do not have a site entry of their own"))
38
39 def natural_key(self):
40 return (self.hostname,)
41
42 def __unicode__(self):
43 return self.hostname + ("" if self.port == 80 else (":%d" % self.port)) + (" [default]" if self.is_default_site else "")
44
45 @staticmethod
46 def find_for_request(request):
47 """Find the site object responsible for responding to this HTTP request object"""
48 try:
49 hostname = request.META['HTTP_HOST'].split(':')[0]
50 # find a Site matching this specific hostname
51 return Site.objects.get(hostname=hostname)
52 except (Site.DoesNotExist, KeyError):
53 # If no matching site exists, or request does not specify an HTTP_HOST (which
54 # will often be the case for the Django test client), look for a catch-all Site.
55 # If that fails, let the Site.DoesNotExist propagate back to the caller
56 return Site.objects.get(is_default_site=True)
57
58 @property
59 def root_url(self):
60 if self.port == 80:
61 return 'http://%s' % self.hostname
62 elif self.port == 443:
63 return 'https://%s' % self.hostname
64 else:
65 return 'http://%s:%d' % (self.hostname, self.port)
66
67 # clear the wagtail_site_root_paths cache whenever Site records are updated
68 def save(self, *args, **kwargs):
69 result = super(Site, self).save(*args, **kwargs)
70 cache.delete('wagtail_site_root_paths')
71 return result
72
73 @staticmethod
74 def get_site_root_paths():
75 """
76 Return a list of (root_path, root_url) tuples, most specific path first -
77 used to translate url_paths into actual URLs with hostnames
78 """
79 result = cache.get('wagtail_site_root_paths')
80
81 if result is None:
82 result = [
83 (site.id, site.root_page.url_path, site.root_url)
84 for site in Site.objects.select_related('root_page').order_by('-root_page__url_path')
85 ]
86 cache.set('wagtail_site_root_paths', result, 3600)
87
88 return result
89
90
91 PAGE_MODEL_CLASSES = []
92 _PAGE_CONTENT_TYPES = []
93
94
95 def get_page_types():
96 global _PAGE_CONTENT_TYPES
97 if len(_PAGE_CONTENT_TYPES) != len(PAGE_MODEL_CLASSES):
98 _PAGE_CONTENT_TYPES = [
99 ContentType.objects.get_for_model(cls) for cls in PAGE_MODEL_CLASSES
100 ]
101 return _PAGE_CONTENT_TYPES
102
103
104 LEAF_PAGE_MODEL_CLASSES = []
105 _LEAF_PAGE_CONTENT_TYPE_IDS = []
106
107
108 def get_leaf_page_content_type_ids():
109 global _LEAF_PAGE_CONTENT_TYPE_IDS
110 if len(_LEAF_PAGE_CONTENT_TYPE_IDS) != len(LEAF_PAGE_MODEL_CLASSES):
111 _LEAF_PAGE_CONTENT_TYPE_IDS = [
112 ContentType.objects.get_for_model(cls).id for cls in LEAF_PAGE_MODEL_CLASSES
113 ]
114 return _LEAF_PAGE_CONTENT_TYPE_IDS
115
116
117 NAVIGABLE_PAGE_MODEL_CLASSES = []
118 _NAVIGABLE_PAGE_CONTENT_TYPE_IDS = []
119
120
121 def get_navigable_page_content_type_ids():
122 global _NAVIGABLE_PAGE_CONTENT_TYPE_IDS
123 if len(_NAVIGABLE_PAGE_CONTENT_TYPE_IDS) != len(NAVIGABLE_PAGE_MODEL_CLASSES):
124 _NAVIGABLE_PAGE_CONTENT_TYPE_IDS = [
125 ContentType.objects.get_for_model(cls).id for cls in NAVIGABLE_PAGE_MODEL_CLASSES
126 ]
127 return _NAVIGABLE_PAGE_CONTENT_TYPE_IDS
128
129
130 class PageManager(models.Manager):
131 def get_query_set(self):
132 return PageQuerySet(self.model).order_by('path')
133
134 def live(self):
135 return self.get_query_set().live()
136
137 def not_live(self):
138 return self.get_query_set().not_live()
139
140 def page(self, other):
141 return self.get_query_set().page(other)
142
143 def not_page(self, other):
144 return self.get_query_set().not_page(other)
145
146 def descendant_of(self, other, inclusive=False):
147 return self.get_query_set().descendant_of(other, inclusive)
148
149 def not_descendant_of(self, other, inclusive=False):
150 return self.get_query_set().not_descendant_of(other, inclusive)
151
152 def child_of(self, other):
153 return self.get_query_set().child_of(other)
154
155 def not_child_of(self, other):
156 return self.get_query_set().not_child_of(other)
157
158 def ancestor_of(self, other, inclusive=False):
159 return self.get_query_set().ancestor_of(other, inclusive)
160
161 def not_ancestor_of(self, other, inclusive=False):
162 return self.get_query_set().not_ancestor_of(other, inclusive)
163
164 def parent_of(self, other):
165 return self.get_query_set().parent_of(other)
166
167 def not_parent_of(self, other):
168 return self.get_query_set().not_parent_of(other)
169
170 def sibling_of(self, other, inclusive=False):
171 return self.get_query_set().sibling_of(other, inclusive)
172
173 def not_sibling_of(self, other, inclusive=False):
174 return self.get_query_set().not_sibling_of(other, inclusive)
175
176 def type(self, model):
177 return self.get_query_set().type(model)
178
179 def not_type(self, model):
180 return self.get_query_set().not_type(model)
181
182
183 class PageBase(models.base.ModelBase):
184 """Metaclass for Page"""
185 def __init__(cls, name, bases, dct):
186 super(PageBase, cls).__init__(name, bases, dct)
187
188 if cls._deferred:
189 # this is an internal class built for Django's deferred-attribute mechanism;
190 # don't proceed with all this page type registration stuff
191 return
192
193 # Add page manager
194 PageManager().contribute_to_class(cls, 'objects')
195
196 if 'template' not in dct:
197 # Define a default template path derived from the app name and model name
198 cls.template = "%s/%s.html" % (cls._meta.app_label, camelcase_to_underscore(name))
199
200 if 'ajax_template' not in dct:
201 cls.ajax_template = None
202
203 cls._clean_subpage_types = None # to be filled in on first call to cls.clean_subpage_types
204
205 if not dct.get('is_abstract'):
206 # subclasses are only abstract if the subclass itself defines itself so
207 cls.is_abstract = False
208
209 if not cls.is_abstract:
210 # register this type in the list of page content types
211 PAGE_MODEL_CLASSES.append(cls)
212 if cls.subpage_types:
213 NAVIGABLE_PAGE_MODEL_CLASSES.append(cls)
214 else:
215 LEAF_PAGE_MODEL_CLASSES.append(cls)
216
217
218 class Page(MP_Node, ClusterableModel, Indexed):
219 __metaclass__ = PageBase
220
221 title = models.CharField(max_length=255, help_text=_("The page title as you'd like it to be seen by the public"))
222 slug = models.SlugField(help_text=_("The name of the page as it will appear in URLs e.g http://domain.com/blog/[my-slug]/"))
223 # TODO: enforce uniqueness on slug field per parent (will have to be done at the Django
224 # level rather than db, since there is no explicit parent relation in the db)
225 content_type = models.ForeignKey('contenttypes.ContentType', related_name='pages')
226 live = models.BooleanField(default=True, editable=False)
227 has_unpublished_changes = models.BooleanField(default=False, editable=False)
228 url_path = models.CharField(max_length=255, blank=True, editable=False)
229 owner = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True, editable=False, related_name='owned_pages')
230
231 seo_title = models.CharField(verbose_name=_("Page title"), max_length=255, blank=True, help_text=_("Optional. 'Search Engine Friendly' title. This will appear at the top of the browser window."))
232 show_in_menus = models.BooleanField(default=False, help_text=_("Whether a link to this page will appear in automatically generated menus"))
233 search_description = models.TextField(blank=True)
234
235 indexed_fields = {
236 'title': {
237 'type': 'string',
238 'analyzer': 'edgengram_analyzer',
239 'boost': 100,
240 },
241 'live': {
242 'type': 'boolean',
243 'index': 'not_analyzed',
244 },
245 'path': {
246 'type': 'string',
247 'index': 'not_analyzed',
248 },
249 }
250
251 def __init__(self, *args, **kwargs):
252 super(Page, self).__init__(*args, **kwargs)
253 if not self.id and not self.content_type_id:
254 # this model is being newly created rather than retrieved from the db;
255 # set content type to correctly represent the model class that this was
256 # created as
257 self.content_type = ContentType.objects.get_for_model(self)
258
259 def __unicode__(self):
260 return self.title
261
262 # by default pages do not allow any kind of subpages
263 subpage_types = []
264
265 is_abstract = True # don't offer Page in the list of page types a superuser can create
266
267 def set_url_path(self, parent):
268 """
269 Populate the url_path field based on this page's slug and the specified parent page.
270 (We pass a parent in here, rather than retrieving it via get_parent, so that we can give
271 new unsaved pages a meaningful URL when previewing them; at that point the page has not
272 been assigned a position in the tree, as far as treebeard is concerned.)
273 """
274 if parent:
275 self.url_path = parent.url_path + self.slug + '/'
276 else:
277 # a page without a parent is the tree root, which always has a url_path of '/'
278 self.url_path = '/'
279
280 return self.url_path
281
282 @transaction.atomic # ensure that changes are only committed when we have updated all descendant URL paths, to preserve consistency
283 def save(self, *args, **kwargs):
284 update_descendant_url_paths = False
285
286 if self.id is None:
287 # we are creating a record. If we're doing things properly, this should happen
288 # through a treebeard method like add_child, in which case the 'path' field
289 # has been set and so we can safely call get_parent
290 self.set_url_path(self.get_parent())
291 else:
292 # see if the slug has changed from the record in the db, in which case we need to
293 # update url_path of self and all descendants
294 old_record = Page.objects.get(id=self.id)
295 if old_record.slug != self.slug:
296 self.set_url_path(self.get_parent())
297 update_descendant_url_paths = True
298 old_url_path = old_record.url_path
299 new_url_path = self.url_path
300
301 result = super(Page, self).save(*args, **kwargs)
302
303 if update_descendant_url_paths:
304 self._update_descendant_url_paths(old_url_path, new_url_path)
305
306 # Check if this is a root page of any sites and clear the 'wagtail_site_root_paths' key if so
307 if Site.objects.filter(root_page=self).exists():
308 cache.delete('wagtail_site_root_paths')
309
310 return result
311
312 def _update_descendant_url_paths(self, old_url_path, new_url_path):
313 cursor = connection.cursor()
314 if connection.vendor == 'sqlite':
315 update_statement = """
316 UPDATE wagtailcore_page
317 SET url_path = %s || substr(url_path, %s)
318 WHERE path LIKE %s AND id <> %s
319 """
320 elif connection.vendor == 'mysql':
321 update_statement = """
322 UPDATE wagtailcore_page
323 SET url_path= CONCAT(%s, substring(url_path, %s))
324 WHERE path LIKE %s AND id <> %s
325 """
326 else:
327 update_statement = """
328 UPDATE wagtailcore_page
329 SET url_path = %s || substring(url_path from %s)
330 WHERE path LIKE %s AND id <> %s
331 """
332 cursor.execute(update_statement,
333 [new_url_path, len(old_url_path) + 1, self.path + '%', self.id])
334
335 @property
336 def specific(self):
337 """
338 Return this page in its most specific subclassed form.
339 """
340 # the ContentType.objects manager keeps a cache, so this should potentially
341 # avoid a database lookup over doing self.content_type. I think.
342 content_type = ContentType.objects.get_for_id(self.content_type_id)
343 if isinstance(self, content_type.model_class()):
344 # self is already an instance of the most specific class
345 return self
346 else:
347 return content_type.get_object_for_this_type(id=self.id)
348
349 @property
350 def specific_class(self):
351 """
352 return the class that this page would be if instantiated in its
353 most specific form
354 """
355 content_type = ContentType.objects.get_for_id(self.content_type_id)
356 return content_type.model_class()
357
358 def route(self, request, path_components):
359 if path_components:
360 # request is for a child of this page
361 child_slug = path_components[0]
362 remaining_components = path_components[1:]
363
364 try:
365 subpage = self.get_children().get(slug=child_slug)
366 except Page.DoesNotExist:
367 raise Http404
368
369 return subpage.specific.route(request, remaining_components)
370
371 else:
372 # request is for this very page
373 if self.live:
374 return self.serve(request)
375 else:
376 raise Http404
377
378 def save_revision(self, user=None, submitted_for_moderation=False):
379 self.revisions.create(content_json=self.to_json(), user=user, submitted_for_moderation=submitted_for_moderation)
380
381 def get_latest_revision(self):
382 try:
383 revision = self.revisions.order_by('-created_at')[0]
384 except IndexError:
385 return False
386
387 return revision
388
389 def get_latest_revision_as_page(self):
390 try:
391 revision = self.revisions.order_by('-created_at')[0]
392 except IndexError:
393 return self.specific
394
395 return revision.as_page_object()
396
397 def get_context(self, request):
398 return {
399 'self': self,
400 'request': request,
401 }
402
403 def get_template(self, request):
404 if request.is_ajax():
405 return self.ajax_template or self.template
406 else:
407 return self.template
408
409 def serve(self, request):
410 return TemplateResponse(
411 request,
412 self.get_template(request),
413 self.get_context(request)
414 )
415
416 def is_navigable(self):
417 """
418 Return true if it's meaningful to browse subpages of this page -
419 i.e. it currently has subpages, or its page type indicates that sub-pages are supported,
420 or it's at the top level (this rule necessary for empty out-of-the-box sites to have working navigation)
421 """
422 return (not self.is_leaf()) or (self.content_type_id not in get_leaf_page_content_type_ids()) or self.depth == 2
423
424 def get_other_siblings(self):
425 # get sibling pages excluding self
426 return self.get_siblings().exclude(id=self.id)
427
428 @property
429 def full_url(self):
430 """Return the full URL (including protocol / domain) to this page, or None if it is not routable"""
431 for (id, root_path, root_url) in Site.get_site_root_paths():
432 if self.url_path.startswith(root_path):
433 return root_url + self.url_path[len(root_path) - 1:]
434
435 @property
436 def url(self):
437 """
438 Return the 'most appropriate' URL for referring to this page from the pages we serve,
439 within the Wagtail backend and actual website templates;
440 this is the local URL (starting with '/') if we're only running a single site
441 (i.e. we know that whatever the current page is being served from, this link will be on the
442 same domain), and the full URL (with domain) if not.
443 Return None if the page is not routable.
444 """
445 root_paths = Site.get_site_root_paths()
446 for (id, root_path, root_url) in Site.get_site_root_paths():
447 if self.url_path.startswith(root_path):
448 return ('' if len(root_paths) == 1 else root_url) + self.url_path[len(root_path) - 1:]
449
450 def relative_url(self, current_site):
451 """
452 Return the 'most appropriate' URL for this page taking into account the site we're currently on;
453 a local URL if the site matches, or a fully qualified one otherwise.
454 Return None if the page is not routable.
455 """
456 for (id, root_path, root_url) in Site.get_site_root_paths():
457 if self.url_path.startswith(root_path):
458 return ('' if current_site.id == id else root_url) + self.url_path[len(root_path) - 1:]
459
460 @classmethod
461 def search(cls, query_string, show_unpublished=False, search_title_only=False, extra_filters={}, prefetch_related=[], path=None):
462 # Filters
463 filters = extra_filters.copy()
464 if not show_unpublished:
465 filters['live'] = True
466
467 # Path
468 if path:
469 filters['path__startswith'] = path
470
471 # Fields
472 fields = None
473 if search_title_only:
474 fields = ['title']
475
476 # Search
477 s = get_search_backend()
478 return s.search(query_string, model=cls, fields=fields, filters=filters, prefetch_related=prefetch_related)
479
480 @classmethod
481 def clean_subpage_types(cls):
482 """
483 Returns the list of subpage types, with strings converted to class objects
484 where required
485 """
486 if cls._clean_subpage_types is None:
487 res = []
488 for page_type in cls.subpage_types:
489 if isinstance(page_type, basestring):
490 try:
491 app_label, model_name = page_type.split(".")
492 except ValueError:
493 # If we can't split, assume a model in current app
494 app_label = cls._meta.app_label
495 model_name = page_type
496
497 model = get_model(app_label, model_name)
498 if model:
499 res.append(model)
500 else:
501 raise NameError(_("name '{0}' (used in subpage_types list) is not defined.").format(page_type))
502
503 else:
504 # assume it's already a model class
505 res.append(page_type)
506
507 cls._clean_subpage_types = res
508
509 return cls._clean_subpage_types
510
511 @classmethod
512 def allowed_parent_page_types(cls):
513 """
514 Returns the list of page types that this page type can be a subpage of
515 """
516 return [ct for ct in get_page_types() if cls in ct.model_class().clean_subpage_types()]
517
518 @classmethod
519 def allowed_parent_pages(cls):
520 """
521 Returns the list of pages that this page type can be a subpage of
522 """
523 return Page.objects.filter(content_type__in=cls.allowed_parent_page_types())
524
525 @classmethod
526 def get_verbose_name(cls):
527 # This is similar to doing cls._meta.verbose_name.title()
528 # except this doesn't convert any characters to lowercase
529 return ' '.join([word[0].upper() + word[1:] for word in cls._meta.verbose_name.split()])
530
531 @property
532 def status_string(self):
533 if not self.live:
534 return "draft"
535 else:
536 if self.has_unpublished_changes:
537 return "live + draft"
538 else:
539 return "live"
540
541 def has_unpublished_subtree(self):
542 """
543 An awkwardly-defined flag used in determining whether unprivileged editors have
544 permission to delete this article. Returns true if and only if this page is non-live,
545 and it has no live children.
546 """
547 return (not self.live) and (not self.get_descendants().filter(live=True).exists())
548
549 @transaction.atomic # only commit when all descendants are properly updated
550 def move(self, target, pos=None):
551 """
552 Extension to the treebeard 'move' method to ensure that url_path is updated too.
553 """
554 old_url_path = Page.objects.get(id=self.id).url_path
555 super(Page, self).move(target, pos=pos)
556 # treebeard's move method doesn't actually update the in-memory instance, so we need to work
557 # with a freshly loaded one now
558 new_self = Page.objects.get(id=self.id)
559 new_url_path = new_self.set_url_path(new_self.get_parent())
560 new_self.save()
561 new_self._update_descendant_url_paths(old_url_path, new_url_path)
562
563 def permissions_for_user(self, user):
564 """
565 Return a PagePermissionsTester object defining what actions the user can perform on this page
566 """
567 user_perms = UserPagePermissionsProxy(user)
568 return user_perms.for_page(self)
569
570 def dummy_request(self):
571 """
572 Construct a HttpRequest object that is, as far as possible, representative of ones that would
573 receive this page as a response. Used for previewing / moderation and any other place where we
574 want to display a view of this page in the admin interface without going through the regular
575 page routing logic.
576 """
577 url = self.full_url
578 if url:
579 url_info = urlparse(url)
580 hostname = url_info.hostname
581 path = url_info.path
582 port = url_info.port or 80
583 else:
584 hostname = 'example.com'
585 path = '/'
586 port = 80
587
588 request = WSGIRequest({
589 'REQUEST_METHOD': 'GET',
590 'PATH_INFO': path,
591 'SERVER_NAME': hostname,
592 'SERVER_PORT': port,
593 'wsgi.input': StringIO(),
594 })
595
596 # Apply middleware to the request - see http://www.mellowmorning.com/2011/04/18/mock-django-request-for-testing/
597 handler = BaseHandler()
598 handler.load_middleware()
599 for middleware_method in handler._request_middleware:
600 if middleware_method(request):
601 raise Exception("Couldn't create request mock object - "
602 "request middleware returned a response")
603 return request
604
605 def get_page_modes(self):
606 """
607 Return a list of (internal_name, display_name) tuples for the modes in which
608 this page can be displayed for preview/moderation purposes. Ordinarily a page
609 will only have one display mode, but subclasses of Page can override this -
610 for example, a page containing a form might have a default view of the form,
611 and a post-submission 'thankyou' page
612 """
613 return [('', 'Default')]
614
615 def show_as_mode(self, mode_name):
616 """
617 Given an internal name from the get_page_modes() list, return an HTTP response
618 indicative of the page being viewed in that mode. By default this passes a
619 dummy request into the serve() mechanism, ensuring that it matches the behaviour
620 on the front-end; subclasses that define additional page modes will need to
621 implement alternative logic to serve up the appropriate view here.
622 """
623 return self.serve(self.dummy_request())
624
625 def get_static_site_paths(self):
626 """
627 This is a generator of URL paths to feed into a static site generator
628 Override this if you would like to create static versions of subpages
629 """
630 # Yield paths for this page
631 yield '/'
632
633 # Yield paths for child pages
634 for child in self.get_children().live():
635 for path in child.specific.get_static_site_paths():
636 yield '/' + child.slug + path
637
638 def get_ancestors(self, inclusive=False):
639 return Page.objects.ancestor_of(self, inclusive)
640
641 def get_descendants(self, inclusive=False):
642 return Page.objects.descendant_of(self, inclusive)
643
644 def get_siblings(self, inclusive=True):
645 return Page.objects.sibling_of(self, inclusive)
646
647
648 def get_navigation_menu_items():
649 # Get all pages that appear in the navigation menu: ones which have children,
650 # or are a non-leaf type (indicating that they *could* have children),
651 # or are at the top-level (this rule required so that an empty site out-of-the-box has a working menu)
652 navigable_content_type_ids = get_navigable_page_content_type_ids()
653 if navigable_content_type_ids:
654 pages = Page.objects.filter(Q(content_type__in=navigable_content_type_ids)|Q(depth=2)|Q(numchild__gt=0)).order_by('path')
655 else:
656 pages = Page.objects.filter(Q(depth=2)|Q(numchild__gt=0)).order_by('path')
657
658 # Turn this into a tree structure:
659 # tree_node = (page, children)
660 # where 'children' is a list of tree_nodes.
661 # Algorithm:
662 # Maintain a list that tells us, for each depth level, the last page we saw at that depth level.
663 # Since our page list is ordered by path, we know that whenever we see a page
664 # at depth d, its parent must be the last page we saw at depth (d-1), and so we can
665 # find it in that list.
666
667 depth_list = [(None, [])] # a dummy node for depth=0, since one doesn't exist in the DB
668
669 for page in pages:
670 # create a node for this page
671 node = (page, [])
672 # retrieve the parent from depth_list
673 parent_page, parent_childlist = depth_list[page.depth - 1]
674 # insert this new node in the parent's child list
675 parent_childlist.append(node)
676
677 # add the new node to depth_list
678 try:
679 depth_list[page.depth] = node
680 except IndexError:
681 # an exception here means that this node is one level deeper than any we've seen so far
682 depth_list.append(node)
683
684 # in Wagtail, the convention is to have one root node in the db (depth=1); the menu proper
685 # begins with the children of that node (depth=2).
686 try:
687 root, root_children = depth_list[1]
688 return root_children
689 except IndexError:
690 # what, we don't even have a root node? Fine, just return an empty list...
691 return []
692
693
694 class Orderable(models.Model):
695 sort_order = models.IntegerField(null=True, blank=True, editable=False)
696 sort_order_field = 'sort_order'
697
698 class Meta:
699 abstract = True
700 ordering = ['sort_order']
701
702
703 class SubmittedRevisionsManager(models.Manager):
704 def get_query_set(self):
705 return super(SubmittedRevisionsManager, self).get_query_set().filter(submitted_for_moderation=True)
706
707
708 class PageRevision(models.Model):
709 page = models.ForeignKey('Page', related_name='revisions')
710 submitted_for_moderation = models.BooleanField(default=False)
711 created_at = models.DateTimeField(auto_now_add=True)
712 user = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True)
713 content_json = models.TextField()
714
715 objects = models.Manager()
716 submitted_revisions = SubmittedRevisionsManager()
717
718 def save(self, *args, **kwargs):
719 super(PageRevision, self).save(*args, **kwargs)
720 if self.submitted_for_moderation:
721 # ensure that all other revisions of this page have the 'submitted for moderation' flag unset
722 self.page.revisions.exclude(id=self.id).update(submitted_for_moderation=False)
723
724 def as_page_object(self):
725 obj = self.page.specific_class.from_json(self.content_json)
726
727 # Override the possibly-outdated tree parameter fields from this revision object
728 # with up-to-date values
729 obj.path = self.page.path
730 obj.depth = self.page.depth
731 obj.numchild = self.page.numchild
732
733 # Populate url_path based on the revision's current slug and the parent page as determined
734 # by path
735 obj.set_url_path(self.page.get_parent())
736
737 # also copy over other properties which are meaningful for the page as a whole, not a
738 # specific revision of it
739 obj.live = self.page.live
740 obj.has_unpublished_changes = self.page.has_unpublished_changes
741 obj.owner = self.page.owner
742
743 return obj
744
745 def publish(self):
746 page = self.as_page_object()
747 page.live = True
748 page.save()
749 self.submitted_for_moderation = False
750 page.revisions.update(submitted_for_moderation=False)
751
752 PAGE_PERMISSION_TYPE_CHOICES = [
753 ('add', 'Add'),
754 ('edit', 'Edit'),
755 ('publish', 'Publish'),
756 ]
757
758
759 class GroupPagePermission(models.Model):
760 group = models.ForeignKey(Group, related_name='page_permissions')
761 page = models.ForeignKey('Page', related_name='group_permissions')
762 permission_type = models.CharField(max_length=20, choices=PAGE_PERMISSION_TYPE_CHOICES)
763
764
765 class UserPagePermissionsProxy(object):
766 """Helper object that encapsulates all the page permission rules that this user has
767 across the page hierarchy."""
768 def __init__(self, user):
769 self.user = user
770
771 if user.is_active and not user.is_superuser:
772 self.permissions = GroupPagePermission.objects.filter(group__user=self.user).select_related('page')
773
774 def revisions_for_moderation(self):
775 """Return a queryset of page revisions awaiting moderation that this user has publish permission on"""
776
777 # Deal with the trivial cases first...
778 if not self.user.is_active:
779 return PageRevision.objects.none()
780 if self.user.is_superuser:
781 return PageRevision.submitted_revisions.all()
782
783 # get the list of pages for which they have direct publish permission (i.e. they can publish any page within this subtree)
784 publishable_pages = [perm.page for perm in self.permissions if perm.permission_type == 'publish']
785 if not publishable_pages:
786 return PageRevision.objects.none()
787
788 # compile a filter expression to apply to the PageRevision.submitted_revisions manager:
789 # return only those pages whose paths start with one of the publishable_pages paths
790 only_my_sections = Q(page__path__startswith=publishable_pages[0].path)
791 for page in publishable_pages[1:]:
792 only_my_sections = only_my_sections | Q(page__path__startswith=page.path)
793
794 # return the filtered queryset
795 return PageRevision.submitted_revisions.filter(only_my_sections)
796
797 def for_page(self, page):
798 """Return a PagePermissionTester object that can be used to query whether this user has
799 permission to perform specific tasks on the given page"""
800 return PagePermissionTester(self, page)
801
802 def editable_pages(self):
803 """Return a queryset of the pages that this user has permission to edit"""
804 # Deal with the trivial cases first...
805 if not self.user.is_active:
806 return Page.objects.none()
807 if self.user.is_superuser:
808 return Page.objects.all()
809
810 # Translate each of the user's permission rules into a Q-expression
811 q_expressions = []
812 for perm in self.permissions:
813 if perm.permission_type == 'add':
814 # user has edit permission on any subpage of perm.page
815 # (including perm.page itself) that is owned by them
816 q_expressions.append(
817 Q(path__startswith=perm.page.path, owner=self.user)
818 )
819 elif perm.permission_type == 'edit':
820 # user has edit permission on any subpage of perm.page
821 # (including perm.page itself) regardless of owner
822 q_expressions.append(
823 Q(path__startswith=perm.page.path)
824 )
825
826 if q_expressions:
827 all_rules = q_expressions[0]
828 for expr in q_expressions[1:]:
829 all_rules = all_rules | expr
830 return Page.objects.filter(all_rules)
831 else:
832 return Page.objects.none()
833
834 class PagePermissionTester(object):
835 def __init__(self, user_perms, page):
836 self.user = user_perms.user
837 self.user_perms = user_perms
838 self.page = page
839 self.page_is_root = page.depth == 1 # Equivalent to page.is_root()
840
841 if self.user.is_active and not self.user.is_superuser:
842 self.permissions = set(
843 perm.permission_type for perm in user_perms.permissions
844 if self.page.path.startswith(perm.page.path)
845 )
846
847 def can_add_subpage(self):
848 if not self.user.is_active:
849 return False
850 return self.user.is_superuser or ('add' in self.permissions)
851
852 def can_edit(self):
853 if not self.user.is_active:
854 return False
855 if self.page_is_root: # root node is not a page and can never be edited, even by superusers
856 return False
857 return self.user.is_superuser or ('edit' in self.permissions) or ('add' in self.permissions and self.page.owner_id == self.user.id)
858
859 def can_delete(self):
860 if not self.user.is_active:
861 return False
862 if self.page_is_root: # root node is not a page and can never be deleted, even by superusers
863 return False
864
865 if self.user.is_superuser or ('publish' in self.permissions):
866 # Users with publish permission can unpublish any pages that need to be unpublished to achieve deletion
867 return True
868
869 elif 'edit' in self.permissions:
870 # user can only delete if there are no live pages in this subtree
871 return (not self.page.live) and (not self.page.get_descendants().filter(live=True).exists())
872
873 elif 'add' in self.permissions:
874 # user can only delete if all pages in this subtree are unpublished and owned by this user
875 return (
876 (not self.page.live)
877 and (self.page.owner_id == self.user.id)
878 and (not self.page.get_descendants().exclude(live=False, owner=self.user).exists())
879 )
880
881 else:
882 return False
883
884 def can_unpublish(self):
885 if not self.user.is_active:
886 return False
887 if (not self.page.live) or self.page_is_root:
888 return False
889
890 return self.user.is_superuser or ('publish' in self.permissions)
891
892 def can_publish(self):
893 if not self.user.is_active:
894 return False
895 if self.page_is_root:
896 return False
897
898 return self.user.is_superuser or ('publish' in self.permissions)
899
900 def can_publish_subpage(self):
901 """
902 Niggly special case for creating and publishing a page in one go.
903 Differs from can_publish in that we want to be able to publish subpages of root, but not
904 to be able to publish root itself
905 """
906 if not self.user.is_active:
907 return False
908
909 return self.user.is_superuser or ('publish' in self.permissions)
910
911 def can_reorder_children(self):
912 """
913 Keep reorder permissions the same as publishing, since it immediately affects published pages
914 (and the use-cases for a non-admin needing to do it are fairly obscure...)
915 """
916 return self.can_publish_subpage()
917
918 def can_move(self):
919 """
920 Moving a page should be logically equivalent to deleting and re-adding it (and all its children).
921 As such, the permission test for 'can this be moved at all?' should be the same as for deletion.
922 (Further constraints will then apply on where it can be moved *to*.)
923 """
924 return self.can_delete()
925
926 def can_move_to(self, destination):
927 # reject the logically impossible cases first
928 if self.page == destination or destination.is_descendant_of(self.page):
929 return False
930
931 # and shortcut the trivial 'everything' / 'nothing' permissions
932 if not self.user.is_active:
933 return False
934 if self.user.is_superuser:
935 return True
936
937 # check that the page can be moved at all
938 if not self.can_move():
939 return False
940
941 # Inspect permissions on the destination
942 destination_perms = self.user_perms.for_page(destination)
943
944 # we always need at least add permission in the target
945 if 'add' not in destination_perms.permissions:
946 return False
947
948 if self.page.live or self.page.get_descendants().filter(live=True).exists():
949 # moving this page will entail publishing within the destination section
950 return ('publish' in destination_perms.permissions)
951 else:
952 # no publishing required, so the already-tested 'add' permission is sufficient
953 return True
954
[end of wagtail/wagtailcore/models.py]
[start of wagtail/wagtailsearch/backends/db.py]
1 from django.db import models
2
3 from wagtail.wagtailsearch.backends.base import BaseSearch
4 from wagtail.wagtailsearch.indexed import Indexed
5
6
7 class DBSearch(BaseSearch):
8 def __init__(self, params):
9 super(DBSearch, self).__init__(params)
10
11 def reset_index(self):
12 pass # Not needed
13
14 def add_type(self, model):
15 pass # Not needed
16
17 def refresh_index(self):
18 pass # Not needed
19
20 def add(self, obj):
21 pass # Not needed
22
23 def add_bulk(self, obj_list):
24 pass # Not needed
25
26 def delete(self, obj):
27 pass # Not needed
28
29 def search(self, query_string, model, fields=None, filters={}, prefetch_related=[]):
30 # Get terms
31 terms = query_string.split()
32 if not terms:
33 return model.objects.none()
34
35 # Get fields
36 if fields is None:
37 fields = model.indexed_get_indexed_fields().keys()
38
39 # Start with all objects
40 query = model.objects.all()
41
42 # Apply filters
43 if filters:
44 query = query.filter(**filters)
45
46 # Filter by terms
47 for term in terms:
48 term_query = None
49 for field_name in fields:
50 # Check if the field exists (this will filter out indexed callables)
51 try:
52 model._meta.get_field_by_name(field_name)
53 except:
54 continue
55
56 # Filter on this field
57 field_filter = {'%s__icontains' % field_name: term}
58 if term_query is None:
59 term_query = models.Q(**field_filter)
60 else:
61 term_query |= models.Q(**field_filter)
62 query = query.filter(term_query)
63
64 # Distinct
65 query = query.distinct()
66
67 # Prefetch related
68 for prefetch in prefetch_related:
69 query = query.prefetch_related(prefetch)
70
71 return query
[end of wagtail/wagtailsearch/backends/db.py]
[start of wagtail/wagtailsearch/backends/elasticsearch.py]
1 from django.db import models
2
3 from elasticutils import get_es, S
4
5 from wagtail.wagtailsearch.backends.base import BaseSearch
6 from wagtail.wagtailsearch.indexed import Indexed
7
8 import string
9
10
11 class ElasticSearchResults(object):
12 def __init__(self, model, query, prefetch_related=[]):
13 self.model = model
14 self.query = query
15 self.count = query.count()
16 self.prefetch_related = prefetch_related
17
18 def __getitem__(self, key):
19 if isinstance(key, slice):
20 # Get primary keys
21 pk_list_unclean = [result._source["pk"] for result in self.query[key]]
22
23 # Remove duplicate keys (and preserve order)
24 seen_pks = set()
25 pk_list = []
26 for pk in pk_list_unclean:
27 if pk not in seen_pks:
28 seen_pks.add(pk)
29 pk_list.append(pk)
30
31 # Get results
32 results = self.model.objects.filter(pk__in=pk_list)
33
34 # Prefetch related
35 for prefetch in self.prefetch_related:
36 results = results.prefetch_related(prefetch)
37
38 # Put results into a dictionary (using primary key as the key)
39 results_dict = dict((str(result.pk), result) for result in results)
40
41 # Build new list with items in the correct order
42 results_sorted = [results_dict[str(pk)] for pk in pk_list if str(pk) in results_dict]
43
44 # Return the list
45 return results_sorted
46 else:
47 # Return a single item
48 pk = self.query[key]._source["pk"]
49 return self.model.objects.get(pk=pk)
50
51 def __len__(self):
52 return self.count
53
54
55 class ElasticSearch(BaseSearch):
56 def __init__(self, params):
57 super(ElasticSearch, self).__init__(params)
58
59 # Get settings
60 self.es_urls = params.pop('URLS', ['http://localhost:9200'])
61 self.es_index = params.pop('INDEX', 'wagtail')
62 self.es_timeout = params.pop('TIMEOUT', 5)
63 self.es_force_new = params.pop('FORCE_NEW', False)
64
65 # Get ElasticSearch interface
66 # Any remaining params are passed into the ElasticSearch constructor
67 self.es = get_es(
68 urls=self.es_urls,
69 timeout=self.es_timeout,
70 force_new=self.es_force_new,
71 **params)
72 self.s = S().es(
73 urls=self.es_urls,
74 timeout=self.es_timeout,
75 force_new=self.es_force_new,
76 **params).indexes(self.es_index)
77
78 def reset_index(self):
79 # Delete old index
80 try:
81 self.es.delete_index(self.es_index)
82 except:
83 pass
84
85 # Settings
86 INDEX_SETTINGS = {
87 "settings": {
88 "analysis": {
89 "analyzer": {
90 "ngram_analyzer": {
91 "type": "custom",
92 "tokenizer": "lowercase",
93 "filter": ["ngram"]
94 },
95 "edgengram_analyzer": {
96 "type": "custom",
97 "tokenizer": "lowercase",
98 "filter": ["edgengram"]
99 }
100 },
101 "tokenizer": {
102 "ngram_tokenizer": {
103 "type": "nGram",
104 "min_gram": 3,
105 "max_gram": 15,
106 },
107 "edgengram_tokenizer": {
108 "type": "edgeNGram",
109 "min_gram": 2,
110 "max_gram": 15,
111 "side": "front"
112 }
113 },
114 "filter": {
115 "ngram": {
116 "type": "nGram",
117 "min_gram": 3,
118 "max_gram": 15
119 },
120 "edgengram": {
121 "type": "edgeNGram",
122 "min_gram": 1,
123 "max_gram": 15
124 }
125 }
126 }
127 }
128 }
129
130 # Create new index
131 self.es.create_index(self.es_index, INDEX_SETTINGS)
132
133 def add_type(self, model):
134 # Get type name
135 content_type = model.indexed_get_content_type()
136
137 # Get indexed fields
138 indexed_fields = model.indexed_get_indexed_fields()
139
140 # Make field list
141 fields = dict({
142 "pk": dict(type="string", index="not_analyzed", store="yes"),
143 "content_type": dict(type="string"),
144 }.items() + indexed_fields.items())
145
146 # Put mapping
147 self.es.put_mapping(self.es_index, content_type, {
148 content_type: {
149 "properties": fields,
150 }
151 })
152
153 def refresh_index(self):
154 self.es.refresh(self.es_index)
155
156 def add(self, obj):
157 # Make sure the object can be indexed
158 if not self.object_can_be_indexed(obj):
159 return
160
161 # Build document
162 doc = obj.indexed_build_document()
163
164 # Add to index
165 self.es.index(self.es_index, obj.indexed_get_content_type(), doc, id=doc["id"])
166
167 def add_bulk(self, obj_list):
168 # Group all objects by their type
169 type_set = {}
170 for obj in obj_list:
171 # Object must be a descendant of Indexed and be a django model
172 if not self.object_can_be_indexed(obj):
173 continue
174
175 # Get object type
176 obj_type = obj.indexed_get_content_type()
177
178 # If type is currently not in set, add it
179 if obj_type not in type_set:
180 type_set[obj_type] = []
181
182 # Add object to set
183 type_set[obj_type].append(obj.indexed_build_document())
184
185 # Loop through each type and bulk add them
186 results = []
187 for type_name, type_objects in type_set.items():
188 results.append((type_name, len(type_objects)))
189 self.es.bulk_index(self.es_index, type_name, type_objects)
190 return results
191
192 def delete(self, obj):
193 # Object must be a descendant of Indexed and be a django model
194 if not isinstance(obj, Indexed) or not isinstance(obj, models.Model):
195 return
196
197 # Get ID for document
198 doc_id = obj.indexed_get_document_id()
199
200 # Delete document
201 try:
202 self.es.delete(self.es_index, obj.indexed_get_content_type(), doc_id)
203 except:
204 pass # Document doesn't exist, ignore this exception
205
206 def search(self, query_string, model, fields=None, filters={}, prefetch_related=[]):
207 # Model must be a descendant of Indexed and be a django model
208 if not issubclass(model, Indexed) or not issubclass(model, models.Model):
209 return []
210
211 # Clean up query string
212 query_string = "".join([c for c in query_string if c not in string.punctuation])
213
214 # Check that there's still a query string after the clean up
215 if not query_string:
216 return []
217
218 # Query
219 if fields:
220 query = self.s.query_raw({
221 "query_string": {
222 "query": query_string,
223 "fields": fields,
224 }
225 })
226 else:
227 query = self.s.query_raw({
228 "query_string": {
229 "query": query_string,
230 }
231 })
232
233 # Filter results by this content type
234 query = query.filter(content_type__prefix=model.indexed_get_content_type())
235
236 # Extra filters
237 if filters:
238 query = query.filter(**filters)
239
240 # Return search results
241 return ElasticSearchResults(model, query, prefetch_related=prefetch_related)
242
[end of wagtail/wagtailsearch/backends/elasticsearch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
f25c44851e510efe055b2ec5f56755419e328621
|
Related Links validation error
1. Start to create new page
2. Click "+" on related links
3. Delete empty related link
4. Fill all fields except one that is required (for a page)
5. Save as Draft
Now validation reports two errors (the omitted required field and the empty related link).
After fixing the omitted field and deleting the related link again, saving succeeds.
|
Thanks, @utek, we'll investigate.
Related issue:
1) Edit a page that already has related links.
2) Delete a related link.
3) Delete the contents of a required field to cause a validation failure.
4) Save as draft.
-> Deleted related link reappears
5) Re-enter text into required field to cause successful validation
6) Save as draft.
-> Related link has now been deleted.
|
2014-05-29T14:28:23Z
|
<patch>
diff --git a/wagtail/wagtailadmin/views/pages.py b/wagtail/wagtailadmin/views/pages.py
--- a/wagtail/wagtailadmin/views/pages.py
+++ b/wagtail/wagtailadmin/views/pages.py
@@ -5,7 +5,7 @@
from django.contrib.contenttypes.models import ContentType
from django.contrib.auth.decorators import permission_required
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
-from django.utils.translation import ugettext as _
+from django.utils.translation import ugettext as _
from django.views.decorators.vary import vary_on_headers
from wagtail.wagtailadmin.edit_handlers import TabbedInterface, ObjectList
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-4762
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Performance regression in transpile with optimization_level <=2
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master (since https://github.com/Qiskit/qiskit-terra/commit/35db627fbd1c9762d9f394c71a1df129c24797f4 )
- **Python version**: Any
- **Operating system**: Any
### What is the current behavior?
The benchmarking site has flagged a regression on transpile of circuits with optimization <= 2 after https://github.com/Qiskit/qiskit-terra/commit/35db627fbd1c9762d9f394c71a1df129c24797f4 merged. This is a bit unexpected since the primary change was on block collection and consolidation which is only run on level 3 (which got faster).
https://qiskit.github.io/qiskit/#transpiler_levels.TranspilerLevelBenchmarks.time_transpile_qv_14_x_14?machine=qiskit-benchmarking&os=Ubuntu%2018.04&ram=16%20GB&p-transpiler%20optimization%20level=0&p-transpiler%20optimization%20level=1&p-transpiler%20optimization%20level=2&p-transpiler%20optimization%20level=3&commits=35db627f
</issue>
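To make the reported numbers easier to check locally, here is a minimal timing sketch (not the benchmark suite's exact code). The 14x14 Quantum Volume circuit, the `FakeBoeblingen` backend, and the seeds are illustrative choices, not taken from the benchmark configuration.

```python
from time import perf_counter

from qiskit import transpile
from qiskit.circuit.library import QuantumVolume
from qiskit.test.mock import FakeBoeblingen

# 20-qubit fake device; large enough for a 14-qubit circuit (assumed stand-in for the benchmark target).
backend = FakeBoeblingen()
qv_circuit = QuantumVolume(14, depth=14, seed=42)

for level in range(4):
    start = perf_counter()
    transpile(qv_circuit, backend, optimization_level=level, seed_transpiler=42)
    print("optimization_level=%d: %.2f s" % (level, perf_counter() - start))
```

On an affected commit the regression would show up as levels 0-2 getting slower relative to the parent commit, while level 3 improves.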
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = backend_sim.run(assemble(qc)).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/zt-e4sscbg2-p8NHTezPVkC3r8nV6BIUVw)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## Changelog and Release Notes
113
114 The changelog for a particular release is dynamically generated and gets
115 written to the release page on Github for each release. For example, you can
116 find the page for the `0.9.0` release here:
117
118 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
119
120 The changelog for the current release can be found in the releases tab:
121 
122 The changelog provides a quick overview of notable changes for a given
123 release.
124
125 Additionally, as part of each release detailed release notes are written to
126 document in detail what has changed as part of a release. This includes any
127 documentation on potential breaking changes on upgrade and new features.
128 For example, you can find the release notes for the `0.9.0` release in the
129 Qiskit documentation here:
130
131 https://qiskit.org/documentation/release_notes.html#terra-0-9
132
133 ## License
134
135 [Apache License 2.0](LICENSE.txt)
136
[end of README.md]
[start of qiskit/execute.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 =============================================
17 Executing Experiments (:mod:`qiskit.execute`)
18 =============================================
19
20 .. currentmodule:: qiskit.execute
21
22 .. autofunction:: execute
23 """
24 import logging
25 from time import time
26 from qiskit.compiler import transpile, assemble, schedule
27 from qiskit.qobj.utils import MeasLevel, MeasReturnType
28 from qiskit.pulse import Schedule
29 from qiskit.exceptions import QiskitError
30
31 logger = logging.getLogger(__name__)
32
33
34 def _log_submission_time(start_time, end_time):
35 log_msg = ("Total Job Submission Time - %.5f (ms)"
36 % ((end_time - start_time) * 1000))
37 logger.info(log_msg)
38
39
40 def execute(experiments, backend,
41 basis_gates=None, coupling_map=None, # circuit transpile options
42 backend_properties=None, initial_layout=None,
43 seed_transpiler=None, optimization_level=None, pass_manager=None,
44 qobj_id=None, qobj_header=None, shots=1024, # common run options
45 memory=False, max_credits=10, seed_simulator=None,
46 default_qubit_los=None, default_meas_los=None, # schedule run options
47 schedule_los=None, meas_level=MeasLevel.CLASSIFIED,
48 meas_return=MeasReturnType.AVERAGE,
49 memory_slots=None, memory_slot_size=100, rep_time=None, rep_delay=None,
50 parameter_binds=None, schedule_circuit=False, inst_map=None, meas_map=None,
51 scheduling_method=None, init_qubits=None,
52 **run_config):
53 """Execute a list of :class:`qiskit.circuit.QuantumCircuit` or
54 :class:`qiskit.pulse.Schedule` on a backend.
55
56 The execution is asynchronous, and a handle to a job instance is returned.
57
58 Args:
59 experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
60 Circuit(s) or pulse schedule(s) to execute
61
62 backend (BaseBackend):
63 Backend to execute circuits on.
64 Transpiler options are automatically grabbed from
65 backend.configuration() and backend.properties().
66 If any other option is explicitly set (e.g. coupling_map), it
67 will override the backend's.
68
69 basis_gates (list[str]):
70 List of basis gate names to unroll to.
71 e.g: ``['u1', 'u2', 'u3', 'cx']``
72 If ``None``, do not unroll.
73
74 coupling_map (CouplingMap or list): Coupling map (perhaps custom) to
75 target in mapping. Multiple formats are supported:
76
77 #. CouplingMap instance
78 #. list
79 Must be given as an adjacency matrix, where each entry
80 specifies all two-qubit interactions supported by backend
81 e.g:
82 ``[[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]``
83
84 backend_properties (BackendProperties):
85 Properties returned by a backend, including information on gate
86 errors, readout errors, qubit coherence times, etc. Find a backend
87 that provides this information with:
88 ``backend.properties()``
89
90 initial_layout (Layout or dict or list):
91 Initial position of virtual qubits on physical qubits.
92 If this layout makes the circuit compatible with the coupling_map
93 constraints, it will be used.
94 The final layout is not guaranteed to be the same, as the transpiler
95 may permute qubits through swaps or other means.
96
97 Multiple formats are supported:
98
99 #. :class:`qiskit.transpiler.Layout` instance
100 #. ``dict``:
101 virtual to physical::
102
103 {qr[0]: 0,
104 qr[1]: 3,
105 qr[2]: 5}
106
107 physical to virtual::
108 {0: qr[0],
109 3: qr[1],
110 5: qr[2]}
111
112 #. ``list``
113 virtual to physical::
114
115 [0, 3, 5] # virtual qubits are ordered (in addition to named)
116
117 physical to virtual::
118
119 [qr[0], None, None, qr[1], None, qr[2]]
120
121 seed_transpiler (int): Sets random seed for the stochastic parts of the transpiler
122
123 optimization_level (int): How much optimization to perform on the circuits.
124 Higher levels generate more optimized circuits,
125 at the expense of longer transpilation time.
126 #. No optimization
127 #. Light optimization
128 #. Heavy optimization
129 #. Highest optimization
130 If None, level 1 will be chosen as default.
131
132 pass_manager (PassManager): The pass manager to use during transpilation. If this
133 arg is present, auto-selection of pass manager based on the transpile options
134 will be turned off and this pass manager will be used directly.
135
136 qobj_id (str): String identifier to annotate the Qobj
137
138 qobj_header (QobjHeader or dict): User input that will be inserted in Qobj header,
139 and will also be copied to the corresponding :class:`qiskit.result.Result`
140 header. Headers do not affect the run.
141
142 shots (int): Number of repetitions of each circuit, for sampling. Default: 1024
143
144 memory (bool): If True, per-shot measurement bitstrings are returned as well
145 (provided the backend supports it). For OpenPulse jobs, only
146 measurement level 2 supports this option. Default: False
147
148 max_credits (int): Maximum credits to spend on job. Default: 10
149
150 seed_simulator (int): Random seed to control sampling, for when backend is a simulator
151
152 default_qubit_los (list): List of default qubit LO frequencies in Hz
153
154 default_meas_los (list): List of default meas LO frequencies in Hz
155
156 schedule_los (None or list or dict or LoConfig): Experiment LO
157 configurations, if specified the list is in the format::
158
159 list[Union[Dict[PulseChannel, float], LoConfig]] or
160 Union[Dict[PulseChannel, float], LoConfig]
161
162 meas_level (int or MeasLevel): Set the appropriate level of the
163 measurement output for pulse experiments.
164
165 meas_return (str or MeasReturn): Level of measurement data for the
166 backend to return For ``meas_level`` 0 and 1:
167 ``"single"`` returns information from every shot.
168 ``"avg"`` returns average measurement output (averaged over number
169 of shots).
170
171 memory_slots (int): Number of classical memory slots used in this job.
172
173 memory_slot_size (int): Size of each memory slot if the output is Level 0.
174
175 rep_time (int): Time per program execution in sec. Must be from the list provided
176 by the backend (``backend.configuration().rep_times``).
177
178 rep_delay (float): Delay between programs in sec. Only supported on certain
179 backends (``backend.configuration().dynamic_reprate_enabled`` ).
180 If supported, ``rep_delay`` will be used instead of ``rep_time``. Must be from the list
181 provided by the backend (``backend.configuration().rep_delays``).
182
183 parameter_binds (list[dict]): List of Parameter bindings over which the set of
184 experiments will be executed. Each list element (bind) should be of the form
185 ``{Parameter1: value1, Parameter2: value2, ...}``. All binds will be
186 executed across all experiments, e.g. if parameter_binds is a
187 length-n list, and there are m experiments, a total of :math:`m x n`
188 experiments will be run (one for each experiment/bind pair).
189
190 schedule_circuit (bool): If ``True``, ``experiments`` will be converted to
191 :class:`qiskit.pulse.Schedule` objects prior to execution.
192
193 inst_map (InstructionScheduleMap):
194 Mapping of circuit operations to pulse schedules. If None, defaults to the
195 ``instruction_schedule_map`` of ``backend``.
196
197 meas_map (list(list(int))):
198 List of sets of qubits that must be measured together. If None, defaults to
199 the ``meas_map`` of ``backend``.
200
201 scheduling_method (str or list(str)):
202 Optionally specify a particular scheduling method.
203
204 init_qubits (bool): Whether to reset the qubits to the ground state for each shot.
205 Default: ``True``.
206
207 run_config (dict):
208 Extra arguments used to configure the run (e.g. for Aer configurable backends).
209 Refer to the backend documentation for details on these arguments.
210 Note: for now, these keyword arguments will both be copied to the
211 Qobj config, and passed to backend.run()
212
213 Returns:
214 BaseJob: returns job instance derived from BaseJob
215
216 Raises:
217 QiskitError: if the execution cannot be interpreted as either circuits or schedules
218
219 Example:
220 Construct a 5-qubit GHZ circuit and execute 4321 shots on a backend.
221
222 .. jupyter-execute::
223
224 from qiskit import QuantumCircuit, execute, BasicAer
225
226 backend = BasicAer.get_backend('qasm_simulator')
227
228 qc = QuantumCircuit(5, 5)
229 qc.h(0)
230 qc.cx(0, range(1, 5))
231 qc.measure_all()
232
233 job = execute(qc, backend, shots=4321)
234 """
235 if isinstance(experiments, Schedule) or (isinstance(experiments, list) and
236 isinstance(experiments[0], Schedule)):
237 # do not transpile a schedule circuit
238 if schedule_circuit:
239 raise QiskitError("Must supply QuantumCircuit to schedule circuit.")
240 elif pass_manager is not None:
241 # transpiling using pass_manager
242 _check_conflicting_argument(optimization_level=optimization_level,
243 basis_gates=basis_gates,
244 coupling_map=coupling_map,
245 seed_transpiler=seed_transpiler,
246 backend_properties=backend_properties,
247 initial_layout=initial_layout,
248 backend=backend)
249 experiments = pass_manager.run(experiments)
250 else:
251 # transpiling the circuits using given transpile options
252 experiments = transpile(experiments,
253 basis_gates=basis_gates,
254 coupling_map=coupling_map,
255 backend_properties=backend_properties,
256 initial_layout=initial_layout,
257 seed_transpiler=seed_transpiler,
258 optimization_level=optimization_level,
259 backend=backend)
260
261 if schedule_circuit:
262 experiments = schedule(circuits=experiments,
263 backend=backend,
264 inst_map=inst_map,
265 meas_map=meas_map,
266 method=scheduling_method)
267
268 # assembling the circuits into a qobj to be run on the backend
269 qobj = assemble(experiments,
270 qobj_id=qobj_id,
271 qobj_header=qobj_header,
272 shots=shots,
273 memory=memory,
274 max_credits=max_credits,
275 seed_simulator=seed_simulator,
276 default_qubit_los=default_qubit_los,
277 default_meas_los=default_meas_los,
278 schedule_los=schedule_los,
279 meas_level=meas_level,
280 meas_return=meas_return,
281 memory_slots=memory_slots,
282 memory_slot_size=memory_slot_size,
283 rep_time=rep_time,
284 rep_delay=rep_delay,
285 parameter_binds=parameter_binds,
286 backend=backend,
287 init_qubits=init_qubits,
288 **run_config)
289
290 # executing the circuits on the backend and returning the job
291 start_time = time()
292 job = backend.run(qobj, **run_config)
293 end_time = time()
294 _log_submission_time(start_time, end_time)
295 return job
296
297
298 def _check_conflicting_argument(**kargs):
299 conflicting_args = [arg for arg, value in kargs.items() if value]
300 if conflicting_args:
301 raise QiskitError("The parameters pass_manager conflicts with the following "
302 "parameter(s): {}.".format(', '.join(conflicting_args)))
303
[end of qiskit/execute.py]
[start of qiskit/transpiler/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 =====================================
17 Transpiler (:mod:`qiskit.transpiler`)
18 =====================================
19
20 .. currentmodule:: qiskit.transpiler
21
22 Overview
23 ========
24 Transpilation is the process of rewriting a given input circuit to match
25 the topology of a specific quantum device, and/or to optimize the circuit
26 for execution on present day noisy quantum systems.
27
28 Most circuits must undergo a series of transformations that make them compatible with
29 a given target device, and optimize them to reduce the effects of noise on the
30 resulting outcomes. Rewriting quantum circuits to match hardware constraints and
31 optimizing for performance can be far from trivial. The flow of logic in the rewriting
32 tool chain need not be linear, and can often have iterative sub-loops, conditional
33 branches, and other complex behaviors. That being said, the basic building blocks
34 follow the structure given below:
35
36 .. image:: /source_images/transpiling_core_steps.png
37
38 .. raw:: html
39
40 <br>
41
42 Qiskit has four pre-built transpilation pipelines available here:
43 :mod:`qiskit.transpiler.preset_passmanagers`. Unless the reader is familiar with
44 quantum circuit optimization methods and their usage, it is best to use one of
45 these ready-made routines.
46
47
48 Supplementary Information
49 =========================
50
51 .. container:: toggle
52
53 .. container:: header
54
55 **Basis Gates**
56
57 When writing a quantum circuit you are free to use any quantum gate (unitary operator) that
58 you like, along with a collection of non-gate operations such as qubit measurements and
59 reset operations. However, when running a circuit on a real quantum device one no longer
60 has this flexibility. Due to limitations in, for example, the physical interactions
61 between qubits, difficulty in implementing multi-qubit gates, control electronics etc,
62 a quantum computing device can only natively support a handful of quantum gates and non-gate
63 operations. In the present case of IBM Q devices, the native gate set can be found by querying
64 the devices themselves, and looking for the corresponding attribute in their configuration:
65
66 .. jupyter-execute::
67 :hide-code:
68 :hide-output:
69
70 from qiskit.test.mock import FakeVigo
71 backend = FakeVigo()
72
73 .. jupyter-execute::
74
75 backend.configuration().basis_gates
76
77
78 Every quantum circuit run on an IBM Q device must be expressed using only these basis gates.
79 For example, suppose one wants to run a simple phase estimation circuit:
80
81 .. jupyter-execute::
82
83 import numpy as np
84 from qiskit import QuantumCircuit
85 qc = QuantumCircuit(2, 1)
86
87 qc.h(0)
88 qc.x(1)
89 qc.cu1(np.pi/4, 0, 1)
90 qc.h(0)
91 qc.measure([0], [0])
92 qc.draw(output='mpl')
93
94 We have :math:`H`, :math:`X`, and controlled-:math:`U_{1}` gates, all of which are
95 not in our device's basis gate set, and must be expanded. This expansion is taken
96 care of for us in the :func:`qiskit.execute` function. However, we can
97 decompose the circuit to show what it would look like in the native gate set of
98 the IBM Quantum devices:
99
100 .. jupyter-execute::
101
102 qc_basis = qc.decompose()
103 qc_basis.draw(output='mpl')
104
105
106 A few things to highlight. First, the circuit has gotten longer with respect to the
107 initial one. This can be verified by checking the depth of the circuits:
108
109 .. jupyter-execute::
110
111 print('Original depth:', qc.depth(), 'Decomposed Depth:', qc_basis.depth())
112
113 Second, although we had a single controlled gate, the fact that it was not in the basis
114 set means that, when expanded, it requires more than a single `cx` gate to implement.
115 All said, unrolling to the basis set of gates leads to an increase in the depth of a
116 quantum circuit and the number of gates.
117
118 It is important to highlight two special cases:
119
120 1. A SWAP gate is not a native gate on the IBM Q devices, and must be decomposed into
121 three CNOT gates:
122
123 .. jupyter-execute::
124
125 swap_circ = QuantumCircuit(2)
126 swap_circ.swap(0, 1)
127 swap_circ.decompose().draw(output='mpl')
128
129 As a product of three CNOT gates, SWAP gates are expensive operations to perform on
130 noisy quantum devices. However, such operations are usually necessary for embedding a
131 circuit into the limited entangling gate connectivities of actual devices. Thus,
132 minimizing the number of SWAP gates in a circuit is a primary goal in the
133 transpilation process.
134
135
136 2. A Toffoli, or controlled-controlled-not gate (`ccx`), is a three-qubit gate. Given
137 that our basis gate set includes only single- and two-qubit gates, it is obvious that
138 this gate must be decomposed. This decomposition is quite costly:
139
140 .. jupyter-execute::
141
142 ccx_circ = QuantumCircuit(3)
143 ccx_circ.ccx(0, 1, 2)
144 ccx_circ.decompose().draw(output='mpl')
145
146 For every Toffoli gate in a quantum circuit, the IBM Quantum hardware may execute up to
147 six CNOT gates, and a handful of single-qubit gates. From this example, it should be
148 clear that any algorithm that makes use of multiple Toffoli gates will end up as a
149 circuit with large depth and will therefore be appreciably affected by noise and gate
150 errors.
151
152
153 .. raw:: html
154
155 <br>
156
157 .. container:: toggle
158
159 .. container:: header
160
161 **Initial Layout**
162
163 Quantum circuits are abstract entities whose qubits are "virtual" representations of actual
164 qubits used in computations. We need to be able to map these virtual qubits in a one-to-one
165 manner to the "physical" qubits in an actual quantum device.
166
167 .. image:: /source_images/mapping.png
168
169 .. raw:: html
170
171 <br><br>
172
173 By default, qiskit will do this mapping for you. The choice of mapping depends on the
174 properties of the circuit, the particular device you are targeting, and the optimization
175 level that is chosen. The basic mapping strategies are the following:
176
177 - **Trivial layout**: Map virtual qubits to the same numbered physical qubit on the device,
178 i.e. `[0,1,2,3,4]` -> `[0,1,2,3,4]` (default in `optimization_level=0` and
179 `optimization_level=1`).
180
181 - **Dense layout**: Find the sub-graph of the device with same number of qubits as the circuit
182 with the greatest connectivity (default in `optimization_level=2` and `optimization_level=3`).
183
184
185 The choice of initial layout is extremely important when:
186
187 1. Computing the number of SWAP operations needed to map the input circuit onto the device
188 topology.
189
190 2. Taking into account the noise properties of the device.
191
192
193 The choice of `initial_layout` can mean the difference between getting a result,
194 and getting nothing but noise.
195
196 Lets see what layouts are automatically picked at various optimization levels. The modified
197 circuits returned by :func:`qiskit.compiler.transpile` have this initial layout information
198 in them, and we can view this layout selection graphically using
199 :func:`qiskit.visualization.plot_circuit_layout`:
200
201 .. jupyter-execute::
202
203 from qiskit import QuantumCircuit, transpile
204 from qiskit.visualization import plot_circuit_layout
205 from qiskit.test.mock import FakeVigo
206 backend = FakeVigo()
207
208 ghz = QuantumCircuit(3, 3)
209 ghz.h(0)
210 ghz.cx(0,range(1,3))
211 ghz.barrier()
212 ghz.measure(range(3), range(3))
213 ghz.draw(output='mpl')
214
215
216 - **Layout Using Optimization Level 0**
217
218 .. jupyter-execute::
219
220 new_circ_lv0 = transpile(ghz, backend=backend, optimization_level=0)
221 plot_circuit_layout(new_circ_lv0, backend)
222
223
224 - **Layout Using Optimization Level 3**
225
226 .. jupyter-execute::
227
228 new_circ_lv3 = transpile(ghz, backend=backend, optimization_level=3)
229 plot_circuit_layout(new_circ_lv3, backend)
230
231
232 It is completely possible to specify your own initial layout. To do so we can
233 pass a list of integers to :func:`qiskit.compiler.transpile` via the `initial_layout`
234 keyword argument, where the index labels the virtual qubit in the circuit and the
235 corresponding value is the label for the physical qubit to map onto:
236
237 .. jupyter-execute::
238
239 # Virtual -> physical
240 # 0 -> 3
241 # 1 -> 4
242 # 2 -> 2
243
244 my_ghz = transpile(ghz, backend, initial_layout=[3, 4, 2])
245 plot_circuit_layout(my_ghz, backend)
246
247 .. raw:: html
248
249 <br>
250
251
252 .. container:: toggle
253
254 .. container:: header
255
256 **Mapping Circuits to Hardware Topology**
257
258 In order to implement a CNOT gate between qubits in a quantum circuit that are not directly
259 connected on a quantum device one or more SWAP gates must be inserted into the circuit to
260 move the qubit states around until they are adjacent on the device gate map. Each SWAP
261 gate is decomposed into three CNOT gates on the IBM Quantum devices, and represents an
262 expensive and noisy operation to perform. Thus, finding the minimum number of SWAP gates
263 needed to map a circuit onto a given device is an important step (if not the most important)
264 in the whole execution process.
265
266 However, as with many important things in life, finding the optimal SWAP mapping is hard.
267 In fact it is in a class of problems called NP-Hard, and is thus prohibitively expensive
268 to compute for all but the smallest quantum devices and input circuits. To get around this,
269 by default Qiskit uses a stochastic heuristic algorithm called
270 :class:`qiskit.transpiler.passes.StochasticSwap` to compute a good, but not necessarily minimal
271 SWAP count. The use of a stochastic method means the circuits generated by
272 :func:`qiskit.compiler.transpile` (or :func:`qiskit.execute` that calls `transpile` internally)
273 are not guaranteed to be the same over repeated runs. Indeed, running the same circuit
274 repeatedly will in general result in a distribution of circuit depths and gate counts at the
275 output.
276
277 In order to highlight this, we run a GHZ circuit 100 times, using a "bad" (disconnected)
278 `initial_layout`:
279
280 .. jupyter-execute::
281
282 import matplotlib.pyplot as plt
283 from qiskit import QuantumCircuit, transpile
284 from qiskit.test.mock import FakeBoeblingen
285 backend = FakeBoeblingen()
286
287 ghz = QuantumCircuit(5)
288 ghz.h(0)
289 ghz.cx(0,range(1,5))
290 ghz.draw(output='mpl')
291
292
293 .. jupyter-execute::
294
295 depths = []
296 for _ in range(100):
297 depths.append(transpile(ghz,
298 backend,
299 initial_layout=[7, 0, 4, 15, 19],
300 ).depth())
301
302 plt.figure(figsize=(8, 6))
303 plt.hist(depths, bins=list(range(14,36)), align='left', color='#AC557C')
304 plt.xlabel('Depth', fontsize=14)
305 plt.ylabel('Counts', fontsize=14);
306
307
308 This distribution is quite wide, signaling the difficulty the SWAP mapper is having
309 in computing the best mapping. Most circuits will have a distribution of depths,
310 perhaps not as wide as this one, due to the stochastic nature of the default SWAP
311 mapper. Of course, we want the best circuit we can get, especially in cases where
312 the depth is critical to success or failure. In cases like this, it is best to
313 :func:`transpile` a circuit several times, e.g. 10, and take the one with the
314 lowest depth. The :func:`transpile` function will automatically run in parallel
315 mode, making this procedure relatively speedy in most cases.
316
317 .. raw:: html
318
319 <br>
320
321
322 .. container:: toggle
323
324 .. container:: header
325
326 **Gate Optimization**
327
328 Decomposing quantum circuits into the basis gate set of the IBM Quantum devices,
329 and the addition of SWAP gates needed to match hardware topology, conspire to
330 increase the depth and gate count of quantum circuits. Fortunately many routines
331 for optimizing circuits by combining or eliminating gates exist. In some cases
332 these methods are so effective the output circuits have lower depth than the inputs.
333 In other cases, not much can be done, and the computation may be difficult to
334 perform on noisy devices. Different gate optimizations are turned on with
335 different `optimization_level` values. Below we show the benefits gained from
336 setting the optimization level higher:
337
338 .. important::
339
340 The output from :func:`transpile` varies due to the stochastic swap mapper.
341 So the numbers below will likely change each time you run the code.
342
343
344 .. jupyter-execute::
345
346 import matplotlib.pyplot as plt
347 from qiskit import QuantumCircuit, transpile
348 from qiskit.test.mock import FakeBoeblingen
349 backend = FakeBoeblingen()
350
351 ghz = QuantumCircuit(5)
352 ghz.h(0)
353 ghz.cx(0,range(1,5))
354 ghz.draw(output='mpl')
355
356
357 .. jupyter-execute::
358
359 for kk in range(4):
360 circ = transpile(ghz, backend, optimization_level=kk)
361 print('Optimization Level {}'.format(kk))
362 print('Depth:', circ.depth())
363 print('Gate counts:', circ.count_ops())
364 print()
365
366
367 .. raw:: html
368
369 <br>
370
371
372 Transpiler API
373 ==============
374
375 Pass Manager Construction
376 -------------------------
377
378 .. autosummary::
379 :toctree: ../stubs/
380
381 PassManager
382 PassManagerConfig
383 PropertySet
384 FlowController
385
386 Layout and Topology
387 -------------------
388
389 .. autosummary::
390 :toctree: ../stubs/
391
392 Layout
393 CouplingMap
394
395 Fenced Objects
396 --------------
397
398 .. autosummary::
399 :toctree: ../stubs/
400
401 FencedDAGCircuit
402 FencedPropertySet
403
404 Exceptions
405 ----------
406
407 .. autosummary::
408 :toctree: ../stubs/
409
410 TranspilerError
411 TranspilerAccessError
412 """
413
414 from .runningpassmanager import FlowController
415 from .passmanager import PassManager
416 from .passmanager_config import PassManagerConfig
417 from .propertyset import PropertySet
418 from .exceptions import TranspilerError, TranspilerAccessError
419 from .fencedobjs import FencedDAGCircuit, FencedPropertySet
420 from .basepasses import AnalysisPass, TransformationPass
421 from .coupling import CouplingMap
422 from .layout import Layout
423
[end of qiskit/transpiler/__init__.py]
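The docstring above recommends transpiling a circuit several times and keeping the lowest-depth result. A minimal sketch of that idea follows; the GHZ circuit, backend, and deliberately bad `initial_layout` mirror the docstring's own examples, and the trial count of 10 is an arbitrary choice.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.test.mock import FakeBoeblingen

backend = FakeBoeblingen()

ghz = QuantumCircuit(5)
ghz.h(0)
ghz.cx(0, range(1, 5))

# The stochastic swap mapper gives a different circuit on each run,
# so keep the shallowest of several attempts.
candidates = [
    transpile(ghz, backend, initial_layout=[7, 0, 4, 15, 19])
    for _ in range(10)
]
best = min(candidates, key=lambda circ: circ.depth())
print("best depth:", best.depth())
```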
[start of setup.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 "The Qiskit Terra setup file."
16
17 import os
18 import sys
19 from setuptools import setup, find_packages, Extension
20 try:
21 from Cython.Build import cythonize
22 except ImportError:
23 import subprocess
24 subprocess.call([sys.executable, '-m', 'pip', 'install', 'Cython>=0.27.1'])
25 from Cython.Build import cythonize
26
27 REQUIREMENTS = [
28 "contextvars>=2.4;python_version<'3.7'",
29 "jsonschema>=2.6",
30 "networkx>=2.2;python_version>'3.5'",
31 # Networkx 2.4 is the final version with python 3.5 support.
32 "networkx>=2.2,<2.4;python_version=='3.5'",
33 "retworkx>=0.4.0",
34 "numpy>=1.17",
35 "ply>=3.10",
36 "psutil>=5",
37 "scipy>=1.4",
38 "sympy>=1.3",
39 "dill>=0.3",
40 "fastjsonschema>=2.10",
41 "python-constraint>=1.4",
42 "python-dateutil>=2.8.0",
43 ]
44
45 # Add Cython extensions here
46 CYTHON_EXTS = ['utils', 'swap_trial']
47 CYTHON_MODULE = 'qiskit.transpiler.passes.routing.cython.stochastic_swap'
48 CYTHON_SOURCE_DIR = 'qiskit/transpiler/passes/routing/cython/stochastic_swap'
49
50 INCLUDE_DIRS = []
51 # Extra link args
52 LINK_FLAGS = []
53 # If on Win and not in MSYS2 (i.e. Visual studio compile)
54 if (sys.platform == 'win32' and os.environ.get('MSYSTEM') is None):
55 COMPILER_FLAGS = ['/O2']
56 # Everything else
57 else:
58 COMPILER_FLAGS = ['-O2', '-funroll-loops', '-std=c++11']
59 if sys.platform == 'darwin':
60 # These are needed for compiling on OSX 10.14+
61 COMPILER_FLAGS.append('-mmacosx-version-min=10.9')
62 LINK_FLAGS.append('-mmacosx-version-min=10.9')
63
64
65 EXT_MODULES = []
66 # Add Cython Extensions
67 for ext in CYTHON_EXTS:
68 mod = Extension(CYTHON_MODULE + '.' + ext,
69 sources=[CYTHON_SOURCE_DIR + '/' + ext + '.pyx'],
70 include_dirs=INCLUDE_DIRS,
71 extra_compile_args=COMPILER_FLAGS,
72 extra_link_args=LINK_FLAGS,
73 language='c++')
74 EXT_MODULES.append(mod)
75
76 # Read long description from README.
77 README_PATH = os.path.join(os.path.abspath(os.path.dirname(__file__)),
78 'README.md')
79 with open(README_PATH) as readme_file:
80 README = readme_file.read()
81
82 setup(
83 name="qiskit-terra",
84 version="0.15.0",
85 description="Software for developing quantum computing programs",
86 long_description=README,
87 long_description_content_type='text/markdown',
88 url="https://github.com/Qiskit/qiskit-terra",
89 author="Qiskit Development Team",
90 author_email="[email protected]",
91 license="Apache 2.0",
92 classifiers=[
93 "Environment :: Console",
94 "License :: OSI Approved :: Apache Software License",
95 "Intended Audience :: Developers",
96 "Intended Audience :: Science/Research",
97 "Operating System :: Microsoft :: Windows",
98 "Operating System :: MacOS",
99 "Operating System :: POSIX :: Linux",
100 "Programming Language :: Python :: 3 :: Only",
101 "Programming Language :: Python :: 3.5",
102 "Programming Language :: Python :: 3.6",
103 "Programming Language :: Python :: 3.7",
104 "Programming Language :: Python :: 3.8",
105 "Topic :: Scientific/Engineering",
106 ],
107 keywords="qiskit sdk quantum",
108 packages=find_packages(exclude=['test*']),
109 install_requires=REQUIREMENTS,
110 setup_requires=['Cython>=0.27.1'],
111 include_package_data=True,
112 python_requires=">=3.5",
113 extras_require={
114 'visualization': ['matplotlib>=2.1', 'ipywidgets>=7.3.0',
115 'pydot', "pillow>=4.2.1", "pylatexenc>=1.4",
116 "seaborn>=0.9.0", "pygments>=2.4"],
117 'full-featured-simulators': ['qiskit-aer>=0.1'],
118 'crosstalk-pass': ['z3-solver>=4.7'],
119 },
120 project_urls={
121 "Bug Tracker": "https://github.com/Qiskit/qiskit-terra/issues",
122 "Documentation": "https://qiskit.org/documentation/",
123 "Source Code": "https://github.com/Qiskit/qiskit-terra",
124 },
125 ext_modules=cythonize(EXT_MODULES),
126 zip_safe=False
127 )
128
[end of setup.py]
[start of tools/report_ci_failure.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14 """Utility module to open an issue on the repository when CIs fail."""
15
16 import os
17
18 from github import Github
19
20
21 class CIFailureReporter:
22 """Instances of this class can report to GitHub that the CI is failing.
23
24 """
25
26 def __init__(self, repository, token):
27 """
28 Args:
29 repository (str): a string in the form 'owner/repository-name'
30 indicating the GitHub repository to report against.
31 token (str): a GitHub token obtained following:
32 https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
33 """
34 self._repo = repository
35 self._api = Github(token)
36
37 def report(self, branch, commit, infourl=None, job_name=None):
38 """Report on GitHub that the specified branch is failing to build at
39 the specified commit. The method will open an issue indicating that
40 the branch is failing. If there is an issue already open, it will add a
41 comment to avoid reporting twice about the same failure.
42
43 Args:
44 branch (str): branch name to report about.
45 commit (str): commit hash at which the build fails.
46 infourl (str): URL with extra info about the failure such as the
47 build logs.
48 job_name (str): name of the failed ci job.
49 """
50 if branch != 'master' and not branch.startswith('stable/'):
51 return None
52 key_label = self._key_label(branch, job_name)
53 issue_number = self._get_report_issue_number(key_label)
54 if issue_number:
55 self._report_as_comment(issue_number, branch, commit, infourl)
56 else:
57 self._report_as_issue(branch, commit, infourl, job_name)
58
59 def _key_label(self, branch_name, job_name):
60 if job_name == 'Randomized tests':
61 return 'randomized test'
62 elif job_name == 'Benchmarks':
63 return 'benchmarks failing'
64 elif branch_name == 'master':
65 return 'master failing'
66 elif branch_name.startswith('stable/'):
67 return 'stable failing'
68 else:
69 return ''
70
71 def _get_report_issue_number(self, key_label):
72 query = 'state:open label:"{}" repo:{}'.format(
73 key_label, self._repo)
74 results = self._api.search_issues(query=query)
75 try:
76 return results[0].number
77 except IndexError:
78 return None
79
80 def _report_as_comment(self, issue_number, branch, commit, infourl):
81 stamp = _branch_is_failing_stamp(branch, commit)
82 report_exists = self._check_report_existence(issue_number, stamp)
83 if not report_exists:
84 _, body = _branch_is_failing_template(branch, commit, infourl)
85 message_body = '{}\n{}'.format(stamp, body)
86 self._post_new_comment(issue_number, message_body)
87
88 def _check_report_existence(self, issue_number, target):
89 repo = self._api.get_repo(self._repo)
90 issue = repo.get_issue(issue_number)
91 if target in issue.body:
92 return True
93
94 for comment in issue.get_comments():
95 if target in comment.body:
96 return True
97
98 return False
99
100 def _report_as_issue(self, branch, commit, infourl, key_label):
101 repo = self._api.get_repo(self._repo)
102 stamp = _branch_is_failing_stamp(branch, commit)
103 title, body = _branch_is_failing_template(branch, commit, infourl)
104 message_body = '{}\n{}'.format(stamp, body)
105 repo.create_issue(title=title, body=message_body,
106 labels=[key_label])
107
108 def _post_new_comment(self, issue_number, body):
109 repo = self._api.get_repo(self._repo)
110 issue = repo.get_issue(issue_number)
111 issue.create_comment(body)
112
113
114 def _branch_is_failing_template(branch, commit, infourl):
115 title = 'Branch `{}` is failing'.format(branch)
116 body = 'Trying to build `{}` at commit {} failed.'.format(branch, commit)
117 if infourl:
118 body += '\nMore info at: {}'.format(infourl)
119 return title, body
120
121
122 def _branch_is_failing_stamp(branch, commit):
123 return '<!-- commit {}@{} -->'.format(commit, branch)
124
125
126 _REPOSITORY = 'Qiskit/qiskit-terra'
127 _GH_TOKEN = os.getenv('GH_TOKEN')
128
129
130 def _get_repo_name():
131 return os.getenv('TRAVIS_REPO_SLUG') or os.getenv('APPVEYOR_REPO_NAME')
132
133
134 def _get_branch_name():
135 return os.getenv('TRAVIS_BRANCH') or os.getenv('APPVEYOR_REPO_BRANCH')
136
137
138 def _get_commit_hash():
139 return os.getenv('TRAVIS_COMMIT') or os.getenv('APPVEYOR_REPO_COMMIT')
140
141
142 def _get_job_name():
143 return os.getenv('TRAVIS_JOB_NAME') or os.getenv('APPVEYOR_JOB_NAME')
144
145
146 def _get_info_url():
147 if os.getenv('TRAVIS'):
148 job_id = os.getenv('TRAVIS_JOB_ID')
149 return 'https://travis-ci.com/{}/jobs/{}'.format(_REPOSITORY, job_id)
150
151 if os.getenv('APPVEYOR'):
152 build_id = os.getenv('APPVEYOR_BUILD_ID')
153 return 'https://ci.appveyor.com/project/{}/build/{}'.format(_REPOSITORY, build_id)
154
155 return None
156
157
158 if __name__ == '__main__':
159 _REPORTER = CIFailureReporter(_get_repo_name(), _GH_TOKEN)
160 _REPORTER.report(_get_branch_name(), _get_commit_hash(),
161 _get_info_url(), _get_job_name())
162
[end of tools/report_ci_failure.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: Qiskit/qiskit
base_commit: 7d4aaa9a35a3075f84b4caab89ce925ab27592a8
problem_statement:
Performance regression in transpile with optimization_level <=2
### Information
- **Qiskit Terra version**: master (since https://github.com/Qiskit/qiskit-terra/commit/35db627fbd1c9762d9f394c71a1df129c24797f4 )
- **Python version**: Any
- **Operating system**: Any
### What is the current behavior?
The benchmarking site has flagged a regression in transpile times for circuits at optimization_level <= 2 after https://github.com/Qiskit/qiskit-terra/commit/35db627fbd1c9762d9f394c71a1df129c24797f4 merged. This is a bit unexpected, since the primary change was to block collection and consolidation, which only run at optimization level 3 (and which got faster).
https://qiskit.github.io/qiskit/#transpiler_levels.TranspilerLevelBenchmarks.time_transpile_qv_14_x_14?machine=qiskit-benchmarking&os=Ubuntu%2018.04&ram=16%20GB&p-transpiler%20optimization%20level=0&p-transpiler%20optimization%20level=1&p-transpiler%20optimization%20level=2&p-transpiler%20optimization%20level=3&commits=35db627f
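A minimal timing sketch (an editor's addition, not part of the original report) that approximates the flagged QV 14×14 benchmark; the 14-qubit `FakeMelbourne` target and the fixed seeds are assumptions, not taken from the benchmark suite itself:

```python
# Rough reproduction sketch -- assumes a qiskit-terra install with the circuit
# library and mock backends available; the backend and seeds are arbitrary choices.
import time

from qiskit import transpile
from qiskit.circuit.library import QuantumVolume
from qiskit.test.mock import FakeMelbourne

circuit = QuantumVolume(14, depth=14, seed=0)
backend = FakeMelbourne()  # 14-qubit mock device

for level in range(4):
    start = time.perf_counter()
    transpile(circuit, backend=backend, optimization_level=level, seed_transpiler=0)
    print(f"optimization_level={level}: {time.perf_counter() - start:.2f}s")
```

Comparing these timings before and after the commit above should show whether levels 0-2 are the ones affected.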
hints_text:
My working idea here is that this was caused by the addition of `inline=True` https://github.com/Qiskit/qiskit-terra/commit/35db627f#diff-d00f0de07831e0961cb92a4af5f40e5eR480 which makes the decomposition of unitaries from the input QV circuits slower.
created_at: 2020-07-20T19:45:06Z
<patch>
diff --git a/qiskit/circuit/quantumcircuit.py b/qiskit/circuit/quantumcircuit.py
--- a/qiskit/circuit/quantumcircuit.py
+++ b/qiskit/circuit/quantumcircuit.py
@@ -576,31 +576,73 @@ def compose(self, other, qubits=None, clbits=None, front=False, inplace=False):
lcr_1: 0 ═══════════ lcr_1: 0 ═══════════════════════
"""
- if front:
- raise CircuitError("Front composition of QuantumCircuit not supported yet.")
-
- if isinstance(other, QuantumCircuit):
- from qiskit.converters.circuit_to_dag import circuit_to_dag
- from qiskit.converters.dag_to_circuit import dag_to_circuit
-
- dag_self = circuit_to_dag(self)
- dag_other = circuit_to_dag(other)
- dag_self.compose(dag_other, qubits=qubits, clbits=clbits, front=front)
- composed_circuit = dag_to_circuit(dag_self)
- if inplace: # FIXME: this is just a hack for inplace to work. Still copies.
- self.__dict__.update(composed_circuit.__dict__)
- return None
+
+ if inplace:
+ dest = self
+ else:
+ dest = self.copy()
+
+ if not isinstance(other, QuantumCircuit):
+ if front:
+ dest.data.insert(0, (other, qubits, clbits))
else:
- return composed_circuit
+ dest.append(other, qargs=qubits, cargs=clbits)
- else: # fall back to append which accepts Instruction and BaseOperator
if inplace:
- self.append(other, qargs=qubits, cargs=clbits)
return None
- else:
- new_circuit = self.copy()
- new_circuit.append(other, qargs=qubits, cargs=clbits)
- return new_circuit
+ return dest
+
+ instrs = other.data
+
+ if other.num_qubits > self.num_qubits or \
+ other.num_clbits > self.num_clbits:
+ raise CircuitError("Trying to compose with another QuantumCircuit "
+ "which has more 'in' edges.")
+
+ # number of qubits and clbits must match number in circuit or None
+ identity_qubit_map = dict(zip(other.qubits, self.qubits))
+ identity_clbit_map = dict(zip(other.clbits, self.clbits))
+
+ if qubits is None:
+ qubit_map = identity_qubit_map
+ elif len(qubits) != len(other.qubits):
+ raise CircuitError("Number of items in qubits parameter does not"
+ " match number of qubits in the circuit.")
+ else:
+ qubit_map = {other.qubits[i]: (self.qubits[q] if isinstance(q, int) else q)
+ for i, q in enumerate(qubits)}
+ if clbits is None:
+ clbit_map = identity_clbit_map
+ elif len(clbits) != len(other.clbits):
+ raise CircuitError("Number of items in clbits parameter does not"
+ " match number of clbits in the circuit.")
+ else:
+ clbit_map = {other.clbits[i]: (self.clbits[c] if isinstance(c, int) else c)
+ for i, c in enumerate(clbits)}
+
+ edge_map = {**qubit_map, **clbit_map} or {**identity_qubit_map, **identity_clbit_map}
+
+ mapped_instrs = []
+ for instr, qargs, cargs in instrs:
+ n_qargs = [edge_map[qarg] for qarg in qargs]
+ n_cargs = [edge_map[carg] for carg in cargs]
+ n_instr = instr.copy()
+
+ if instr.condition is not None:
+ from qiskit.dagcircuit import DAGCircuit # pylint: disable=cyclic-import
+ n_instr.condition = DAGCircuit._map_condition(edge_map, instr.condition)
+
+ mapped_instrs.append((n_instr, n_qargs, n_cargs))
+
+ if front:
+ dest._data = mapped_instrs + dest._data
+ else:
+ dest._data += mapped_instrs
+
+ if inplace:
+ return None
+
+ return dest
@property
def qubits(self):
diff --git a/qiskit/dagcircuit/dagcircuit.py b/qiskit/dagcircuit/dagcircuit.py
--- a/qiskit/dagcircuit/dagcircuit.py
+++ b/qiskit/dagcircuit/dagcircuit.py
@@ -444,7 +444,8 @@ def _check_wiremap_validity(self, wire_map, keymap, valmap):
raise DAGCircuitError("inconsistent wire_map at (%s,%s)" %
(kname, vname))
- def _map_condition(self, wire_map, condition):
+ @staticmethod
+ def _map_condition(wire_map, condition):
"""Use the wire_map dict to change the condition tuple's creg name.
Args:
</patch>
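For reference (an editor's addition, not part of the change itself), a small usage sketch of the reworked `compose`, which now builds the mapped instruction list directly instead of round-tripping through a DAG; the circuit sizes and the qubit/clbit mapping are arbitrary:

```python
from qiskit import QuantumCircuit

base = QuantumCircuit(3, 3)
other = QuantumCircuit(2, 2)
other.h(0)
other.cx(0, 1)
other.measure([0, 1], [0, 1])

# Map `other`'s qubits/clbits onto qubits 1, 2 and clbits 1, 2 of `base`.
# With the patch this stays at the instruction-list level (no DAG conversion).
new_circuit = base.compose(other, qubits=[1, 2], clbits=[1, 2])
```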
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: pypa__pip-1176
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip install --use-wheel doesn't respect --user for installing scripts
For example:
```
habnabit@bach:~$ pip install --use-wheel -i [redacted] -r requirements.txt --user
Downloading/unpacking zope.interface==4.0.5 (from -r requirements.txt (line 2))
Downloading zope.interface-4.0.5-cp27-none-linux_x86_64.whl (144kB): 144kB downloaded
Downloading/unpacking Twisted==12.3.0 (from -r requirements.txt (line 3))
Downloading Twisted-12.3.0-cp27-none-linux_x86_64.whl (2.9MB): 2.9MB downloaded
Downloading/unpacking py-bcrypt==0.3 (from -r requirements.txt (line 4))
Downloading py_bcrypt-0.3-cp27-none-linux_x86_64.whl
Downloading/unpacking web.py==0.37 (from -r requirements.txt (line 5))
Downloading web.py-0.37-py27-none-any.whl (100kB): 100kB downloaded
Downloading/unpacking SQLAlchemy==0.7.10 (from -r requirements.txt (line 6))
Downloading SQLAlchemy-0.7.10-cp27-none-linux_x86_64.whl (727kB): 727kB downloaded
Downloading/unpacking psycopg2==2.5 (from -r requirements.txt (line 7))
Downloading psycopg2-2.5-cp27-none-linux_x86_64.whl (309kB): 309kB downloaded
Downloading/unpacking sanpera==0.1.1.dev1 (from -r requirements.txt (line 8))
Downloading sanpera-0.1.1.dev1.tar.gz (239kB): 239kB downloaded
Running setup.py egg_info for package sanpera
Package ImageMagick was not found in the pkg-config search path.
Perhaps you should add the directory containing `ImageMagick.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ImageMagick' found
Package ImageMagick was not found in the pkg-config search path.
Perhaps you should add the directory containing `ImageMagick.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ImageMagick' found
Downloading/unpacking anyjson==0.3.3 (from -r requirements.txt (line 9))
Downloading anyjson-0.3.3-py27-none-any.whl
Downloading/unpacking yajl==0.3.5 (from -r requirements.txt (line 10))
Downloading yajl-0.3.5-cp27-none-linux_x86_64.whl (56kB): 56kB downloaded
Requirement already satisfied (use --upgrade to upgrade): setuptools in ./.local/lib/python2.7/site-packages/setuptools-1.0-py2.7.egg (from zope.interface==4.0.5->-r requirements.txt (line 2))
Installing collected packages: zope.interface, Twisted, py-bcrypt, web.py, SQLAlchemy, psycopg2, sanpera, anyjson, yajl
Cleaning up...
Exception:
Traceback (most recent call last):
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/basecommand.py", line 134, in main
status = self.run(options, args)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/commands/install.py", line 241, in run
requirement_set.install(install_options, global_options, root=options.root_path)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py", line 1298, in install
requirement.install(install_options, global_options, *args, **kwargs)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py", line 595, in install
self.move_wheel_files(self.source_dir)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py", line 815, in move_wheel_files
move_wheel_files(self.name, self.req, wheeldir, user=self.use_user_site, home=self.target_dir)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/wheel.py", line 184, in move_wheel_files
clobber(source, dest, False, fixer=fixer)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/wheel.py", line 166, in clobber
shutil.move(srcfile, destfile)
File "/usr/lib/python2.7/shutil.py", line 301, in move
copy2(src, real_dst)
File "/usr/lib/python2.7/shutil.py", line 130, in copy2
copyfile(src, dst)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 13] Permission denied: '/usr/bin/trial'
```
Why is this trying to write to `/usr/bin` when `--user` is given?
</issue>
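As context for the traceback above (an editor's note, not part of the original report), a `--user` install is expected to put console scripts under the per-user base rather than `/usr/bin`; a quick way to see that location on a given machine:

```python
# Editor's sketch: print where user-scheme scripts should land.
# On a typical Linux install this is ~/.local/bin, not /usr/bin.
import os
import site

print(site.USER_BASE)                       # e.g. /home/<user>/.local
print(os.path.join(site.USER_BASE, "bin"))  # e.g. /home/<user>/.local/bin
```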
<code>
[start of README.rst]
1 pip
2 ===
3
4 .. image:: https://pypip.in/v/pip/badge.png
5 :target: https://crate.io/packages/pip
6
7 .. image:: https://secure.travis-ci.org/pypa/pip.png?branch=develop
8 :target: http://travis-ci.org/pypa/pip
9
10 For documentation, see http://www.pip-installer.org
11
[end of README.rst]
[start of pip/cmdoptions.py]
1 """shared options and groups"""
2 from optparse import make_option, OptionGroup
3 from pip.locations import build_prefix
4
5
6 def make_option_group(group, parser):
7 """
8 Return an OptionGroup object
9 group -- assumed to be dict with 'name' and 'options' keys
10 parser -- an optparse Parser
11 """
12 option_group = OptionGroup(parser, group['name'])
13 for option in group['options']:
14 option_group.add_option(option)
15 return option_group
16
17 ###########
18 # options #
19 ###########
20
21 index_url = make_option(
22 '-i', '--index-url', '--pypi-url',
23 dest='index_url',
24 metavar='URL',
25 default='https://pypi.python.org/simple/',
26 help='Base URL of Python Package Index (default %default).')
27
28 extra_index_url = make_option(
29 '--extra-index-url',
30 dest='extra_index_urls',
31 metavar='URL',
32 action='append',
33 default=[],
34 help='Extra URLs of package indexes to use in addition to --index-url.')
35
36 no_index = make_option(
37 '--no-index',
38 dest='no_index',
39 action='store_true',
40 default=False,
41 help='Ignore package index (only looking at --find-links URLs instead).')
42
43 find_links = make_option(
44 '-f', '--find-links',
45 dest='find_links',
46 action='append',
47 default=[],
48 metavar='url',
49 help="If a url or path to an html file, then parse for links to archives. If a local path or file:// url that's a directory, then look for archives in the directory listing.")
50
51 use_mirrors = make_option(
52 '-M', '--use-mirrors',
53 dest='use_mirrors',
54 action='store_true',
55 default=False,
56 help='Use the PyPI mirrors as a fallback in case the main index is down.')
57
58 mirrors = make_option(
59 '--mirrors',
60 dest='mirrors',
61 metavar='URL',
62 action='append',
63 default=[],
64 help='Specific mirror URLs to query when --use-mirrors is used.')
65
66 allow_external = make_option(
67 "--allow-external",
68 dest="allow_external",
69 action="append",
70 default=[],
71 metavar="PACKAGE",
72 help="Allow the installation of externally hosted files",
73 )
74
75 allow_all_external = make_option(
76 "--allow-all-external",
77 dest="allow_all_external",
78 action="store_true",
79 default=True, # TODO: Change to False after 1.4 has been released
80 help="Allow the installation of all externally hosted files",
81 )
82
83 # TODO: NOOP after 1.4 has been released
84 no_allow_external = make_option(
85 "--no-allow-external",
86 dest="allow_all_external",
87 action="store_false",
88 help="Disallow the installation of all externally hosted files",
89 )
90
91 allow_unsafe = make_option(
92 "--allow-insecure",
93 dest="allow_insecure",
94 action="append",
95 default=[],
96 metavar="PACKAGE",
97 help="Allow the installation of insecure and unverifiable files",
98 )
99
100 no_allow_unsafe = make_option(
101 "--no-allow-insecure",
102 dest="allow_all_insecure",
103 action="store_false",
104 default=True,
105 help="Disallow the installation of insecure and unverifiable files"
106 )
107
108 requirements = make_option(
109 '-r', '--requirement',
110 dest='requirements',
111 action='append',
112 default=[],
113 metavar='file',
114 help='Install from the given requirements file. '
115 'This option can be used multiple times.')
116
117 use_wheel = make_option(
118 '--use-wheel',
119 dest='use_wheel',
120 action='store_true',
121 help='Find and prefer wheel archives when searching indexes and find-links locations. Default to accepting source archives.')
122
123 download_cache = make_option(
124 '--download-cache',
125 dest='download_cache',
126 metavar='dir',
127 default=None,
128 help='Cache downloaded packages in <dir>.')
129
130 no_deps = make_option(
131 '--no-deps', '--no-dependencies',
132 dest='ignore_dependencies',
133 action='store_true',
134 default=False,
135 help="Don't install package dependencies.")
136
137 build_dir = make_option(
138 '-b', '--build', '--build-dir', '--build-directory',
139 dest='build_dir',
140 metavar='dir',
141 default=build_prefix,
142 help='Directory to unpack packages into and build in. '
143 'The default in a virtualenv is "<venv path>/build". '
144 'The default for global installs is "<OS temp dir>/pip_build_<username>".')
145
146 install_options = make_option(
147 '--install-option',
148 dest='install_options',
149 action='append',
150 metavar='options',
151 help="Extra arguments to be supplied to the setup.py install "
152 "command (use like --install-option=\"--install-scripts=/usr/local/bin\"). "
153 "Use multiple --install-option options to pass multiple options to setup.py install. "
154 "If you are using an option with a directory path, be sure to use absolute path.")
155
156 global_options = make_option(
157 '--global-option',
158 dest='global_options',
159 action='append',
160 metavar='options',
161 help="Extra global options to be supplied to the setup.py "
162 "call before the install command.")
163
164 no_clean = make_option(
165 '--no-clean',
166 action='store_true',
167 default=False,
168 help="Don't clean up build directories.")
169
170
171 ##########
172 # groups #
173 ##########
174
175 index_group = {
176 'name': 'Package Index Options',
177 'options': [
178 index_url,
179 extra_index_url,
180 no_index,
181 find_links,
182 use_mirrors,
183 mirrors,
184 allow_external,
185 allow_all_external,
186 no_allow_external,
187 allow_unsafe,
188 no_allow_unsafe,
189 ]
190 }
191
[end of pip/cmdoptions.py]
[start of pip/commands/install.py]
1 import os
2 import sys
3 import tempfile
4 import shutil
5 from pip.req import InstallRequirement, RequirementSet, parse_requirements
6 from pip.log import logger
7 from pip.locations import src_prefix, virtualenv_no_global, distutils_scheme
8 from pip.basecommand import Command
9 from pip.index import PackageFinder
10 from pip.exceptions import InstallationError, CommandError, PreviousBuildDirError
11 from pip import cmdoptions
12
13
14 class InstallCommand(Command):
15 """
16 Install packages from:
17
18 - PyPI (and other indexes) using requirement specifiers.
19 - VCS project urls.
20 - Local project directories.
21 - Local or remote source archives.
22
23 pip also supports installing from "requirements files", which provide
24 an easy way to specify a whole environment to be installed.
25
26 See http://www.pip-installer.org for details on VCS url formats and
27 requirements files.
28 """
29 name = 'install'
30
31 usage = """
32 %prog [options] <requirement specifier> ...
33 %prog [options] -r <requirements file> ...
34 %prog [options] [-e] <vcs project url> ...
35 %prog [options] [-e] <local project path> ...
36 %prog [options] <archive url/path> ..."""
37
38 summary = 'Install packages.'
39 bundle = False
40
41 def __init__(self, *args, **kw):
42 super(InstallCommand, self).__init__(*args, **kw)
43
44 cmd_opts = self.cmd_opts
45
46 cmd_opts.add_option(
47 '-e', '--editable',
48 dest='editables',
49 action='append',
50 default=[],
51 metavar='path/url',
52 help='Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url.')
53
54 cmd_opts.add_option(cmdoptions.requirements)
55 cmd_opts.add_option(cmdoptions.build_dir)
56
57 cmd_opts.add_option(
58 '-t', '--target',
59 dest='target_dir',
60 metavar='dir',
61 default=None,
62 help='Install packages into <dir>.')
63
64 cmd_opts.add_option(
65 '-d', '--download', '--download-dir', '--download-directory',
66 dest='download_dir',
67 metavar='dir',
68 default=None,
69 help="Download packages into <dir> instead of installing them, regardless of what's already installed.")
70
71 cmd_opts.add_option(cmdoptions.download_cache)
72
73 cmd_opts.add_option(
74 '--src', '--source', '--source-dir', '--source-directory',
75 dest='src_dir',
76 metavar='dir',
77 default=src_prefix,
78 help='Directory to check out editable projects into. '
79 'The default in a virtualenv is "<venv path>/src". '
80 'The default for global installs is "<current dir>/src".')
81
82 cmd_opts.add_option(
83 '-U', '--upgrade',
84 dest='upgrade',
85 action='store_true',
86 help='Upgrade all packages to the newest available version. '
87 'This process is recursive regardless of whether a dependency is already satisfied.')
88
89 cmd_opts.add_option(
90 '--force-reinstall',
91 dest='force_reinstall',
92 action='store_true',
93 help='When upgrading, reinstall all packages even if they are '
94 'already up-to-date.')
95
96 cmd_opts.add_option(
97 '-I', '--ignore-installed',
98 dest='ignore_installed',
99 action='store_true',
100 help='Ignore the installed packages (reinstalling instead).')
101
102 cmd_opts.add_option(cmdoptions.no_deps)
103
104 cmd_opts.add_option(
105 '--no-install',
106 dest='no_install',
107 action='store_true',
108 help="Download and unpack all packages, but don't actually install them.")
109
110 cmd_opts.add_option(
111 '--no-download',
112 dest='no_download',
113 action="store_true",
114 help="Don't download any packages, just install the ones already downloaded "
115 "(completes an install run with --no-install).")
116
117 cmd_opts.add_option(cmdoptions.install_options)
118 cmd_opts.add_option(cmdoptions.global_options)
119
120 cmd_opts.add_option(
121 '--user',
122 dest='use_user_site',
123 action='store_true',
124 help='Install using the user scheme.')
125
126 cmd_opts.add_option(
127 '--egg',
128 dest='as_egg',
129 action='store_true',
130 help="Install as self contained egg file, like easy_install does.")
131
132 cmd_opts.add_option(
133 '--root',
134 dest='root_path',
135 metavar='dir',
136 default=None,
137 help="Install everything relative to this alternate root directory.")
138
139 cmd_opts.add_option(cmdoptions.use_wheel)
140
141 cmd_opts.add_option(
142 '--pre',
143 action='store_true',
144 default=False,
145 help="Include pre-release and development versions. By default, pip only finds stable versions.")
146
147 cmd_opts.add_option(cmdoptions.no_clean)
148
149 index_opts = cmdoptions.make_option_group(cmdoptions.index_group, self.parser)
150
151 self.parser.insert_option_group(0, index_opts)
152 self.parser.insert_option_group(0, cmd_opts)
153
154 def _build_package_finder(self, options, index_urls):
155 """
156 Create a package finder appropriate to this install command.
157 This method is meant to be overridden by subclasses, not
158 called directly.
159 """
160 return PackageFinder(find_links=options.find_links,
161 index_urls=index_urls,
162 use_mirrors=options.use_mirrors,
163 mirrors=options.mirrors,
164 use_wheel=options.use_wheel,
165 allow_external=options.allow_external,
166 allow_insecure=options.allow_insecure,
167 allow_all_external=options.allow_all_external,
168 allow_all_insecure=options.allow_all_insecure,
169 allow_all_prereleases=options.pre,
170 )
171
172 def run(self, options, args):
173 if options.download_dir:
174 options.no_install = True
175 options.ignore_installed = True
176 options.build_dir = os.path.abspath(options.build_dir)
177 options.src_dir = os.path.abspath(options.src_dir)
178 install_options = options.install_options or []
179 if options.use_user_site:
180 if virtualenv_no_global():
181 raise InstallationError("Can not perform a '--user' install. User site-packages are not visible in this virtualenv.")
182 install_options.append('--user')
183
184 temp_target_dir = None
185 if options.target_dir:
186 options.ignore_installed = True
187 temp_target_dir = tempfile.mkdtemp()
188 options.target_dir = os.path.abspath(options.target_dir)
189 if os.path.exists(options.target_dir) and not os.path.isdir(options.target_dir):
190 raise CommandError("Target path exists but is not a directory, will not continue.")
191 install_options.append('--home=' + temp_target_dir)
192
193 global_options = options.global_options or []
194 index_urls = [options.index_url] + options.extra_index_urls
195 if options.no_index:
196 logger.notify('Ignoring indexes: %s' % ','.join(index_urls))
197 index_urls = []
198
199 finder = self._build_package_finder(options, index_urls)
200
201 requirement_set = RequirementSet(
202 build_dir=options.build_dir,
203 src_dir=options.src_dir,
204 download_dir=options.download_dir,
205 download_cache=options.download_cache,
206 upgrade=options.upgrade,
207 as_egg=options.as_egg,
208 ignore_installed=options.ignore_installed,
209 ignore_dependencies=options.ignore_dependencies,
210 force_reinstall=options.force_reinstall,
211 use_user_site=options.use_user_site,
212 target_dir=temp_target_dir)
213 for name in args:
214 requirement_set.add_requirement(
215 InstallRequirement.from_line(name, None))
216 for name in options.editables:
217 requirement_set.add_requirement(
218 InstallRequirement.from_editable(name, default_vcs=options.default_vcs))
219 for filename in options.requirements:
220 for req in parse_requirements(filename, finder=finder, options=options):
221 requirement_set.add_requirement(req)
222 if not requirement_set.has_requirements:
223 opts = {'name': self.name}
224 if options.find_links:
225 msg = ('You must give at least one requirement to %(name)s '
226 '(maybe you meant "pip %(name)s %(links)s"?)' %
227 dict(opts, links=' '.join(options.find_links)))
228 else:
229 msg = ('You must give at least one requirement '
230 'to %(name)s (see "pip help %(name)s")' % opts)
231 logger.warn(msg)
232 return
233
234 try:
235 if not options.no_download:
236 requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
237 else:
238 requirement_set.locate_files()
239
240 if not options.no_install and not self.bundle:
241 requirement_set.install(install_options, global_options, root=options.root_path)
242 installed = ' '.join([req.name for req in
243 requirement_set.successfully_installed])
244 if installed:
245 logger.notify('Successfully installed %s' % installed)
246 elif not self.bundle:
247 downloaded = ' '.join([req.name for req in
248 requirement_set.successfully_downloaded])
249 if downloaded:
250 logger.notify('Successfully downloaded %s' % downloaded)
251 elif self.bundle:
252 requirement_set.create_bundle(self.bundle_filename)
253 logger.notify('Created bundle in %s' % self.bundle_filename)
254 except PreviousBuildDirError:
255 options.no_clean = True
256 raise
257 finally:
258 # Clean up
259 if (not options.no_clean) and ((not options.no_install) or options.download_dir):
260 requirement_set.cleanup_files(bundle=self.bundle)
261
262 if options.target_dir:
263 if not os.path.exists(options.target_dir):
264 os.makedirs(options.target_dir)
265 lib_dir = distutils_scheme('', home=temp_target_dir)['purelib']
266 for item in os.listdir(lib_dir):
267 shutil.move(
268 os.path.join(lib_dir, item),
269 os.path.join(options.target_dir, item)
270 )
271 shutil.rmtree(temp_target_dir)
272 return requirement_set
273
[end of pip/commands/install.py]
[start of pip/util.py]
1 import sys
2 import shutil
3 import os
4 import stat
5 import re
6 import posixpath
7 import pkg_resources
8 import zipfile
9 import tarfile
10 import subprocess
11 import textwrap
12 from pip.exceptions import InstallationError, BadCommand, PipError
13 from pip.backwardcompat import(WindowsError, string_types, raw_input,
14 console_to_str, user_site, PermissionError)
15 from pip.locations import site_packages, running_under_virtualenv, virtualenv_no_global
16 from pip.log import logger
17 from pip.vendor.distlib import version
18
19 __all__ = ['rmtree', 'display_path', 'backup_dir',
20 'find_command', 'ask', 'Inf',
21 'normalize_name', 'splitext',
22 'format_size', 'is_installable_dir',
23 'is_svn_page', 'file_contents',
24 'split_leading_dir', 'has_leading_dir',
25 'make_path_relative', 'normalize_path',
26 'renames', 'get_terminal_size', 'get_prog',
27 'unzip_file', 'untar_file', 'create_download_cache_folder',
28 'cache_download', 'unpack_file', 'call_subprocess']
29
30
31 def get_prog():
32 try:
33 if os.path.basename(sys.argv[0]) in ('__main__.py', '-c'):
34 return "%s -m pip" % sys.executable
35 except (AttributeError, TypeError, IndexError):
36 pass
37 return 'pip'
38
39
40 def rmtree(dir, ignore_errors=False):
41 shutil.rmtree(dir, ignore_errors=ignore_errors,
42 onerror=rmtree_errorhandler)
43
44
45 def rmtree_errorhandler(func, path, exc_info):
46 """On Windows, the files in .svn are read-only, so when rmtree() tries to
47 remove them, an exception is thrown. We catch that here, remove the
48 read-only attribute, and hopefully continue without problems."""
49 exctype, value = exc_info[:2]
50 if not ((exctype is WindowsError and value.args[0] == 5) or #others
51 (exctype is OSError and value.args[0] == 13) or #python2.4
52 (exctype is PermissionError and value.args[3] == 5) #python3.3
53 ):
54 raise
55 # file type should currently be read only
56 if ((os.stat(path).st_mode & stat.S_IREAD) != stat.S_IREAD):
57 raise
58 # convert to read/write
59 os.chmod(path, stat.S_IWRITE)
60 # use the original function to repeat the operation
61 func(path)
62
63
64 def display_path(path):
65 """Gives the display value for a given path, making it relative to cwd
66 if possible."""
67 path = os.path.normcase(os.path.abspath(path))
68 if path.startswith(os.getcwd() + os.path.sep):
69 path = '.' + path[len(os.getcwd()):]
70 return path
71
72
73 def backup_dir(dir, ext='.bak'):
74 """Figure out the name of a directory to back up the given dir to
75 (adding .bak, .bak2, etc)"""
76 n = 1
77 extension = ext
78 while os.path.exists(dir + extension):
79 n += 1
80 extension = ext + str(n)
81 return dir + extension
82
83
84 def find_command(cmd, paths=None, pathext=None):
85 """Searches the PATH for the given command and returns its path"""
86 if paths is None:
87 paths = os.environ.get('PATH', '').split(os.pathsep)
88 if isinstance(paths, string_types):
89 paths = [paths]
90 # check if there are funny path extensions for executables, e.g. Windows
91 if pathext is None:
92 pathext = get_pathext()
93 pathext = [ext for ext in pathext.lower().split(os.pathsep) if len(ext)]
94 # don't use extensions if the command ends with one of them
95 if os.path.splitext(cmd)[1].lower() in pathext:
96 pathext = ['']
97 # check if we find the command on PATH
98 for path in paths:
99 # try without extension first
100 cmd_path = os.path.join(path, cmd)
101 for ext in pathext:
102 # then including the extension
103 cmd_path_ext = cmd_path + ext
104 if os.path.isfile(cmd_path_ext):
105 return cmd_path_ext
106 if os.path.isfile(cmd_path):
107 return cmd_path
108 raise BadCommand('Cannot find command %r' % cmd)
109
110
111 def get_pathext(default_pathext=None):
112 """Returns the path extensions from environment or a default"""
113 if default_pathext is None:
114 default_pathext = os.pathsep.join(['.COM', '.EXE', '.BAT', '.CMD'])
115 pathext = os.environ.get('PATHEXT', default_pathext)
116 return pathext
117
118
119 def ask_path_exists(message, options):
120 for action in os.environ.get('PIP_EXISTS_ACTION', ''):
121 if action in options:
122 return action
123 return ask(message, options)
124
125
126 def ask(message, options):
127 """Ask the message interactively, with the given possible responses"""
128 while 1:
129 if os.environ.get('PIP_NO_INPUT'):
130 raise Exception('No input was expected ($PIP_NO_INPUT set); question: %s' % message)
131 response = raw_input(message)
132 response = response.strip().lower()
133 if response not in options:
134 print('Your response (%r) was not one of the expected responses: %s' % (
135 response, ', '.join(options)))
136 else:
137 return response
138
139
140 class _Inf(object):
141 """I am bigger than everything!"""
142
143 def __eq__(self, other):
144 if self is other:
145 return True
146 else:
147 return False
148
149 def __ne__(self, other):
150 return not self.__eq__(other)
151
152 def __lt__(self, other):
153 return False
154
155 def __le__(self, other):
156 return False
157
158 def __gt__(self, other):
159 return True
160
161 def __ge__(self, other):
162 return True
163
164 def __repr__(self):
165 return 'Inf'
166
167
168 Inf = _Inf() #this object is not currently used as a sortable in our code
169 del _Inf
170
171
172 _normalize_re = re.compile(r'[^a-z]', re.I)
173
174
175 def normalize_name(name):
176 return _normalize_re.sub('-', name.lower())
177
178
179 def format_size(bytes):
180 if bytes > 1000*1000:
181 return '%.1fMB' % (bytes/1000.0/1000)
182 elif bytes > 10*1000:
183 return '%ikB' % (bytes/1000)
184 elif bytes > 1000:
185 return '%.1fkB' % (bytes/1000.0)
186 else:
187 return '%ibytes' % bytes
188
189
190 def is_installable_dir(path):
191 """Return True if `path` is a directory containing a setup.py file."""
192 if not os.path.isdir(path):
193 return False
194 setup_py = os.path.join(path, 'setup.py')
195 if os.path.isfile(setup_py):
196 return True
197 return False
198
199
200 def is_svn_page(html):
201 """Returns true if the page appears to be the index page of an svn repository"""
202 return (re.search(r'<title>[^<]*Revision \d+:', html)
203 and re.search(r'Powered by (?:<a[^>]*?>)?Subversion', html, re.I))
204
205
206 def file_contents(filename):
207 fp = open(filename, 'rb')
208 try:
209 return fp.read().decode('utf-8')
210 finally:
211 fp.close()
212
213
214 def split_leading_dir(path):
215 path = str(path)
216 path = path.lstrip('/').lstrip('\\')
217 if '/' in path and (('\\' in path and path.find('/') < path.find('\\'))
218 or '\\' not in path):
219 return path.split('/', 1)
220 elif '\\' in path:
221 return path.split('\\', 1)
222 else:
223 return path, ''
224
225
226 def has_leading_dir(paths):
227 """Returns true if all the paths have the same leading path name
228 (i.e., everything is in one subdirectory in an archive)"""
229 common_prefix = None
230 for path in paths:
231 prefix, rest = split_leading_dir(path)
232 if not prefix:
233 return False
234 elif common_prefix is None:
235 common_prefix = prefix
236 elif prefix != common_prefix:
237 return False
238 return True
239
240
241 def make_path_relative(path, rel_to):
242 """
243 Make a filename relative, where the filename path is made relative
244 to rel_to
245
246 >>> make_path_relative('/usr/share/something/a-file.pth',
247 ... '/usr/share/another-place/src/Directory')
248 '../../../something/a-file.pth'
249 >>> make_path_relative('/usr/share/something/a-file.pth',
250 ... '/home/user/src/Directory')
251 '../../../usr/share/something/a-file.pth'
252 >>> make_path_relative('/usr/share/a-file.pth', '/usr/share/')
253 'a-file.pth'
254 """
255 path_filename = os.path.basename(path)
256 path = os.path.dirname(path)
257 path = os.path.normpath(os.path.abspath(path))
258 rel_to = os.path.normpath(os.path.abspath(rel_to))
259 path_parts = path.strip(os.path.sep).split(os.path.sep)
260 rel_to_parts = rel_to.strip(os.path.sep).split(os.path.sep)
261 while path_parts and rel_to_parts and path_parts[0] == rel_to_parts[0]:
262 path_parts.pop(0)
263 rel_to_parts.pop(0)
264 full_parts = ['..']*len(rel_to_parts) + path_parts + [path_filename]
265 if full_parts == ['']:
266 return '.' + os.path.sep
267 return os.path.sep.join(full_parts)
268
269
270 def normalize_path(path):
271 """
272 Convert a path to its canonical, case-normalized, absolute version.
273
274 """
275 return os.path.normcase(os.path.realpath(path))
276
277
278 def splitext(path):
279 """Like os.path.splitext, but take off .tar too"""
280 base, ext = posixpath.splitext(path)
281 if base.lower().endswith('.tar'):
282 ext = base[-4:] + ext
283 base = base[:-4]
284 return base, ext
285
286
287 def renames(old, new):
288 """Like os.renames(), but handles renaming across devices."""
289 # Implementation borrowed from os.renames().
290 head, tail = os.path.split(new)
291 if head and tail and not os.path.exists(head):
292 os.makedirs(head)
293
294 shutil.move(old, new)
295
296 head, tail = os.path.split(old)
297 if head and tail:
298 try:
299 os.removedirs(head)
300 except OSError:
301 pass
302
303
304 def is_local(path):
305 """
306 Return True if path is within sys.prefix, if we're running in a virtualenv.
307
308 If we're not in a virtualenv, all paths are considered "local."
309
310 """
311 if not running_under_virtualenv():
312 return True
313 return normalize_path(path).startswith(normalize_path(sys.prefix))
314
315
316 def dist_is_local(dist):
317 """
318 Return True if given Distribution object is installed locally
319 (i.e. within current virtualenv).
320
321 Always True if we're not in a virtualenv.
322
323 """
324 return is_local(dist_location(dist))
325
326
327 def dist_in_usersite(dist):
328 """
329 Return True if given Distribution is installed in user site.
330 """
331 if user_site:
332 return normalize_path(dist_location(dist)).startswith(normalize_path(user_site))
333 else:
334 return False
335
336 def dist_in_site_packages(dist):
337 """
338 Return True if given Distribution is installed in distutils.sysconfig.get_python_lib().
339 """
340 return normalize_path(dist_location(dist)).startswith(normalize_path(site_packages))
341
342
343 def dist_is_editable(dist):
344 """Is distribution an editable install?"""
345 #TODO: factor out determining editableness out of FrozenRequirement
346 from pip import FrozenRequirement
347 req = FrozenRequirement.from_dist(dist, [])
348 return req.editable
349
350 def get_installed_distributions(local_only=True,
351 skip=('setuptools', 'pip', 'python'),
352 include_editables=True,
353 editables_only=False):
354 """
355 Return a list of installed Distribution objects.
356
357 If ``local_only`` is True (default), only return installations
358 local to the current virtualenv, if in a virtualenv.
359
360 ``skip`` argument is an iterable of lower-case project names to
361 ignore; defaults to ('setuptools', 'pip', 'python'). [FIXME also
362 skip virtualenv?]
363
364 If ``editables`` is False, don't report editables.
365
366 If ``editables_only`` is True , only report editables.
367
368 """
369 if local_only:
370 local_test = dist_is_local
371 else:
372 local_test = lambda d: True
373
374 if include_editables:
375 editable_test = lambda d: True
376 else:
377 editable_test = lambda d: not dist_is_editable(d)
378
379 if editables_only:
380 editables_only_test = lambda d: dist_is_editable(d)
381 else:
382 editables_only_test = lambda d: True
383
384 return [d for d in pkg_resources.working_set
385 if local_test(d)
386 and d.key not in skip
387 and editable_test(d)
388 and editables_only_test(d)
389 ]
390
391
392 def egg_link_path(dist):
393 """
394 Return the path for the .egg-link file if it exists, otherwise, None.
395
396 There's 3 scenarios:
397 1) not in a virtualenv
398 try to find in site.USER_SITE, then site_packages
399 2) in a no-global virtualenv
400 try to find in site_packages
401 3) in a yes-global virtualenv
402 try to find in site_packages, then site.USER_SITE (don't look in global location)
403
404 For #1 and #3, there could be odd cases, where there's an egg-link in 2 locations.
405 This method will just return the first one found.
406 """
407 sites = []
408 if running_under_virtualenv():
409 if virtualenv_no_global():
410 sites.append(site_packages)
411 else:
412 sites.append(site_packages)
413 if user_site:
414 sites.append(user_site)
415 else:
416 if user_site:
417 sites.append(user_site)
418 sites.append(site_packages)
419
420 for site in sites:
421 egglink = os.path.join(site, dist.project_name) + '.egg-link'
422 if os.path.isfile(egglink):
423 return egglink
424
425
426 def dist_location(dist):
427 """
428 Get the site-packages location of this distribution. Generally
429 this is dist.location, except in the case of develop-installed
430 packages, where dist.location is the source code location, and we
431 want to know where the egg-link file is.
432
433 """
434 egg_link = egg_link_path(dist)
435 if egg_link:
436 return egg_link
437 return dist.location
438
439
440 def get_terminal_size():
441 """Returns a tuple (x, y) representing the width(x) and the height(x)
442 in characters of the terminal window."""
443 def ioctl_GWINSZ(fd):
444 try:
445 import fcntl
446 import termios
447 import struct
448 cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ,
449 '1234'))
450 except:
451 return None
452 if cr == (0, 0):
453 return None
454 if cr == (0, 0):
455 return None
456 return cr
457 cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)
458 if not cr:
459 try:
460 fd = os.open(os.ctermid(), os.O_RDONLY)
461 cr = ioctl_GWINSZ(fd)
462 os.close(fd)
463 except:
464 pass
465 if not cr:
466 cr = (os.environ.get('LINES', 25), os.environ.get('COLUMNS', 80))
467 return int(cr[1]), int(cr[0])
468
469
470 def current_umask():
471 """Get the current umask which involves having to set it temporarily."""
472 mask = os.umask(0)
473 os.umask(mask)
474 return mask
475
476
477 def unzip_file(filename, location, flatten=True):
478 """
479 Unzip the file (with path `filename`) to the destination `location`. All
480 files are written based on system defaults and umask (i.e. permissions are
481 not preserved), except that regular file members with any execute
482 permissions (user, group, or world) have "chmod +x" applied after being
483 written. Note that for windows, any execute changes using os.chmod are
484 no-ops per the python docs.
485 """
486 if not os.path.exists(location):
487 os.makedirs(location)
488 zipfp = open(filename, 'rb')
489 try:
490 zip = zipfile.ZipFile(zipfp)
491 leading = has_leading_dir(zip.namelist()) and flatten
492 for info in zip.infolist():
493 name = info.filename
494 data = zip.read(name)
495 fn = name
496 if leading:
497 fn = split_leading_dir(name)[1]
498 fn = os.path.join(location, fn)
499 dir = os.path.dirname(fn)
500 if not os.path.exists(dir):
501 os.makedirs(dir)
502 if fn.endswith('/') or fn.endswith('\\'):
503 # A directory
504 if not os.path.exists(fn):
505 os.makedirs(fn)
506 else:
507 fp = open(fn, 'wb')
508 try:
509 fp.write(data)
510 finally:
511 fp.close()
512 mode = info.external_attr >> 16
513 # if mode and regular file and any execute permissions for user/group/world?
514 if mode and stat.S_ISREG(mode) and mode & 0o111:
515 # make dest file have execute for user/group/world (chmod +x)
516 # no-op on windows per python docs
517 os.chmod(fn, (0o777-current_umask() | 0o111))
518 finally:
519 zipfp.close()
520
521
522 def untar_file(filename, location):
523 """
524 Untar the file (with path `filename`) to the destination `location`.
525 All files are written based on system defaults and umask (i.e. permissions
526 are not preserved), except that regular file members with any execute
527 permissions (user, group, or world) have "chmod +x" applied after being
528 written. Note that for windows, any execute changes using os.chmod are
529 no-ops per the python docs.
530 """
531 if not os.path.exists(location):
532 os.makedirs(location)
533 if filename.lower().endswith('.gz') or filename.lower().endswith('.tgz'):
534 mode = 'r:gz'
535 elif filename.lower().endswith('.bz2') or filename.lower().endswith('.tbz'):
536 mode = 'r:bz2'
537 elif filename.lower().endswith('.tar'):
538 mode = 'r'
539 else:
540 logger.warn('Cannot determine compression type for file %s' % filename)
541 mode = 'r:*'
542 tar = tarfile.open(filename, mode)
543 try:
544 # note: python<=2.5 doesnt seem to know about pax headers, filter them
545 leading = has_leading_dir([
546 member.name for member in tar.getmembers()
547 if member.name != 'pax_global_header'
548 ])
549 for member in tar.getmembers():
550 fn = member.name
551 if fn == 'pax_global_header':
552 continue
553 if leading:
554 fn = split_leading_dir(fn)[1]
555 path = os.path.join(location, fn)
556 if member.isdir():
557 if not os.path.exists(path):
558 os.makedirs(path)
559 elif member.issym():
560 try:
561 tar._extract_member(member, path)
562 except:
563 e = sys.exc_info()[1]
564 # Some corrupt tar files seem to produce this
565 # (specifically bad symlinks)
566 logger.warn(
567 'In the tar file %s the member %s is invalid: %s'
568 % (filename, member.name, e))
569 continue
570 else:
571 try:
572 fp = tar.extractfile(member)
573 except (KeyError, AttributeError):
574 e = sys.exc_info()[1]
575 # Some corrupt tar files seem to produce this
576 # (specifically bad symlinks)
577 logger.warn(
578 'In the tar file %s the member %s is invalid: %s'
579 % (filename, member.name, e))
580 continue
581 if not os.path.exists(os.path.dirname(path)):
582 os.makedirs(os.path.dirname(path))
583 destfp = open(path, 'wb')
584 try:
585 shutil.copyfileobj(fp, destfp)
586 finally:
587 destfp.close()
588 fp.close()
589 # member have any execute permissions for user/group/world?
590 if member.mode & 0o111:
591 # make dest file have execute for user/group/world
592 # no-op on windows per python docs
593 os.chmod(path, (0o777-current_umask() | 0o111))
594 finally:
595 tar.close()
596
597
598 def create_download_cache_folder(folder):
599 logger.indent -= 2
600 logger.notify('Creating supposed download cache at %s' % folder)
601 logger.indent += 2
602 os.makedirs(folder)
603
604
605 def cache_download(target_file, temp_location, content_type):
606 logger.notify('Storing download in cache at %s' % display_path(target_file))
607 shutil.copyfile(temp_location, target_file)
608 fp = open(target_file+'.content-type', 'w')
609 fp.write(content_type)
610 fp.close()
611
612
613 def unpack_file(filename, location, content_type, link):
614 filename = os.path.realpath(filename)
615 if (content_type == 'application/zip'
616 or filename.endswith('.zip')
617 or filename.endswith('.pybundle')
618 or filename.endswith('.whl')
619 or zipfile.is_zipfile(filename)):
620 unzip_file(filename, location, flatten=not filename.endswith(('.pybundle', '.whl')))
621 elif (content_type == 'application/x-gzip'
622 or tarfile.is_tarfile(filename)
623 or splitext(filename)[1].lower() in ('.tar', '.tar.gz', '.tar.bz2', '.tgz', '.tbz')):
624 untar_file(filename, location)
625 elif (content_type and content_type.startswith('text/html')
626 and is_svn_page(file_contents(filename))):
627 # We don't really care about this
628 from pip.vcs.subversion import Subversion
629 Subversion('svn+' + link.url).unpack(location)
630 else:
631 ## FIXME: handle?
632 ## FIXME: magic signatures?
633 logger.fatal('Cannot unpack file %s (downloaded from %s, content-type: %s); cannot detect archive format'
634 % (filename, location, content_type))
635 raise InstallationError('Cannot determine archive format of %s' % location)
636
637
638 def call_subprocess(cmd, show_stdout=True,
639 filter_stdout=None, cwd=None,
640 raise_on_returncode=True,
641 command_level=logger.DEBUG, command_desc=None,
642 extra_environ=None):
643 if command_desc is None:
644 cmd_parts = []
645 for part in cmd:
646 if ' ' in part or '\n' in part or '"' in part or "'" in part:
647 part = '"%s"' % part.replace('"', '\\"')
648 cmd_parts.append(part)
649 command_desc = ' '.join(cmd_parts)
650 if show_stdout:
651 stdout = None
652 else:
653 stdout = subprocess.PIPE
654 logger.log(command_level, "Running command %s" % command_desc)
655 env = os.environ.copy()
656 if extra_environ:
657 env.update(extra_environ)
658 try:
659 proc = subprocess.Popen(
660 cmd, stderr=subprocess.STDOUT, stdin=None, stdout=stdout,
661 cwd=cwd, env=env)
662 except Exception:
663 e = sys.exc_info()[1]
664 logger.fatal(
665 "Error %s while executing command %s" % (e, command_desc))
666 raise
667 all_output = []
668 if stdout is not None:
669 stdout = proc.stdout
670 while 1:
671 line = console_to_str(stdout.readline())
672 if not line:
673 break
674 line = line.rstrip()
675 all_output.append(line + '\n')
676 if filter_stdout:
677 level = filter_stdout(line)
678 if isinstance(level, tuple):
679 level, line = level
680 logger.log(level, line)
681 if not logger.stdout_level_matches(level):
682 logger.show_progress()
683 else:
684 logger.info(line)
685 else:
686 returned_stdout, returned_stderr = proc.communicate()
687 all_output = [returned_stdout or '']
688 proc.wait()
689 if proc.returncode:
690 if raise_on_returncode:
691 if all_output:
692 logger.notify('Complete output from command %s:' % command_desc)
693 logger.notify('\n'.join(all_output) + '\n----------------------------------------')
694 raise InstallationError(
695 "Command %s failed with error code %s in %s"
696 % (command_desc, proc.returncode, cwd))
697 else:
698 logger.warn(
699 "Command %s had error code %s in %s"
700 % (command_desc, proc.returncode, cwd))
701 if stdout is not None:
702 return ''.join(all_output)
703
704
705 def is_prerelease(vers):
706 """
707 Attempt to determine if this is a pre-release using PEP386/PEP426 rules.
708
709 Will return True if it is a pre-release and False if not. Versions are
710 assumed to be a pre-release if they cannot be parsed.
711 """
712 normalized = version.suggest_normalized_version(vers)
713
714 if normalized is None:
715 # Cannot normalize, assume it is a pre-release
716 return True
717
718 parsed = version.normalized_key(normalized)
719 return any([any([y in set(["a", "b", "c", "rc", "dev"]) for y in x]) for x in parsed])
720
[end of pip/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pypa/pip
base_commit: 2a6cf632e89b759f8b663785fe44e017e6fbbbcb
problem_statement:
pip install --use-wheel doesn't respect --user for installing scripts
For example:
```
habnabit@bach:~$ pip install --use-wheel -i [redacted] -r requirements.txt --user
Downloading/unpacking zope.interface==4.0.5 (from -r requirements.txt (line 2))
Downloading zope.interface-4.0.5-cp27-none-linux_x86_64.whl (144kB): 144kB downloaded
Downloading/unpacking Twisted==12.3.0 (from -r requirements.txt (line 3))
Downloading Twisted-12.3.0-cp27-none-linux_x86_64.whl (2.9MB): 2.9MB downloaded
Downloading/unpacking py-bcrypt==0.3 (from -r requirements.txt (line 4))
Downloading py_bcrypt-0.3-cp27-none-linux_x86_64.whl
Downloading/unpacking web.py==0.37 (from -r requirements.txt (line 5))
Downloading web.py-0.37-py27-none-any.whl (100kB): 100kB downloaded
Downloading/unpacking SQLAlchemy==0.7.10 (from -r requirements.txt (line 6))
Downloading SQLAlchemy-0.7.10-cp27-none-linux_x86_64.whl (727kB): 727kB downloaded
Downloading/unpacking psycopg2==2.5 (from -r requirements.txt (line 7))
Downloading psycopg2-2.5-cp27-none-linux_x86_64.whl (309kB): 309kB downloaded
Downloading/unpacking sanpera==0.1.1.dev1 (from -r requirements.txt (line 8))
Downloading sanpera-0.1.1.dev1.tar.gz (239kB): 239kB downloaded
Running setup.py egg_info for package sanpera
Package ImageMagick was not found in the pkg-config search path.
Perhaps you should add the directory containing `ImageMagick.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ImageMagick' found
Package ImageMagick was not found in the pkg-config search path.
Perhaps you should add the directory containing `ImageMagick.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ImageMagick' found
Downloading/unpacking anyjson==0.3.3 (from -r requirements.txt (line 9))
Downloading anyjson-0.3.3-py27-none-any.whl
Downloading/unpacking yajl==0.3.5 (from -r requirements.txt (line 10))
Downloading yajl-0.3.5-cp27-none-linux_x86_64.whl (56kB): 56kB downloaded
Requirement already satisfied (use --upgrade to upgrade): setuptools in ./.local/lib/python2.7/site-packages/setuptools-1.0-py2.7.egg (from zope.interface==4.0.5->-r requirements.txt (line 2))
Installing collected packages: zope.interface, Twisted, py-bcrypt, web.py, SQLAlchemy, psycopg2, sanpera, anyjson, yajl
Cleaning up...
Exception:
Traceback (most recent call last):
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/basecommand.py", line 134, in main
status = self.run(options, args)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/commands/install.py", line 241, in run
requirement_set.install(install_options, global_options, root=options.root_path)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py", line 1298, in install
requirement.install(install_options, global_options, *args, **kwargs)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py", line 595, in install
self.move_wheel_files(self.source_dir)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py", line 815, in move_wheel_files
move_wheel_files(self.name, self.req, wheeldir, user=self.use_user_site, home=self.target_dir)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/wheel.py", line 184, in move_wheel_files
clobber(source, dest, False, fixer=fixer)
File "/home/habnabit/.local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/wheel.py", line 166, in clobber
shutil.move(srcfile, destfile)
File "/usr/lib/python2.7/shutil.py", line 301, in move
copy2(src, real_dst)
File "/usr/lib/python2.7/shutil.py", line 130, in copy2
copyfile(src, dst)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 13] Permission denied: '/usr/bin/trial'
```
Why is this trying to write to `/usr/bin` when `--user` is given?
hints_text:
It's probably either a bug in https://github.com/pypa/pip/blob/develop/pip/locations.py#L131 or the user flag isn't making it through to that point.
this line is throwing away the proper location for scripts.
https://github.com/pypa/pip/blob/develop/pip/locations.py#L147
the `bin_py` logic up above can probably be replaced with the newer `distutils_scheme` method.
btw, there is a `--user` test for wheels, just wasn't confirming scripts.
fyi, will get on this tomorrow.
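A minimal sketch (an editor's addition) of what the hint above suggests, using the `distutils_scheme(dist_name, user=False, home=None)` helper that the patch below relies on; the distribution name here is just a placeholder:

```python
# Sketch only: ask the distutils-based scheme for the install locations, so the
# 'scripts' entry can reflect a --user install (e.g. ~/.local/bin) instead of
# the module-level bin_py constant (/usr/bin for a system Python).
from pip.locations import distutils_scheme

scheme = distutils_scheme("Twisted", user=True)
print(scheme["scripts"])  # expected: a per-user bin directory, not /usr/bin
```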
created_at: 2013-08-31T15:23:38Z
<patch>
diff --git a/pip/locations.py b/pip/locations.py
--- a/pip/locations.py
+++ b/pip/locations.py
@@ -137,15 +137,15 @@ def distutils_scheme(dist_name, user=False, home=None):
scheme = {}
d = Distribution({'name': dist_name})
i = install(d)
+ # NOTE: setting user or home has the side-effect of creating the home dir or
+ # user base for installations during finalize_options()
+ # ideally, we'd prefer a scheme class that has no side-effects.
i.user = user or i.user
i.home = home or i.home
i.finalize_options()
for key in SCHEME_KEYS:
scheme[key] = getattr(i, 'install_'+key)
- #be backward-compatible with what pip has always done?
- scheme['scripts'] = bin_py
-
if running_under_virtualenv():
scheme['headers'] = os.path.join(sys.prefix,
'include',
</patch>
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: numpy__numpy-13097
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ndarray.T does not return self as doc says
`x.T` seems to return *a view of* `x` if `x.ndim < 2`, while the documentation (https://www.numpy.org/devdocs/reference/generated/numpy.ndarray.T.html) says it returns "self". It is also not clear to me what the difference is between `x.T` and `x.transpose()`, other than performance.
### Reproducing code example:
```python
import numpy as np
x = np.array([3, 4])
print(x.T is x) # => False
print(x.T.base is x) # => True
```
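A quick follow-up check (an editor's addition, not from the original report) showing that `x.transpose()` behaves the same way here, returning a fresh zero-copy view rather than `self`:

```python
import numpy as np

x = np.array([3, 4])
# .T is documented as equivalent to self.transpose(), so both give a new view:
print(x.transpose() is x)       # => False
print(x.transpose().base is x)  # => True
```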
### Numpy/Python version information:
I checked two sets of versions:
```
>>> print(numpy.__version__, sys.version)
1.16.1 3.6.5 (default, Jun 21 2018, 17:25:32)
[GCC 5.4.0 20160609]
```
and
```
>>> print(numpy.__version__, sys.version)
('1.9.3', '2.7.15 (default, Nov 7 2018, 12:21:05) \n[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.10.44.4)]')
```
</issue>
<code>
[start of README.md]
1 # <img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60">
2
3 [](
4 https://travis-ci.org/numpy/numpy)
5 [](
6 https://ci.appveyor.com/project/charris/numpy)
7 [](
8 https://dev.azure.com/numpy/numpy/_build/latest?definitionId=5)
9 [](
10 https://codecov.io/gh/numpy/numpy)
11
12 NumPy is the fundamental package needed for scientific computing with Python.
13
14 - **Website (including documentation):** https://www.numpy.org
15 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion
16 - **Source:** https://github.com/numpy/numpy
17 - **Bug reports:** https://github.com/numpy/numpy/issues
18 - **Contributing:** https://www.numpy.org/devdocs/dev/index.html
19
20 It provides:
21
22 - a powerful N-dimensional array object
23 - sophisticated (broadcasting) functions
24 - tools for integrating C/C++ and Fortran code
25 - useful linear algebra, Fourier transform, and random number capabilities
26
27 Testing:
28
29 - NumPy versions ≥ 1.15 require `pytest`
30 - NumPy versions < 1.15 require `nose`
31
32 Tests can then be run after installation with:
33
34 python -c 'import numpy; numpy.test()'
35
36 [](https://numfocus.org)
37
[end of README.md]
[start of numpy/__init__.py]
1 """
2 NumPy
3 =====
4
5 Provides
6 1. An array object of arbitrary homogeneous items
7 2. Fast mathematical operations over arrays
8 3. Linear Algebra, Fourier Transforms, Random Number Generation
9
10 How to use the documentation
11 ----------------------------
12 Documentation is available in two forms: docstrings provided
13 with the code, and a loose standing reference guide, available from
14 `the NumPy homepage <https://www.scipy.org>`_.
15
16 We recommend exploring the docstrings using
17 `IPython <https://ipython.org>`_, an advanced Python shell with
18 TAB-completion and introspection capabilities. See below for further
19 instructions.
20
21 The docstring examples assume that `numpy` has been imported as `np`::
22
23 >>> import numpy as np
24
25 Code snippets are indicated by three greater-than signs::
26
27 >>> x = 42
28 >>> x = x + 1
29
30 Use the built-in ``help`` function to view a function's docstring::
31
32 >>> help(np.sort)
33 ... # doctest: +SKIP
34
35 For some objects, ``np.info(obj)`` may provide additional help. This is
36 particularly true if you see the line "Help on ufunc object:" at the top
37 of the help() page. Ufuncs are implemented in C, not Python, for speed.
38 The native Python help() does not know how to view their help, but our
39 np.info() function does.
40
41 To search for documents containing a keyword, do::
42
43 >>> np.lookfor('keyword')
44 ... # doctest: +SKIP
45
46 General-purpose documents like a glossary and help on the basic concepts
47 of numpy are available under the ``doc`` sub-module::
48
49 >>> from numpy import doc
50 >>> help(doc)
51 ... # doctest: +SKIP
52
53 Available subpackages
54 ---------------------
55 doc
56 Topical documentation on broadcasting, indexing, etc.
57 lib
58 Basic functions used by several sub-packages.
59 random
60 Core Random Tools
61 linalg
62 Core Linear Algebra Tools
63 fft
64 Core FFT routines
65 polynomial
66 Polynomial tools
67 testing
68 NumPy testing tools
69 f2py
70 Fortran to Python Interface Generator.
71 distutils
72 Enhancements to distutils with support for
73 Fortran compilers support and more.
74
75 Utilities
76 ---------
77 test
78 Run numpy unittests
79 show_config
80 Show numpy build configuration
81 dual
82 Overwrite certain functions with high-performance Scipy tools
83 matlib
84 Make everything matrices.
85 __version__
86 NumPy version string
87
88 Viewing documentation using IPython
89 -----------------------------------
90 Start IPython with the NumPy profile (``ipython -p numpy``), which will
91 import `numpy` under the alias `np`. Then, use the ``cpaste`` command to
92 paste examples into the shell. To see which functions are available in
93 `numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use
94 ``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow
95 down the list. To view the docstring for a function, use
96 ``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view
97 the source code).
98
99 Copies vs. in-place operation
100 -----------------------------
101 Most of the functions in `numpy` return a copy of the array argument
102 (e.g., `np.sort`). In-place versions of these functions are often
103 available as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.
104 Exceptions to this rule are documented.
105
106 """
107 from __future__ import division, absolute_import, print_function
108
109 import sys
110 import warnings
111
112 from ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning
113 from ._globals import _NoValue
114
115 # We first need to detect if we're being called as part of the numpy setup
116 # procedure itself in a reliable manner.
117 try:
118 __NUMPY_SETUP__
119 except NameError:
120 __NUMPY_SETUP__ = False
121
122 if __NUMPY_SETUP__:
123 sys.stderr.write('Running from numpy source directory.\n')
124 else:
125 try:
126 from numpy.__config__ import show as show_config
127 except ImportError:
128 msg = """Error importing numpy: you should not try to import numpy from
129 its source directory; please exit the numpy source tree, and relaunch
130 your python interpreter from there."""
131 raise ImportError(msg)
132
133 from .version import git_revision as __git_revision__
134 from .version import version as __version__
135
136 __all__ = ['ModuleDeprecationWarning',
137 'VisibleDeprecationWarning']
138
139 # Allow distributors to run custom init code
140 from . import _distributor_init
141
142 from . import core
143 from .core import *
144 from . import compat
145 from . import lib
146 from .lib import *
147 from . import linalg
148 from . import fft
149 from . import polynomial
150 from . import random
151 from . import ctypeslib
152 from . import ma
153 from . import matrixlib as _mat
154 from .matrixlib import *
155 from .compat import long
156
157 # Make these accessible from numpy name-space
158 # but not imported in from numpy import *
159 if sys.version_info[0] >= 3:
160 from builtins import bool, int, float, complex, object, str
161 unicode = str
162 else:
163 from __builtin__ import bool, int, float, complex, object, unicode, str
164
165 from .core import round, abs, max, min
166 # now that numpy modules are imported, can initialize limits
167 core.getlimits._register_known_types()
168
169 __all__.extend(['__version__', 'show_config'])
170 __all__.extend(core.__all__)
171 __all__.extend(_mat.__all__)
172 __all__.extend(lib.__all__)
173 __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
174
175 # Filter out Cython harmless warnings
176 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
177 warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
178 warnings.filterwarnings("ignore", message="numpy.ndarray size changed")
179
180 # oldnumeric and numarray were removed in 1.9. In case some packages import
181 # but do not use them, we define them here for backward compatibility.
182 oldnumeric = 'removed'
183 numarray = 'removed'
184
185 # We don't actually use this ourselves anymore, but I'm not 100% sure that
186 # no-one else in the world is using it (though I hope not)
187 from .testing import Tester
188
189 # Pytest testing
190 from numpy._pytesttester import PytestTester
191 test = PytestTester(__name__)
192 del PytestTester
193
194
195 def _sanity_check():
196 """
197 Quick sanity checks for common bugs caused by environment.
198 There are some cases e.g. with wrong BLAS ABI that cause wrong
199 results under specific runtime conditions that are not necessarily
200 achieved during test suite runs, and it is useful to catch those early.
201
202 See https://github.com/numpy/numpy/issues/8577 and other
203 similar bug reports.
204
205 """
206 try:
207 x = ones(2, dtype=float32)
208 if not abs(x.dot(x) - 2.0) < 1e-5:
209 raise AssertionError()
210 except AssertionError:
211 msg = ("The current Numpy installation ({!r}) fails to "
212 "pass simple sanity checks. This can be caused for example "
213 "by incorrect BLAS library being linked in, or by mixing "
214 "package managers (pip, conda, apt, ...). Search closed "
215 "numpy issues for similar problems.")
216 raise RuntimeError(msg.format(__file__))
217
218 _sanity_check()
219 del _sanity_check
220
[end of numpy/__init__.py]
[start of numpy/doc/structured_arrays.py]
1 """
2 =================
3 Structured Arrays
4 =================
5
6 Introduction
7 ============
8
9 Structured arrays are ndarrays whose datatype is a composition of simpler
10 datatypes organized as a sequence of named :term:`fields <field>`. For example,
11 ::
12
13 >>> x = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
14 ... dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])
15 >>> x
16 array([('Rex', 9, 81.), ('Fido', 3, 27.)],
17 dtype=[('name', 'U10'), ('age', '<i4'), ('weight', '<f4')])
18
19 Here ``x`` is a one-dimensional array of length two whose datatype is a
20 structure with three fields: 1. A string of length 10 or less named 'name', 2.
21 a 32-bit integer named 'age', and 3. a 32-bit float named 'weight'.
22
23 If you index ``x`` at position 1 you get a structure::
24
25 >>> x[1]
26 ('Fido', 3, 27.0)
27
28 You can access and modify individual fields of a structured array by indexing
29 with the field name::
30
31 >>> x['age']
32 array([9, 3], dtype=int32)
33 >>> x['age'] = 5
34 >>> x
35 array([('Rex', 5, 81.), ('Fido', 5, 27.)],
36 dtype=[('name', 'U10'), ('age', '<i4'), ('weight', '<f4')])
37
38 Structured datatypes are designed to be able to mimic 'structs' in the C
39 language, and share a similar memory layout. They are meant for interfacing with
40 C code and for low-level manipulation of structured buffers, for example for
41 interpreting binary blobs. For these purposes they support specialized features
42 such as subarrays, nested datatypes, and unions, and allow control over the
43 memory layout of the structure.
44
45 Users looking to manipulate tabular data, such as stored in csv files, may find
46 other pydata projects more suitable, such as xarray, pandas, or DataArray.
47 These provide a high-level interface for tabular data analysis and are better
48 optimized for that use. For instance, the C-struct-like memory layout of
49 structured arrays in numpy can lead to poor cache behavior in comparison.
50
51 .. _defining-structured-types:
52
53 Structured Datatypes
54 ====================
55
56 A structured datatype can be thought of as a sequence of bytes of a certain
57 length (the structure's :term:`itemsize`) which is interpreted as a collection
58 of fields. Each field has a name, a datatype, and a byte offset within the
59 structure. The datatype of a field may be any numpy datatype including other
60 structured datatypes, and it may also be a :term:`subarray data type` which
61 behaves like an ndarray of a specified shape. The offsets of the fields are
62 arbitrary, and fields may even overlap. These offsets are usually determined
63 automatically by numpy, but can also be specified.
64
65 Structured Datatype Creation
66 ----------------------------
67
68 Structured datatypes may be created using the function :func:`numpy.dtype`.
69 There are 4 alternative forms of specification which vary in flexibility and
70 conciseness. These are further documented in the
71 :ref:`Data Type Objects <arrays.dtypes.constructing>` reference page, and in
72 summary they are:
73
74 1. A list of tuples, one tuple per field
75
76 Each tuple has the form ``(fieldname, datatype, shape)`` where shape is
77 optional. ``fieldname`` is a string (or tuple if titles are used, see
78 :ref:`Field Titles <titles>` below), ``datatype`` may be any object
79 convertible to a datatype, and ``shape`` is a tuple of integers specifying
80 subarray shape.
81
82 >>> np.dtype([('x', 'f4'), ('y', np.float32), ('z', 'f4', (2, 2))])
83 dtype([('x', '<f4'), ('y', '<f4'), ('z', '<f4', (2, 2))])
84
85 If ``fieldname`` is the empty string ``''``, the field will be given a
86 default name of the form ``f#``, where ``#`` is the integer index of the
87 field, counting from 0 from the left::
88
89 >>> np.dtype([('x', 'f4'), ('', 'i4'), ('z', 'i8')])
90 dtype([('x', '<f4'), ('f1', '<i4'), ('z', '<i8')])
91
92 The byte offsets of the fields within the structure and the total
93 structure itemsize are determined automatically.
94
95 2. A string of comma-separated dtype specifications
96
97 In this shorthand notation any of the :ref:`string dtype specifications
98 <arrays.dtypes.constructing>` may be used in a string and separated by
99 commas. The itemsize and byte offsets of the fields are determined
100 automatically, and the field names are given the default names ``f0``,
101 ``f1``, etc. ::
102
103 >>> np.dtype('i8, f4, S3')
104 dtype([('f0', '<i8'), ('f1', '<f4'), ('f2', 'S3')])
105 >>> np.dtype('3int8, float32, (2, 3)float64')
106 dtype([('f0', 'i1', (3,)), ('f1', '<f4'), ('f2', '<f8', (2, 3))])
107
108 3. A dictionary of field parameter arrays
109
110 This is the most flexible form of specification since it allows control
111 over the byte-offsets of the fields and the itemsize of the structure.
112
113 The dictionary has two required keys, 'names' and 'formats', and four
114 optional keys, 'offsets', 'itemsize', 'aligned' and 'titles'. The values
115 for 'names' and 'formats' should respectively be a list of field names and
116 a list of dtype specifications, of the same length. The optional 'offsets'
117 value should be a list of integer byte-offsets, one for each field within
118 the structure. If 'offsets' is not given the offsets are determined
119 automatically. The optional 'itemsize' value should be an integer
120 describing the total size in bytes of the dtype, which must be large
121 enough to contain all the fields.
122 ::
123
124 >>> np.dtype({'names': ['col1', 'col2'], 'formats': ['i4', 'f4']})
125 dtype([('col1', '<i4'), ('col2', '<f4')])
126 >>> np.dtype({'names': ['col1', 'col2'],
127 ... 'formats': ['i4', 'f4'],
128 ... 'offsets': [0, 4],
129 ... 'itemsize': 12})
130 dtype({'names':['col1','col2'], 'formats':['<i4','<f4'], 'offsets':[0,4], 'itemsize':12})
131
132 Offsets may be chosen such that the fields overlap, though this will mean
133 that assigning to one field may clobber any overlapping field's data. As
134 an exception, fields of :class:`numpy.object` type cannot overlap with
135 other fields, because of the risk of clobbering the internal object
136 pointer and then dereferencing it.
137
138 The optional 'aligned' value can be set to ``True`` to make the automatic
139 offset computation use aligned offsets (see :ref:`offsets-and-alignment`),
140 as if the 'align' keyword argument of :func:`numpy.dtype` had been set to
141 True.
142
143 The optional 'titles' value should be a list of titles of the same length
144 as 'names', see :ref:`Field Titles <titles>` below.
145
146 4. A dictionary of field names
147
148 The use of this form of specification is discouraged, but documented here
149 because older numpy code may use it. The keys of the dictionary are the
150 field names and the values are tuples specifying type and offset::
151
152 >>> np.dtype({'col1': ('i1', 0), 'col2': ('f4', 1)})
153 dtype([('col1', 'i1'), ('col2', '<f4')])
154
155 This form is discouraged because Python dictionaries do not preserve order
156 in Python versions before Python 3.6, and the order of the fields in a
157 structured dtype has meaning. :ref:`Field Titles <titles>` may be
158 specified by using a 3-tuple, see below.
159
160 Manipulating and Displaying Structured Datatypes
161 ------------------------------------------------
162
163 The list of field names of a structured datatype can be found in the ``names``
164 attribute of the dtype object::
165
166 >>> d = np.dtype([('x', 'i8'), ('y', 'f4')])
167 >>> d.names
168 ('x', 'y')
169
170 The field names may be modified by assigning to the ``names`` attribute using a
171 sequence of strings of the same length.
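
For instance (a short sketch on a throw-away dtype, so the ``d`` used elsewhere
in this document is left untouched; the new names are arbitrary)::

    >>> d2 = np.dtype([('x', 'i8'), ('y', 'f4')])
    >>> d2.names = ['a', 'b']
    >>> d2.names
    ('a', 'b')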
172
173 The dtype object also has a dictionary-like attribute, ``fields``, whose keys
174 are the field names (and :ref:`Field Titles <titles>`, see below) and whose
175 values are tuples containing the dtype and byte offset of each field. ::
176
177 >>> d.fields
178 mappingproxy({'x': (dtype('int64'), 0), 'y': (dtype('float32'), 8)})
179
180 Both the ``names`` and ``fields`` attributes will equal ``None`` for
181 unstructured arrays. The recommended way to test if a dtype is structured is
182 with `if dt.names is not None` rather than `if dt.names`, to account for dtypes
183 with 0 fields.
184
185 The string representation of a structured datatype is shown in the "list of
186 tuples" form if possible, otherwise numpy falls back to using the more general
187 dictionary form.
188
189 .. _offsets-and-alignment:
190
191 Automatic Byte Offsets and Alignment
192 ------------------------------------
193
194 Numpy uses one of two methods to automatically determine the field byte offsets
195 and the overall itemsize of a structured datatype, depending on whether
196 ``align=True`` was specified as a keyword argument to :func:`numpy.dtype`.
197
198 By default (``align=False``), numpy will pack the fields together such that
199 each field starts at the byte offset the previous field ended, and the fields
200 are contiguous in memory. ::
201
202 >>> def print_offsets(d):
203 ... print("offsets:", [d.fields[name][1] for name in d.names])
204 ... print("itemsize:", d.itemsize)
205 >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2'))
206 offsets: [0, 1, 2, 6, 7, 15]
207 itemsize: 17
208
209 If ``align=True`` is set, numpy will pad the structure in the same way many C
210 compilers would pad a C-struct. Aligned structures can give a performance
211 improvement in some cases, at the cost of increased datatype size. Padding
212 bytes are inserted between fields such that each field's byte offset will be a
213 multiple of that field's alignment, which is usually equal to the field's size
214 in bytes for simple datatypes, see :c:member:`PyArray_Descr.alignment`. The
215 structure will also have trailing padding added so that its itemsize is a
216 multiple of the largest field's alignment. ::
217
218 >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2', align=True))
219 offsets: [0, 1, 4, 8, 16, 24]
220 itemsize: 32
221
222 Note that although almost all modern C compilers pad in this way by default,
223 padding in C structs is C-implementation-dependent so this memory layout is not
224 guaranteed to exactly match that of a corresponding struct in a C program. Some
225 work may be needed, either on the numpy side or the C side, to obtain exact
226 correspondence.
227
228 If offsets were specified using the optional ``offsets`` key in the
229 dictionary-based dtype specification, setting ``align=True`` will check that
230 each field's offset is a multiple of its size and that the itemsize is a
231 multiple of the largest field size, and raise an exception if not.
232
233 If the offsets of the fields and itemsize of a structured array satisfy the
234 alignment conditions, the array will have the ``ALIGNED`` :attr:`flag
235 <numpy.ndarray.flags>` set.
236
237 A convenience function :func:`numpy.lib.recfunctions.repack_fields` converts an
238 aligned dtype or array to a packed one and vice versa. It takes either a dtype
239 or structured ndarray as an argument, and returns a copy with fields re-packed,
240 with or without padding bytes.
241
242 .. _titles:
243
244 Field Titles
245 ------------
246
247 In addition to field names, fields may also have an associated :term:`title`,
248 an alternate name, which is sometimes used as an additional description or
249 alias for the field. The title may be used to index an array, just like a
250 field name.
251
252 To add titles when using the list-of-tuples form of dtype specification, the
253 field name may be specified as a tuple of two strings instead of a single
254 string, which will be the field's title and field name respectively. For
255 example::
256
257 >>> np.dtype([(('my title', 'name'), 'f4')])
258 dtype([(('my title', 'name'), '<f4')])
259
260 When using the first form of dictionary-based specification, the titles may be
261 supplied as an extra ``'titles'`` key as described above. When using the second
262 (discouraged) dictionary-based specification, the title can be supplied by
263 providing a 3-element tuple ``(datatype, offset, title)`` instead of the usual
264 2-element tuple::
265
266 >>> np.dtype({'name': ('i4', 0, 'my title')})
267 dtype([(('my title', 'name'), '<i4')])
268
269 The ``dtype.fields`` dictionary will contain titles as keys, if any
270 titles are used. This means effectively that a field with a title will be
271 represented twice in the fields dictionary. The tuple values for these fields
272 will also have a third element, the field title. Because of this, and because
273 the ``names`` attribute preserves the field order while the ``fields``
274 attribute may not, it is recommended to iterate through the fields of a dtype
275 using the ``names`` attribute of the dtype, which will not list titles, as
276 in::
277
278 >>> for name in d.names:
279 ... print(d.fields[name][:2])
280 (dtype('int64'), 0)
281 (dtype('float32'), 8)
282
283 Union types
284 -----------
285
286 Structured datatypes are implemented in numpy to have base type
287 :class:`numpy.void` by default, but it is possible to interpret other numpy
288 types as structured types using the ``(base_dtype, dtype)`` form of dtype
289 specification described in
290 :ref:`Data Type Objects <arrays.dtypes.constructing>`. Here, ``base_dtype`` is
291 the desired underlying dtype, and fields and flags will be copied from
292 ``dtype``. This dtype is similar to a 'union' in C.
293
294 Indexing and Assignment to Structured arrays
295 ============================================
296
297 Assigning data to a Structured Array
298 ------------------------------------
299
300 There are a number of ways to assign values to a structured array: Using python
301 tuples, using scalar values, or using other structured arrays.
302
303 Assignment from Python Native Types (Tuples)
304 ````````````````````````````````````````````
305
306 The simplest way to assign values to a structured array is using python tuples.
307 Each assigned value should be a tuple of length equal to the number of fields
308 in the array, and not a list or array as these will trigger numpy's
309 broadcasting rules. The tuple's elements are assigned to the successive fields
310 of the array, from left to right::
311
312 >>> x = np.array([(1, 2, 3), (4, 5, 6)], dtype='i8, f4, f8')
313 >>> x[1] = (7, 8, 9)
314 >>> x
315 array([(1, 2., 3.), (7, 8., 9.)],
316 dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '<f8')])
317
318 Assignment from Scalars
319 ```````````````````````
320
321 A scalar assigned to a structured element will be assigned to all fields. This
322 happens when a scalar is assigned to a structured array, or when an
323 unstructured array is assigned to a structured array::
324
325 >>> x = np.zeros(2, dtype='i8, f4, ?, S1')
326 >>> x[:] = 3
327 >>> x
328 array([(3, 3., True, b'3'), (3, 3., True, b'3')],
329 dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')])
330 >>> x[:] = np.arange(2)
331 >>> x
332 array([(0, 0., False, b'0'), (1, 1., True, b'1')],
333 dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')])
334
335 Structured arrays can also be assigned to unstructured arrays, but only if the
336 structured datatype has just a single field::
337
338 >>> twofield = np.zeros(2, dtype=[('A', 'i4'), ('B', 'i4')])
339 >>> onefield = np.zeros(2, dtype=[('A', 'i4')])
340 >>> nostruct = np.zeros(2, dtype='i4')
341 >>> nostruct[:] = twofield
342 Traceback (most recent call last):
343 File "<stdin>", line 1, in <module>
344 ValueError: Can't cast from structure to non-structure, except if the structure only has a single field.
345 >>> nostruct[:] = onefield
346 >>> nostruct
347 array([0, 0], dtype=int32)
348
349 Assignment from other Structured Arrays
350 ```````````````````````````````````````
351
352 Assignment between two structured arrays occurs as if the source elements had
353 been converted to tuples and then assigned to the destination elements. That
354 is, the first field of the source array is assigned to the first field of the
355 destination array, and the second field likewise, and so on, regardless of
356 field names. Structured arrays with a different number of fields cannot be
357 assigned to each other. Bytes of the destination structure which are not
358 included in any of the fields are unaffected. ::
359
360 >>> a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f4'), ('c', 'S3')])
361 >>> b = np.ones(3, dtype=[('x', 'f4'), ('y', 'S3'), ('z', 'O')])
362 >>> b[:] = a
363 >>> b
364 array([(0., b'0.0', b''), (0., b'0.0', b''), (0., b'0.0', b'')],
365 dtype=[('x', '<f4'), ('y', 'S3'), ('z', 'O')])
366
367
368 Assignment involving subarrays
369 ``````````````````````````````
370
371 When assigning to fields which are subarrays, the assigned value will first be
372 broadcast to the shape of the subarray.
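
A minimal sketch of this behavior (the ``(2, 2)`` subarray shape is chosen
arbitrarily)::

    >>> x = np.zeros(2, dtype=[('a', 'f4', (2, 2))])
    >>> x['a'] = 1
    >>> x[0]['a']
    array([[1., 1.],
           [1., 1.]], dtype=float32)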
373
374 Indexing Structured Arrays
375 --------------------------
376
377 Accessing Individual Fields
378 ```````````````````````````
379
380 Individual fields of a structured array may be accessed and modified by indexing
381 the array with the field name. ::
382
383 >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')])
384 >>> x['foo']
385 array([1, 3])
386 >>> x['foo'] = 10
387 >>> x
388 array([(10, 2.), (10, 4.)],
389 dtype=[('foo', '<i8'), ('bar', '<f4')])
390
391 The resulting array is a view into the original array. It shares the same
392 memory locations and writing to the view will modify the original array. ::
393
394 >>> y = x['bar']
395 >>> y[:] = 11
396 >>> x
397 array([(10, 11.), (10, 11.)],
398 dtype=[('foo', '<i8'), ('bar', '<f4')])
399
400 This view has the same dtype and itemsize as the indexed field, so it is
401 typically a non-structured array, except in the case of nested structures.
402
403 >>> y.dtype, y.shape, y.strides
404 (dtype('float32'), (2,), (12,))
405
406 If the accessed field is a subarray, the dimensions of the subarray
407 are appended to the shape of the result::
408
409 >>> x = np.zeros((2, 2), dtype=[('a', np.int32), ('b', np.float64, (3, 3))])
410 >>> x['a'].shape
411 (2, 2)
412 >>> x['b'].shape
413 (2, 2, 3, 3)
414
415 Accessing Multiple Fields
416 ```````````````````````````
417
418 One can index and assign to a structured array with a multi-field index, where
419 the index is a list of field names.
420
421 .. warning::
422 The behavior of multi-field indexes changed from Numpy 1.15 to Numpy 1.16.
423
424 The result of indexing with a multi-field index is a view into the original
425 array, as follows::
426
427 >>> a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')])
428 >>> a[['a', 'c']]
429 array([(0, 0.), (0, 0.), (0, 0.)],
430 dtype={'names':['a','c'], 'formats':['<i4','<f4'], 'offsets':[0,8], 'itemsize':12})
431
432 Assignment to the view modifies the original array. The view's fields will be
433 in the order they were indexed. Note that unlike for single-field indexing, the
434 dtype of the view has the same itemsize as the original array, and has fields
435 at the same offsets as in the original array, and unindexed fields are merely
436 missing.
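
A quick check of the itemsize claim, re-using ``a`` from just above::

    >>> a[['a', 'c']].dtype.itemsize == a.dtype.itemsize
    True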
437
438 .. warning::
439 In Numpy 1.15, indexing an array with a multi-field index returned a copy of
440 the result above, but with fields packed together in memory as if
441 passed through :func:`numpy.lib.recfunctions.repack_fields`.
442
443 The new behavior as of Numpy 1.16 leads to extra "padding" bytes at the
444 location of unindexed fields compared to 1.15. You will need to update any
445 code which depends on the data having a "packed" layout. For instance code
446 such as::
447
448 >>> a[['a', 'c']].view('i8') # Fails in Numpy 1.16
449 Traceback (most recent call last):
450 File "<stdin>", line 1, in <module>
451 ValueError: When changing to a smaller dtype, its size must be a divisor of the size of original dtype
452
453 will need to be changed. This code has raised a ``FutureWarning`` since
454 Numpy 1.12, and similar code has raised ``FutureWarning`` since 1.7.
455
456 In 1.16 a number of functions have been introduced in the
457 :mod:`numpy.lib.recfunctions` module to help users account for this
458 change. These are
459 :func:`numpy.lib.recfunctions.repack_fields`.
460 :func:`numpy.lib.recfunctions.structured_to_unstructured`,
461 :func:`numpy.lib.recfunctions.unstructured_to_structured`,
462 :func:`numpy.lib.recfunctions.apply_along_fields`,
463 :func:`numpy.lib.recfunctions.assign_fields_by_name`, and
464 :func:`numpy.lib.recfunctions.require_fields`.
465
466 The function :func:`numpy.lib.recfunctions.repack_fields` can always be
467 used to reproduce the old behavior, as it will return a packed copy of the
468 structured array. The code above, for example, can be replaced with:
469
470 >>> from numpy.lib.recfunctions import repack_fields
471 >>> repack_fields(a[['a', 'c']]).view('i8') # supported in 1.16
472 array([0, 0, 0])
473
474 Furthermore, numpy now provides a new function
475 :func:`numpy.lib.recfunctions.structured_to_unstructured` which is a safer
476 and more efficient alternative for users who wish to convert structured
477 arrays to unstructured arrays, as the view above is often intended to do.
478 This function allows safe conversion to an unstructured type taking into
479 account padding, often avoids a copy, and also casts the datatypes
480 as needed, unlike the view. Code such as:
481
482 >>> b = np.zeros(3, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])
483 >>> b[['x', 'z']].view('f4')
484 array([0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)
485
486 can be made safer by replacing with:
487
488 >>> from numpy.lib.recfunctions import structured_to_unstructured
489 >>> structured_to_unstructured(b[['x', 'z']])
490 array([[0., 0.],
       [0., 0.],
       [0., 0.]], dtype=float32)
491
492
493 Assignment to an array with a multi-field index modifies the original array::
494
495 >>> a[['a', 'c']] = (2, 3)
496 >>> a
497 array([(2, 0, 3.), (2, 0, 3.), (2, 0, 3.)],
498 dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<f4')])
499
500 This obeys the structured array assignment rules described above. For example,
501 this means that one can swap the values of two fields using appropriate
502 multi-field indexes::
503
504 >>> a[['a', 'c']] = a[['c', 'a']]
505
506 Indexing with an Integer to get a Structured Scalar
507 ```````````````````````````````````````````````````
508
509 Indexing a single element of a structured array (with an integer index) returns
510 a structured scalar::
511
512 >>> x = np.array([(1, 2., 3.)], dtype='i, f, f')
513 >>> scalar = x[0]
514 >>> scalar
515 (1, 2., 3.)
516 >>> type(scalar)
517 <class 'numpy.void'>
518
519 Unlike other numpy scalars, structured scalars are mutable and act like views
520 into the original array, such that modifying the scalar will modify the
521 original array. Structured scalars also support access and assignment by field
522 name::
523
524 >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')])
525 >>> s = x[0]
526 >>> s['bar'] = 100
527 >>> x
528 array([(1, 100.), (3, 4.)],
529 dtype=[('foo', '<i8'), ('bar', '<f4')])
530
531 Similarly to tuples, structured scalars can also be indexed with an integer::
532
533 >>> scalar = np.array([(1, 2., 3.)], dtype='i, f, f')[0]
534 >>> scalar[0]
535 1
536 >>> scalar[1] = 4
537
538 Thus, tuples might be thought of as the native Python equivalent to numpy's
539 structured types, much like native python integers are the equivalent to
540 numpy's integer types. Structured scalars may be converted to a tuple by
541 calling :func:`ndarray.item`::
542
543 >>> scalar.item(), type(scalar.item())
544 ((1, 4.0, 3.0), <class 'tuple'>)
545
546 Viewing Structured Arrays Containing Objects
547 --------------------------------------------
548
549 In order to prevent clobbering object pointers in fields of
550 :class:`numpy.object` type, numpy currently does not allow views of structured
551 arrays containing objects.
552
553 Structure Comparison
554 --------------------
555
556 If the dtypes of two void structured arrays are equal, testing the equality of
557 the arrays will result in a boolean array with the dimensions of the original
558 arrays, with elements set to ``True`` where all fields of the corresponding
559 structures are equal. Structured dtypes are equal if the field names,
560 dtypes and titles are the same, ignoring endianness, and the fields are in
561 the same order::
562
563 >>> a = np.zeros(2, dtype=[('a', 'i4'), ('b', 'i4')])
564 >>> b = np.ones(2, dtype=[('a', 'i4'), ('b', 'i4')])
565 >>> a == b
566 array([False, False])
567
568 Currently, if the dtypes of two void structured arrays are not equivalent the
569 comparison fails, returning the scalar value ``False``. This behavior is
570 deprecated as of numpy 1.10 and will raise an error or perform elementwise
571 comparison in the future.
572
573 The ``<`` and ``>`` operators always return ``False`` when comparing void
574 structured arrays, and arithmetic and bitwise operations are not supported.
575
576 Record Arrays
577 =============
578
579 As an optional convenience numpy provides an ndarray subclass,
580 :class:`numpy.recarray`, and associated helper functions in the
581 :mod:`numpy.rec` submodule, that allows access to fields of structured arrays
582 by attribute instead of only by index. Record arrays also use a special
583 datatype, :class:`numpy.record`, that allows field access by attribute on the
584 structured scalars obtained from the array.
585
586 The simplest way to create a record array is with :func:`numpy.rec.array`::
587
588 >>> recordarr = np.rec.array([(1, 2., 'Hello'), (2, 3., "World")],
589 ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
590 >>> recordarr.bar
591 array([ 2., 3.], dtype=float32)
592 >>> recordarr[1:2]
593 rec.array([(2, 3., b'World')],
594 dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
595 >>> recordarr[1:2].foo
596 array([2], dtype=int32)
597 >>> recordarr.foo[1:2]
598 array([2], dtype=int32)
599 >>> recordarr[1].baz
600 b'World'
601
602 :func:`numpy.rec.array` can convert a wide variety of arguments into record
603 arrays, including structured arrays::
604
605 >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")],
606 ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
607 >>> recordarr = np.rec.array(arr)
608
609 The :mod:`numpy.rec` module provides a number of other convenience functions for
610 creating record arrays, see :ref:`record array creation routines
611 <routines.array-creation.rec>`.
612
613 A record array representation of a structured array can be obtained using the
614 appropriate `view <numpy-ndarray-view>`_::
615
616 >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")],
617 ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
618 >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
619 ... type=np.recarray)
620
621 For convenience, viewing an ndarray as type :class:`np.recarray` will
622 automatically convert to :class:`np.record` datatype, so the dtype can be left
623 out of the view::
624
625 >>> recordarr = arr.view(np.recarray)
626 >>> recordarr.dtype
627 dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))
628
629 To get back to a plain ndarray both the dtype and type must be reset. The
630 following view does so, taking into account the unusual case that the
631 recordarr was not a structured type::
632
633 >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)
634
635 Record array fields accessed by index or by attribute are returned as a record
636 array if the field has a structured type but as a plain ndarray otherwise. ::
637
638 >>> recordarr = np.rec.array([('Hello', (1, 2)), ("World", (3, 4))],
639 ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])])
640 >>> type(recordarr.foo)
641 <class 'numpy.ndarray'>
642 >>> type(recordarr.bar)
643 <class 'numpy.recarray'>
644
645 Note that if a field has the same name as an ndarray attribute, the ndarray
646 attribute takes precedence. Such fields will be inaccessible by attribute but
647 will still be accessible by index.
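
A short sketch of this corner case (the field name ``shape`` is picked on
purpose so that it collides with an ndarray attribute)::

    >>> r = np.rec.array([(1., 2.)], dtype=[('shape', 'f4'), ('y', 'f4')])
    >>> r.shape              # the ndarray attribute wins
    (1,)
    >>> r['shape']           # the field is still reachable by index
    array([1.], dtype=float32)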
648
649 """
650 from __future__ import division, absolute_import, print_function
651
[end of numpy/doc/structured_arrays.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
15b092f5541e80d7c3d0108957406c6f8686aba0
|
ndarray.T does not return self as doc says
<!-- Please describe the issue in detail here, and fill in the fields below -->
`x.T` seems returning *a view of* `x` if `x.ndim < 2`, while the documentation (https://www.numpy.org/devdocs/reference/generated/numpy.ndarray.T.html) says it returns "self". To me it's not clear what is different between `x.T` and `x.transpose()` except for performance.
### Reproducing code example:
<!-- A short code example that reproduces the problem/missing feature. It should be
self-contained, i.e., possible to run as-is via 'python myproblem.py' -->
```python
import numpy as np
x = np.array([3, 4])
print(x.T is x) # => False
print(x.T.base is x) # => True
```
### Numpy/Python version information:
I checked two sets of versions:
```
>>> print(numpy.__version__, sys.version)
1.16.1 3.6.5 (default, Jun 21 2018, 17:25:32)
[GCC 5.4.0 20160609]
```
and
```
>>> print(numpy.__version__, sys.version)
('1.9.3', '2.7.15 (default, Nov 7 2018, 12:21:05) \n[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.10.44.4)]')
```
|
Yes, you are right, the doc is probably outdated and should be updated. There is no difference, it is simply a convenience shorthand.
Want to create a PR?
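
As a quick sanity check of that claim (a minimal sketch; the array contents are arbitrary), the two spellings behave the same way for a 1-D array:

```python
import numpy as np

x = np.array([3, 4])
# Neither returns `x` itself; both return a view whose base is `x`.
print(x.T is x, x.transpose() is x)            # False False
print(x.T.base is x, x.transpose().base is x)  # True True
```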
|
2019-03-05T11:31:10Z
|
<patch>
diff --git a/numpy/core/_add_newdocs.py b/numpy/core/_add_newdocs.py
--- a/numpy/core/_add_newdocs.py
+++ b/numpy/core/_add_newdocs.py
@@ -2450,8 +2450,9 @@
add_newdoc('numpy.core.multiarray', 'ndarray', ('T',
"""
- Same as self.transpose(), except that self is returned if
- self.ndim < 2.
+ The transposed array.
+
+ Same as ``self.transpose()``.
Examples
--------
@@ -2468,6 +2469,10 @@
>>> x.T
array([ 1., 2., 3., 4.])
+ See Also
+ --------
+ transpose
+
"""))
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-1438
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Progressbars not spawned unless execute or compile are called first
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Terra version**: master
- **Python version**: 3.7
- **Operating system**: OSX
### What is the current behavior?
The following no longer spawns a progress bar:
```
HTMLProgressBar()
ans = parallel_map(func, list(range(100)))
```
unless `compile` or `execute` have been run first.
### Steps to reproduce the problem
uncomment the line below:
```
import time
from qiskit import *
from qiskit.transpiler._parallel import parallel_map
from qiskit.tools.jupyter import *
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c)
backend = Aer.get_backend('qasm_simulator')
#qobj = compile([qc], backend)
def func(t):
time.sleep(0.1)
return 0
TextProgressBar()
parallel_map(func, list(range(100)))
```
### What is the expected behavior?
You should not have to call `execute` or `compile` before a progressbar can be displayed.
### Suggested solutions
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://pypi.python.org/pypi/qiskit)
4 [](https://travis-ci.org/Qiskit/qiskit-terra)
5 [](https://travis-ci.org/Qiskit/qiskit-terra)
6
7 **Qiskit** is a software development kit for
8 developing quantum computing applications and working with NISQ (Noisy-Intermediate Scale Quantum) computers.
9
10 Qiskit is made up of elements that each work together to enable quantum computing. This element is **Terra**
11 and is the foundation on which the rest of Qiskit is built (see this [post](https://medium.com/qiskit/qiskit-and-its-fundamental-elements-bcd7ead80492) for an overview).
12
13
14 ## Installation
15
16
17 We encourage installing Qiskit via the PIP tool (a python package manager):
18
19 ```bash
20 pip install qiskit
21 ```
22
23 PIP will handle all dependencies automatically, and you will always install the latest (and well-tested) version.
24
25 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. In
26 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended
27 for interacting with the tutorials.
28 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads)
29 python distribution, as it comes with all of these dependencies pre-installed.
30
31 See [installing](doc/install.rst) Qiskit for detailed instructions on how to build from source and use environments.
32
33
34 ## Creating your first quantum program
35
36 Now that Qiskit is installed, it's time to begin working with Terra.
37
38 We are ready to try out a quantum circuit example, which is simulated locally using
39 the Qiskit Aer element. This is a simple example that makes an entangled state.
40
41 ```
42 $ python
43 ```
44
45 ```python
46 >>> from qiskit import *
47 >>> q = QuantumRegister(2)
48 >>> c = ClassicalRegister(2)
49 >>> qc = QuantumCircuit(q, c)
50 >>> qc.h(q[0])
51 >>> qc.cx(q[0], q[1])
52 >>> qc.measure(q, c)
53 >>> backend_sim = Aer.get_backend('qasm_simulator')
54 >>> result = execute(qc, backend_sim).result()
55 >>> print(result.get_counts(qc))
56 ```
57
58 In this case, the output will be:
59
60 ```python
61 {'counts': {'00': 513, '11': 511}}
62 ```
63
64 A script is available [here](examples/python/hello_quantum.py), where we also show how to
65 run the same program on a real quantum computer via IBMQ.
66
67 ### Executing your code on a real quantum chip
68
69 You can also use Qiskit to execute your code on a
70 **real quantum chip**.
71 In order to do so, you need to configure Qiskit for using the credentials in
72 your IBM Q account:
73
74 #### Configure your IBMQ credentials
75
76 1. Create an _[IBM Q](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
77
78 2. Get an API token from the IBM Q website under _My Account > Advanced > API Token_.
79
80 3. Take your token from step 2, here called `MY_API_TOKEN`, and run:
81
82 ```python
83 >>> from qiskit import IBMQ
84 >>> IBMQ.save_account('MY_API_TOKEN')
85 ```
86
87 4. If you have access to the IBM Q Network features, you also need to pass the
88 url listed on your IBM Q account page to `save_account`.
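
   A minimal sketch of that call (`MY_HUB_URL` is a placeholder; use the url shown on your own account page):

   ```python
   >>> from qiskit import IBMQ
   >>> IBMQ.save_account('MY_API_TOKEN', url='MY_HUB_URL')
   ```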
89
90 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
91 Once they are stored, at any point in the future you can load and use them
92 in your program simply via:
93
94 ```python
95 >>> from qiskit import IBMQ
96 >>> IBMQ.load_accounts()
97 ```
98
99 For those who do not want to save their credentials to disk, please use
100
101 ```python
102 >>> from qiskit import IBMQ
103 >>> IBMQ.enable_account('MY_API_TOKEN')
104 ```
105
106 and the token will only be active for the session. For using Terra with real
107 devices, we have provided a set of examples in **examples/python**; we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up through
108 the levels.
109
110 ## Contribution guidelines
111
112 If you'd like to contribute to Qiskit, please take a look at our
113 [contribution guidelines](.github/CONTRIBUTING.rst). This project adheres to Qiskit's [code of conduct](.github/CODE_OF_CONDUCT.rst). By participating, you are expected to uphold this code.
114
115 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs.
116 Please use our [slack](https://qiskit.slack.com) for discussion. To join our Slack community, use this [link](https://join.slack.com/t/qiskit/shared_invite/enQtNDc2NjUzMjE4Mzc0LTMwZmE0YTM4ZThiNGJmODkzN2Y2NTNlMDIwYWNjYzA2ZmM1YTRlZGQ3OGM0NjcwMjZkZGE0MTA4MGQ1ZTVmYzk). To ask questions, use [Stack Overflow](https://stackoverflow.com/questions/tagged/qiskit).
117
118
119
120 ### Next Steps
121
122 Now you're set up and ready to check out some of the other examples from our
123 [Qiskit Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository.
124
125
126 ## Authors
127
128 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
129 to the project at different levels.
130
131 ## License
132
133 [Apache License 2.0](LICENSE.txt)
[end of README.md]
[start of examples/python/ghz.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 GHZ state example. It also compares running on experiment and simulator
10
11 Note: if you have only cloned the Qiskit repository but not
12 used `pip install`, the examples only work from the root directory.
13 """
14
15 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
16 from qiskit import IBMQ, BasicAer, execute
17 from qiskit.backends.ibmq import least_busy
18
19
20 ###############################################################
21 # Make a quantum circuit for the GHZ state.
22 ###############################################################
23 q = QuantumRegister(5, "q")
24 c = ClassicalRegister(5, "c")
25 qc = QuantumCircuit(q, c, name='ghz')
26
27 # Create a GHZ state
28 qc.h(q[0])
29 for i in range(4):
30 qc.cx(q[i], q[i+1])
31 # Insert a barrier before measurement
32 qc.barrier()
33 # Measure all of the qubits in the standard basis
34 for i in range(5):
35 qc.measure(q[i], c[i])
36
37 ###############################################################
38 # Set up the API and execute the program.
39 ###############################################################
40 try:
41 IBMQ.load_accounts()
42 except:
43 print("""WARNING: There's no connection with the API for remote backends.
44 Have you initialized a file with your personal token?
45 For now, there's only access to local simulator backends...""")
46
47 # First version: simulator
48 sim_backend = BasicAer.get_backend('qasm_simulator')
49 job = execute(qc, sim_backend, shots=1024)
50 result = job.result()
51 print('Qasm simulator')
52 print(result)
53 print(result.get_counts(qc))
54
55 # Second version: real device
56 least_busy_device = least_busy(IBMQ.backends(simulator=False,
57 filters=lambda x: x.configuration()['n_qubits'] > 4))
58 print("Running on current least busy device: ", least_busy_device)
59 job = execute(qc, least_busy_device, shots=1024)
60 result = job.result()
61 print(result)
62 print(result.get_counts(qc))
63
[end of examples/python/ghz.py]
[start of examples/python/hello_quantum.py]
1 """
2 Example used in the README. In this example a Bell state is made.
3
4 """
5
6 # Import the Qiskit
7 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, QiskitError
8 from qiskit import execute, IBMQ, BasicAer
9 from qiskit.backends.ibmq import least_busy
10
11 # Authenticate for access to remote backends
12 try:
13 IBMQ.load_accounts()
14 except:
15 print("""WARNING: There's no connection with the API for remote backends.
16 Have you initialized a file with your personal token?
17 For now, there's only access to local simulator backends...""")
18
19 try:
20 # Create a Quantum Register with 2 qubits.
21 q = QuantumRegister(2)
22 # Create a Classical Register with 2 bits.
23 c = ClassicalRegister(2)
24 # Create a Quantum Circuit
25 qc = QuantumCircuit(q, c)
26
27 # Add a H gate on qubit 0, putting this qubit in superposition.
28 qc.h(q[0])
29 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
30 # the qubits in a Bell state.
31 qc.cx(q[0], q[1])
32 # Add a Measure gate to see the state.
33 qc.measure(q, c)
34
35 # See a list of available local simulators
36 print("BasicAer backends: ", BasicAer.backends())
37 backend_sim = BasicAer.get_backend('qasm_simulator')
38
39 # Compile and run the Quantum circuit on a simulator backend
40 job_sim = execute(qc, backend_sim)
41 result_sim = job_sim.result()
42
43 # Show the results
44 print("simulation: ", result_sim)
45 print(result_sim.get_counts(qc))
46
47 # see a list of available remote backends
48 ibmq_backends = IBMQ.backends()
49
50 print("Remote backends: ", ibmq_backends)
51 # Compile and run the Quantum Program on a real device backend
52 try:
53 least_busy_device = least_busy(IBMQ.backends(simulator=False))
54 print("Running on current least busy device: ", least_busy_device)
55
56 #running the job
57 job_exp = execute(qc, least_busy_device, shots=1024, max_credits=10)
58 result_exp = job_exp.result()
59
60 # Show the results
61 print("experiment: ", result_exp)
62 print(result_exp.get_counts(qc))
63 except:
64 print("All devices are currently unavailable.")
65
66 except QiskitError as ex:
67 print('There was an error in the circuit!. Error = {}'.format(ex))
68
[end of examples/python/hello_quantum.py]
[start of examples/python/qft.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Quantum Fourier Transform examples.
10
11 Note: if you have only cloned the Qiskit repository but not
12 used `pip install`, the examples only work from the root directory.
13 """
14
15 import math
16 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
17 from qiskit import execute, BasicAer, IBMQ
18 from qiskit.backends.ibmq import least_busy
19
20
21 ###############################################################
22 # make the qft
23 ###############################################################
24 def input_state(circ, q, n):
25 """n-qubit input state for QFT that produces output 1."""
26 for j in range(n):
27 circ.h(q[j])
28 circ.u1(math.pi/float(2**(j)), q[j]).inverse()
29
30
31 def qft(circ, q, n):
32 """n-qubit QFT on q in circ."""
33 for j in range(n):
34 for k in range(j):
35 circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
36 circ.h(q[j])
37
38
39 q = QuantumRegister(5, "q")
40 c = ClassicalRegister(5, "c")
41 qft3 = QuantumCircuit(q, c, name="qft3")
42 qft4 = QuantumCircuit(q, c, name="qft4")
43 qft5 = QuantumCircuit(q, c, name="qft5")
44
45 input_state(qft3, q, 3)
46 qft3.barrier()
47 qft(qft3, q, 3)
48 qft3.barrier()
49 for j in range(3):
50 qft3.measure(q[j], c[j])
51
52 input_state(qft4, q, 4)
53 qft4.barrier()
54 qft(qft4, q, 4)
55 qft4.barrier()
56 for j in range(4):
57 qft4.measure(q[j], c[j])
58
59 input_state(qft5, q, 5)
60 qft5.barrier()
61 qft(qft5, q, 5)
62 qft5.barrier()
63 for j in range(5):
64 qft5.measure(q[j], c[j])
65
66 print(qft3)
67 print(qft4)
68 print(qft5)
69
70 ###############################################################
71 # Set up the API and execute the program.
72 ###############################################################
73 try:
74 IBMQ.load_accounts()
75 except:
76 print("""WARNING: There's no connection with the API for remote backends.
77 Have you initialized a file with your personal token?
78 For now, there's only access to local simulator backends...""")
79
80 print('Qasm simulator')
81 sim_backend = BasicAer.get_backend('qasm_simulator')
82 job = execute([qft3, qft4, qft5], sim_backend, shots=1024)
83 result = job.result()
84 print(result)
85 print(result.get_counts(qft3))
86 print(result.get_counts(qft4))
87 print(result.get_counts(qft5))
88
89 # Second version: real device
90 least_busy_device = least_busy(IBMQ.backends(simulator=False,
91 filters=lambda x: x.configuration()['n_qubits'] > 4))
92 print("Running on current least busy device: ", least_busy_device)
93 job = execute([qft3, qft4, qft5], least_busy_device, shots=1024)
94 result = job.result()
95 print(result)
96 print(result.get_counts(qft3))
97 print(result.get_counts(qft4))
98 print(result.get_counts(qft5))
99
100
[end of examples/python/qft.py]
[start of examples/python/rippleadd.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Ripple adder example based on Cuccaro et al., quant-ph/0410184.
10
11 Note: if you have only cloned the Qiskit repository but not
12 used `pip install`, the examples only work from the root directory.
13 """
14
15 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
16 from qiskit import compile, BasicAer
17
18 ###############################################################
19 # Set the backend name and coupling map.
20 ###############################################################
21 backend = BasicAer.get_backend("qasm_simulator")
22 coupling_map = [[0,1], [0, 8], [1, 2], [1, 9], [2, 3], [2, 10], [3, 4], [3, 11],
23 [4, 5], [4, 12], [5, 6], [5, 13], [6, 7], [6, 14], [7, 15], [8, 9],
24 [9, 10], [10, 11], [11, 12], [12, 13], [13, 14], [14, 15]]
25
26 ###############################################################
27 # Make a quantum program for the n-bit ripple adder.
28 ###############################################################
29 n = 2
30
31 a = QuantumRegister(n, "a")
32 b = QuantumRegister(n, "b")
33 cin = QuantumRegister(1, "cin")
34 cout = QuantumRegister(1, "cout")
35 ans = ClassicalRegister(n+1, "ans")
36 qc = QuantumCircuit(a, b, cin, cout, ans, name="rippleadd")
37
38
39 def majority(p, a, b, c):
40 """Majority gate."""
41 p.cx(c, b)
42 p.cx(c, a)
43 p.ccx(a, b, c)
44
45
46 def unmajority(p, a, b, c):
47 """Unmajority gate."""
48 p.ccx(a, b, c)
49 p.cx(c, a)
50 p.cx(a, b)
51
52
53 # Build a temporary subcircuit that adds a to b,
54 # storing the result in b
55 adder_subcircuit = QuantumCircuit(cin, a, b, cout)
56 majority(adder_subcircuit, cin[0], b[0], a[0])
57 for j in range(n - 1):
58 majority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
59 adder_subcircuit.cx(a[n - 1], cout[0])
60 for j in reversed(range(n - 1)):
61 unmajority(adder_subcircuit, a[j], b[j + 1], a[j + 1])
62 unmajority(adder_subcircuit, cin[0], b[0], a[0])
63
64 # Set the inputs to the adder
65 qc.x(a[0]) # Set input a = 0...0001
66 qc.x(b) # Set input b = 1...1111
67 # Apply the adder
68 qc += adder_subcircuit
69 # Measure the output register in the computational basis
70 for j in range(n):
71 qc.measure(b[j], ans[j])
72 qc.measure(cout[0], ans[n])
73
74 ###############################################################
75 # execute the program.
76 ###############################################################
77
78 # First version: not mapped
79 qobj = compile(qc, backend=backend, coupling_map=None, shots=1024)
80 job = backend.run(qobj)
81 result = job.result()
82 print(result)
83 print(result.get_counts(qc))
84
85 # Second version: mapped to 2x8 array coupling graph
86 qobj = compile(qc, backend=backend, coupling_map=coupling_map, shots=1024)
87 job = backend.run(qobj)
88 result = job.result()
89
90 print(result)
91 print(result.get_counts(qc))
92
93 # Both versions should give the same distribution
94
[end of examples/python/rippleadd.py]
[start of examples/python/teleport.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Quantum teleportation example.
10
11 Note: if you have only cloned the Qiskit repository but not
12 used `pip install`, the examples only work from the root directory.
13 """
14
15 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
16 from qiskit import compile, BasicAer
17
18 ###############################################################
19 # Set the backend name and coupling map.
20 ###############################################################
21 coupling_map = [[0, 1], [0, 2], [1, 2], [3, 2], [3, 4], [4, 2]]
22 backend = BasicAer.get_backend("qasm_simulator")
23
24 ###############################################################
25 # Make a quantum program for quantum teleportation.
26 ###############################################################
27 q = QuantumRegister(3, "q")
28 c0 = ClassicalRegister(1, "c0")
29 c1 = ClassicalRegister(1, "c1")
30 c2 = ClassicalRegister(1, "c2")
31 qc = QuantumCircuit(q, c0, c1, c2, name="teleport")
32
33 # Prepare an initial state
34 qc.u3(0.3, 0.2, 0.1, q[0])
35
36 # Prepare a Bell pair
37 qc.h(q[1])
38 qc.cx(q[1], q[2])
39
40 # Barrier following state preparation
41 qc.barrier(q)
42
43 # Measure in the Bell basis
44 qc.cx(q[0], q[1])
45 qc.h(q[0])
46 qc.measure(q[0], c0[0])
47 qc.measure(q[1], c1[0])
48
49 # Apply a correction
50 qc.barrier(q)
51 qc.z(q[2]).c_if(c0, 1)
52 qc.x(q[2]).c_if(c1, 1)
53 qc.measure(q[2], c2[0])
54
55 ###############################################################
56 # Execute.
57 # Experiment does not support feedback, so we use the simulator
58 ###############################################################
59
60 # First version: not mapped
61 initial_layout = {("q", 0): ("q", 0), ("q", 1): ("q", 1),
62 ("q", 2): ("q", 2)}
63 qobj = compile(qc, backend=backend, coupling_map=None, shots=1024, initial_layout=initial_layout)
64 job = backend.run(qobj)
65 qobj_exp = qobj.experiments[0]
66
67 result = job.result()
68 print(result.get_counts(qc))
69
70 # Second version: mapped to 2x8 array coupling graph
71 qobj = compile(qc, backend=backend, coupling_map=coupling_map, shots=1024, initial_layout=initial_layout)
72 qobj_exp = qobj.experiments[0]
73 qobj_exp.header.compiled_circuit_qasm = ""
74 job = backend.run(qobj)
75 result = job.result()
76 print(result.get_counts(qc))
77 print(result.data(0))
78 # Both versions should give the same distribution
79
[end of examples/python/teleport.py]
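A rough check of the statistics this example should produce: `u3(theta, phi, lam)` applied to `|0>` gives `cos(theta/2)|0> + e^{i*phi} sin(theta/2)|1>`, so after teleportation the probability of measuring `c2 = 1` should be `sin^2(theta/2)` with `theta = 0.3`. A small sketch of that arithmetic (illustration only):

```python
# Back-of-the-envelope estimate of the expected c2 = 1 frequency.
import math

p_one = math.sin(0.3 / 2) ** 2
print(round(p_one, 4))        # ~0.0223
print(round(p_one * 1024))    # ~23 of the 1024 shots
```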
[start of examples/python/using_qiskit_terra_level_0.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Example showing how to use Qiskit-Terra at level 0 (novice).
10
11 This example shows the most basic way to use Terra. It builds some circuits
12 and runs them on both the Aer (local Qiskit provider) and IBMQ (remote IBMQ provider).
13
14 To control the compile parameters we have provided a compile function which can be used
15 by a level 1 user.
16
17 """
18
19 import time
20
21 # Import the Qiskit modules
22 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, QiskitError
23 from qiskit import execute, IBMQ, BasicAer
24 from qiskit.backends.ibmq import least_busy
25
26
27 try:
28 IBMQ.load_accounts()
29 except:
30 print("""WARNING: There's no connection with the API for remote backends.
31 Have you initialized a file with your personal token?
32 For now, there's only access to local simulator backends...""")
33
34 try:
35 # Create a Quantum and Classical Register.
36 qubit_reg = QuantumRegister(2)
37 clbit_reg = ClassicalRegister(2)
38
39 # making first circuit: bell state
40 qc1 = QuantumCircuit(qubit_reg, clbit_reg)
41 qc1.h(qubit_reg[0])
42 qc1.cx(qubit_reg[0], qubit_reg[1])
43 qc1.measure(qubit_reg, clbit_reg)
44
45 # making another circuit: superpositions
46 qc2 = QuantumCircuit(qubit_reg, clbit_reg)
47 qc2.h(qubit_reg)
48 qc2.measure(qubit_reg, clbit_reg)
49
50 # setting up the backend
51 print("(AER Backends)")
52 print(BasicAer.backends())
53
54 # running the job
55 job_sim = execute([qc1, qc2], BasicAer.get_backend('qasm_simulator'))
56 sim_result = job_sim.result()
57
58 # Show the results
59 print("simulation: ", sim_result)
60 print(sim_result.get_counts(qc1))
61 print(sim_result.get_counts(qc2))
62
63 # see a list of available remote backends
64 print("\n(IBMQ Backends)")
65 print(IBMQ.backends())
66
67 # Compile and run on a real device backend
68 try:
69 # select least busy available device and execute.
70 least_busy_device = least_busy(IBMQ.backends(simulator=False))
71 print("Running on current least busy device: ", least_busy_device)
72
73 # running the job
74 job_exp = execute([qc1, qc2], backend=least_busy_device, shots=1024, max_credits=10)
75
76 lapse = 0
77 interval = 10
78 while job_exp.status().name != 'DONE':
79 print('Status @ {} seconds'.format(interval * lapse))
80 print(job_exp.status())
81 time.sleep(interval)
82 lapse += 1
83 print(job_exp.status())
84 exp_result = job_exp.result()
85
86 # Show the results
87 print("experiment: ", exp_result)
88 print(exp_result.get_counts(qc1))
89 print(exp_result.get_counts(qc2))
90 except:
91 print("All devices are currently unavailable.")
92 except QiskitError as ex:
93 print('There was an error in the circuit!. Error = {}'.format(ex))
94
[end of examples/python/using_qiskit_terra_level_0.py]
[start of examples/python/using_qiskit_terra_level_1.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Example showing how to use Qiskit at level 1 (intermediate).
10
11 This example shows how an intermediate user interacts with Terra. It builds some circuits
12 and compiles them from compile parameters. It makes a qobj object which is just a container to be
13 run on a backend. The same qobj can run on many backends (as shown). It is the
14 user's responsibility to make sure it can be run. This is useful when you want to compare the same
15 circuits on different backends or change the compile parameters.
16
17 To control the passes, we have a pass manager for the level 2 user.
18 """
19
20 import pprint, time
21
22 # Import the Qiskit modules
23 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, QiskitError
24 from qiskit import compile, IBMQ, BasicAer
25 from qiskit.backends.ibmq import least_busy
26
27 try:
28 IBMQ.load_accounts()
29 except:
30 print("""WARNING: There's no connection with the API for remote backends.
31 Have you initialized a file with your personal token?
32 For now, there's only access to local simulator backends...""")
33
34 try:
35 # Create a Quantum and Classical Register and giving a name.
36 qubit_reg = QuantumRegister(2, name='q')
37 clbit_reg = ClassicalRegister(2, name='c')
38
39 # Making first circuit: bell state
40 qc1 = QuantumCircuit(qubit_reg, clbit_reg, name="bell")
41 qc1.h(qubit_reg[0])
42 qc1.cx(qubit_reg[0], qubit_reg[1])
43 qc1.measure(qubit_reg, clbit_reg)
44
45 # Making another circuit: superpositions
46 qc2 = QuantumCircuit(qubit_reg, clbit_reg, name="superposition")
47 qc2.h(qubit_reg)
48 qc2.measure(qubit_reg, clbit_reg)
49
50 # Setting up the backend
51 print("(Aer Backends)")
52 for backend in BasicAer.backends():
53 print(backend.status())
54 my_backend = BasicAer.get_backend('local_qasm_simulator')
55 print("(QASM Simulator configuration) ")
56 pprint.pprint(my_backend.configuration())
57 print("(QASM Simulator properties) ")
58 pprint.pprint(my_backend.properties())
59
60
61     print("\n(IBMQ Backends)")
62 for backend in IBMQ.backends():
63 print(backend.status())
64
65 # select least busy available device and execute.
66 least_busy_device = least_busy(IBMQ.backends(simulator=False))
67 print("Running on current least busy device: ", least_busy_device)
68 print("(with configuration) ")
69 pprint.pprint(least_busy_device.configuration())
70 print("(with properties) ")
71 pprint.pprint(least_busy_device.properties())
72
73
74 # Compiling the job for the experimental backend
75 qobj = compile([qc1, qc2], backend=least_busy_device, shots=1024, max_credits=10)
76
77 # Running the job
78 sim_job = my_backend.run(qobj)
79
80 # Getting the result
81 sim_result=sim_job.result()
82
83 # Show the results
84 print("simulation: ", sim_result)
85 print(sim_result.get_counts(qc1))
86 print(sim_result.get_counts(qc2))
87
88 # Compile and run the Quantum Program on a real device backend
89 # See a list of available remote backends
90 try:
91 # Running the job.
92 exp_job = least_busy_device.run(qobj)
93
94 lapse = 0
95 interval = 10
96 while exp_job.status().name != 'DONE':
97 print('Status @ {} seconds'.format(interval * lapse))
98 print(exp_job.status())
99 time.sleep(interval)
100 lapse += 1
101 print(exp_job.status())
102
103 exp_result = exp_job.result()
104
105 # Show the results
106 print("experiment: ", exp_result)
107 print(exp_result.get_counts(qc1))
108 print(exp_result.get_counts(qc2))
109 except:
110 print("All devices are currently unavailable.")
111
112 except QiskitError as ex:
113 print('There was an error in the circuit!. Error = {}'.format(ex))
114
[end of examples/python/using_qiskit_terra_level_1.py]
[start of qiskit/transpiler/_parallel.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 # This file is part of QuTiP: Quantum Toolbox in Python.
9 #
10 # Copyright (c) 2011 and later, Paul D. Nation and Robert J. Johansson.
11 # All rights reserved.
12 #
13 # Redistribution and use in source and binary forms, with or without
14 # modification, are permitted provided that the following conditions are
15 # met:
16 #
17 # 1. Redistributions of source code must retain the above copyright notice,
18 # this list of conditions and the following disclaimer.
19 #
20 # 2. Redistributions in binary form must reproduce the above copyright
21 # notice, this list of conditions and the following disclaimer in the
22 # documentation and/or other materials provided with the distribution.
23 #
24 # 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names
25 # of its contributors may be used to endorse or promote products derived
26 # from this software without specific prior written permission.
27 #
28 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
29 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
30 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
31 # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
32 # HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
33 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
34 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
35 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
36 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
37 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
38 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
39 ###############################################################################
40
41 """
42 Routines for running Python functions in parallel using process pools
43 from the multiprocessing library.
44 """
45
46 import os
47 import platform
48 from multiprocessing import Pool
49 from qiskit.qiskiterror import QiskitError
50 from qiskit._util import local_hardware_info
51 from qiskit._pubsub import Publisher
52
53 # Number of local physical cpus
54 CPU_COUNT = local_hardware_info()['cpus']
55
56
57 def parallel_map(task, values, task_args=tuple(), task_kwargs={}, # pylint: disable=W0102
58 num_processes=CPU_COUNT):
59 """
60 Parallel execution of a mapping of `values` to the function `task`. This
61 is functionally equivalent to::
62
63 result = [task(value, *task_args, **task_kwargs) for value in values]
64
65 On Windows this function defaults to a serial implementation to avoid the
66 overhead from spawning processes in Windows.
67
68 Args:
69         task (func): Function that is to be called for each value in ``values``.
70 values (array_like): List or array of values for which the ``task``
71 function is to be evaluated.
72 task_args (list): Optional additional arguments to the ``task`` function.
73         task_kwargs (dict): Optional additional keyword arguments to the ``task`` function.
74 num_processes (int): Number of processes to spawn.
75
76 Returns:
77 result: The result list contains the value of
78 ``task(value, *task_args, **task_kwargs)`` for
79 each value in ``values``.
80
81 Raises:
82 QiskitError: If user interrupts via keyboard.
83
84 Events:
85 terra.transpiler.parallel.start: The collection of parallel tasks are about to start.
86 terra.transpiler.parallel.update: One of the parallel task has finished.
87 terra.transpiler.parallel.finish: All the parallel tasks have finished.
88 """
89 if len(values) == 1:
90 return [task(values[0], *task_args, **task_kwargs)]
91
92 Publisher().publish("terra.transpiler.parallel.start", len(values))
93 nfinished = [0]
94
95 def _callback(_):
96 nfinished[0] += 1
97 Publisher().publish("terra.transpiler.parallel.done", nfinished[0])
98
99 # Run in parallel if not Win and not in parallel already
100 if platform.system() != 'Windows' and num_processes > 1 \
101 and os.getenv('QISKIT_IN_PARALLEL') == 'FALSE':
102 os.environ['QISKIT_IN_PARALLEL'] = 'TRUE'
103 try:
104 pool = Pool(processes=num_processes)
105
106 async_res = [pool.apply_async(task, (value,) + task_args, task_kwargs,
107 _callback) for value in values]
108
109 while not all([item.ready() for item in async_res]):
110 for item in async_res:
111 item.wait(timeout=0.1)
112
113 pool.terminate()
114 pool.join()
115
116 except KeyboardInterrupt:
117 pool.terminate()
118 pool.join()
119 Publisher().publish("terra.parallel.parallel.finish")
120 raise QiskitError('Keyboard interrupt in parallel_map.')
121
122 Publisher().publish("terra.transpiler.parallel.finish")
123 os.environ['QISKIT_IN_PARALLEL'] = 'FALSE'
124 return [ar.get() for ar in async_res]
125
126     # Cannot do parallel on Windows, if another parallel_map is running in parallel,
127 # or len(values) == 1.
128 results = []
129 for _, value in enumerate(values):
130 result = task(value, *task_args, **task_kwargs)
131 results.append(result)
132 _callback(0)
133 Publisher().publish("terra.transpiler.parallel.finish")
134 return results
135
[end of qiskit/transpiler/_parallel.py]
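For reference, the `parallel_map` docstring above boils down to the usage pattern below. This is only a sketch: the `square` task and its values are invented, and it should be run as a script so the task function can be pickled for the worker processes.

```python
# Minimal usage sketch of parallel_map from this module.
from qiskit.transpiler._parallel import parallel_map

def square(x, offset=0):
    return x * x + offset

if __name__ == "__main__":
    results = parallel_map(square, list(range(10)), task_kwargs={"offset": 1})
    print(results)  # same as [square(v, offset=1) for v in range(10)]
```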
[start of qiskit/transpiler/_transpiler.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Tools for compiling a batch of quantum circuits."""
9 import logging
10 import warnings
11 import numpy as np
12 import scipy.sparse as sp
13 import scipy.sparse.csgraph as cs
14
15 from qiskit.circuit import QuantumCircuit
16 from qiskit.circuit import QuantumRegister
17 from qiskit.mapper import (Coupling, optimize_1q_gates, swap_mapper,
18 cx_cancellation, direction_mapper,
19 remove_last_measurements, return_last_measurements)
20 from qiskit.converters import circuit_to_dag
21 from qiskit.converters import dag_to_circuit
22 from .passes.mapping.unroller import Unroller
23 from ._parallel import parallel_map
24 from ._transpilererror import TranspilerError
25
26 logger = logging.getLogger(__name__)
27
28
29 def transpile(circuits, backend=None, basis_gates=None, coupling_map=None,
30 initial_layout=None, seed_mapper=None, pass_manager=None):
31 """transpile one or more circuits.
32
33 Args:
34 circuits (QuantumCircuit or list[QuantumCircuit]): circuits to compile
35 backend (BaseBackend): a backend to compile for
36 basis_gates (str): comma-separated basis gate set to compile to
37 coupling_map (list): coupling map (perhaps custom) to target in mapping
38 initial_layout (list): initial layout of qubits in mapping
39 seed_mapper (int): random seed for the swap_mapper
40 pass_manager (PassManager): a pass_manager for the transpiler stages
41
42 Returns:
43 QuantumCircuit or list[QuantumCircuit]: transpiled circuit(s).
44
45 Raises:
46 TranspilerError: if args are not complete for the transpiler to function
47 """
48 return_form_is_single = False
49 if isinstance(circuits, QuantumCircuit):
50 circuits = [circuits]
51 return_form_is_single = True
52
53 # Check for valid parameters for the experiments.
54 basis_gates = basis_gates or ','.join(backend.configuration().basis_gates)
55 coupling_map = coupling_map or getattr(backend.configuration(),
56 'coupling_map', None)
57
58 if not basis_gates:
59 raise TranspilerError('no basis_gates or backend to compile to')
60
61 circuits = parallel_map(_transpilation, circuits,
62 task_kwargs={'backend': backend,
63 'basis_gates': basis_gates,
64 'coupling_map': coupling_map,
65 'initial_layout': initial_layout,
66 'seed_mapper': seed_mapper,
67 'pass_manager': pass_manager})
68 if return_form_is_single:
69 return circuits[0]
70 return circuits
71
72
73 def _transpilation(circuit, backend=None, basis_gates=None, coupling_map=None,
74 initial_layout=None, seed_mapper=None,
75 pass_manager=None):
76 """Perform transpilation of a single circuit.
77
78 Args:
79 circuit (QuantumCircuit): A circuit to transpile.
80 backend (BaseBackend): a backend to compile for
81 basis_gates (str): comma-separated basis gate set to compile to
82 coupling_map (list): coupling map (perhaps custom) to target in mapping
83 initial_layout (list): initial layout of qubits in mapping
84 seed_mapper (int): random seed for the swap_mapper
85 pass_manager (PassManager): a pass_manager for the transpiler stage
86
87 Returns:
88 QuantumCircuit: A transpiled circuit.
89
90 Raises:
91 TranspilerError: if args are not complete for transpiler to function.
92 """
93 dag = circuit_to_dag(circuit)
94 if not backend and not initial_layout:
95 raise TranspilerError('initial layout not supplied, and cannot '
96 'be inferred from backend.')
97 if (initial_layout is None and not backend.configuration().simulator
98 and not _matches_coupling_map(dag, coupling_map)):
99 initial_layout = _pick_best_layout(dag, backend)
100
101 final_dag, final_layout = transpile_dag(dag, basis_gates=basis_gates,
102 coupling_map=coupling_map,
103 initial_layout=initial_layout,
104 get_layout=True, format='dag',
105 seed_mapper=seed_mapper,
106 pass_manager=pass_manager)
107 final_dag.layout = [[k, v]
108 for k, v in final_layout.items()] if final_layout else None
109
110 out_circuit = dag_to_circuit(final_dag)
111
112 return out_circuit
113
114
115 # pylint: disable=redefined-builtin
116 def transpile_dag(dag, basis_gates='u1,u2,u3,cx,id', coupling_map=None,
117 initial_layout=None, get_layout=False,
118 format='dag', seed_mapper=None, pass_manager=None):
119 """Transform a dag circuit into another dag circuit (transpile), through
120 consecutive passes on the dag.
121
122 Args:
123 dag (DAGCircuit): dag circuit to transform via transpilation
124 basis_gates (str): a comma separated string for the target basis gates
125 coupling_map (list): A graph of coupling::
126
127 [
128 [control0(int), target0(int)],
129 [control1(int), target1(int)],
130 ]
131
132             eg. [[0, 2], [1, 2], [1, 3], [3, 4]]
133
134 initial_layout (dict): A mapping of qubit to qubit::
135
136 {
137 ("q", start(int)): ("q", final(int)),
138 ...
139 }
140 eg.
141 {
142 ("q", 0): ("q", 0),
143 ("q", 1): ("q", 1),
144 ("q", 2): ("q", 2),
145 ("q", 3): ("q", 3)
146 }
147 get_layout (bool): flag for returning the final layout after mapping
148 format (str): DEPRECATED The target format of the compilation: {'dag', 'json', 'qasm'}
149 seed_mapper (int): random seed_mapper for the swap mapper
150 pass_manager (PassManager): pass manager instance for the transpilation process
151 If None, a default set of passes are run.
152 Otherwise, the passes defined in it will run.
153 If contains no passes in it, no dag transformations occur.
154
155 Returns:
156 DAGCircuit: transformed dag
157 DAGCircuit, dict: transformed dag along with the final layout on backend qubits
158 """
159 # TODO: `basis_gates` will be removed after we have the unroller pass.
160 # TODO: `coupling_map`, `initial_layout`, `get_layout`, `seed_mapper` removed after mapper pass.
161
162 # TODO: move this to the mapper pass
163 num_qubits = sum([qreg.size for qreg in dag.qregs.values()])
164 if num_qubits == 1 or coupling_map == "all-to-all":
165 coupling_map = None
166
167 final_layout = None
168
169 if pass_manager:
170 # run the passes specified by the pass manager
171 # TODO return the property set too. See #1086
172 dag = pass_manager.run_passes(dag)
173 else:
174 # default set of passes
175 # TODO: move each step here to a pass, and use a default passmanager below
176 basis = basis_gates.split(',') if basis_gates else []
177 dag = Unroller(basis).run(dag)
178 # if a coupling map is given compile to the map
179 if coupling_map:
180 logger.info("pre-mapping properties: %s",
181 dag.properties())
182 # Insert swap gates
183 coupling = Coupling(couplinglist=coupling_map)
184 removed_meas = remove_last_measurements(dag)
185 logger.info("measurements moved: %s", removed_meas)
186 logger.info("initial layout: %s", initial_layout)
187 dag, final_layout, last_layout = swap_mapper(
188 dag, coupling, initial_layout, trials=20, seed=seed_mapper)
189 logger.info("final layout: %s", final_layout)
190 # Expand swaps
191 dag = Unroller(basis).run(dag)
192 # Change cx directions
193 dag = direction_mapper(dag, coupling)
194 # Simplify cx gates
195 cx_cancellation(dag)
196 # Simplify single qubit gates
197 dag = optimize_1q_gates(dag)
198 return_last_measurements(dag, removed_meas,
199 last_layout)
200 logger.info("post-mapping properties: %s",
201 dag.properties())
202
203 if format != 'dag':
204 warnings.warn("transpiler no longer supports different formats. "
205 "only dag to dag transformations are supported.",
206 DeprecationWarning)
207
208 if get_layout:
209 return dag, final_layout
210 return dag
211
212
213 def _best_subset(backend, n_qubits):
214 """Computes the qubit mapping with the best
215 connectivity.
216
217 Parameters:
218 backend (BaseBackend): A Qiskit backend instance.
219 n_qubits (int): Number of subset qubits to consider.
220
221 Returns:
222 ndarray: Array of qubits to use for best
223 connectivity mapping.
224
225 Raises:
226 TranspilerError: Wrong number of qubits given.
227 """
228 if n_qubits == 1:
229 return np.array([0])
230 elif n_qubits <= 0:
231 raise TranspilerError('Number of qubits <= 0.')
232
233 device_qubits = backend.configuration().n_qubits
234 if n_qubits > device_qubits:
235 raise TranspilerError('Number of qubits greater than device.')
236
237 cmap = np.asarray(getattr(backend.configuration(), 'coupling_map', None))
238 data = np.ones_like(cmap[:, 0])
239 sp_cmap = sp.coo_matrix((data, (cmap[:, 0], cmap[:, 1])),
240 shape=(device_qubits, device_qubits)).tocsr()
241 best = 0
242 best_map = None
243 # do bfs with each node as starting point
244 for k in range(sp_cmap.shape[0]):
245 bfs = cs.breadth_first_order(sp_cmap, i_start=k, directed=False,
246 return_predecessors=False)
247
248 connection_count = 0
249 for i in range(n_qubits):
250 node_idx = bfs[i]
251 for j in range(sp_cmap.indptr[node_idx],
252 sp_cmap.indptr[node_idx + 1]):
253 node = sp_cmap.indices[j]
254 for counter in range(n_qubits):
255 if node == bfs[counter]:
256 connection_count += 1
257 break
258
259 if connection_count > best:
260 best = connection_count
261 best_map = bfs[0:n_qubits]
262 return best_map
263
264
265 def _matches_coupling_map(dag, coupling_map):
266 """Iterate over circuit gates to check if all multi-qubit couplings
267 match the qubit coupling graph in the backend.
268
269 Parameters:
270 dag (DAGCircuit): DAG representation of circuit.
271 coupling_map (list): Backend coupling map, represented as an adjacency list.
272
273 Returns:
274 bool: True if all gates readily fit the backend coupling graph.
275 False if there's at least one gate that uses multiple qubits
276 which does not match the backend couplings.
277 """
278 match = True
279 for _, data in dag.multi_graph.nodes(data=True):
280 if data['type'] == 'op':
281 gate_map = [qr[1] for qr in data['qargs']]
282 if len(gate_map) > 1:
283 if gate_map not in coupling_map:
284 match = False
285 break
286 return match
287
288
289 def _pick_best_layout(dag, backend):
290 """Pick a convenient layout depending on the best matching qubit connectivity
291
292 Parameters:
293 dag (DAGCircuit): DAG representation of circuit.
294 backend (BaseBackend) : The backend with the coupling_map for searching
295
296 Returns:
297 dict: A special ordered initial_layout
298 """
299 num_qubits = sum([qreg.size for qreg in dag.qregs.values()])
300 best_sub = _best_subset(backend, num_qubits)
301 layout = {}
302 map_iter = 0
303 device_qubits = backend.configuration().n_qubits
304 q = QuantumRegister(device_qubits, 'q')
305 for qreg in dag.qregs.values():
306 for i in range(qreg.size):
307 layout[(qreg.name, i)] = (q, int(best_sub[map_iter]))
308 map_iter += 1
309 return layout
310
[end of qiskit/transpiler/_transpiler.py]
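The coupling check in `_matches_coupling_map` above reduces to a simple membership test: every multi-qubit gate's qubit pair must already appear in the backend's adjacency list, otherwise swap mapping is required. A pure-Python sketch with an invented gate list:

```python
# Membership-style coupling check, mirroring _matches_coupling_map.
coupling_map = [[0, 2], [1, 2], [1, 3], [3, 4]]
two_qubit_gates = [[0, 2], [1, 3], [0, 1]]   # hypothetical circuit couplings

fits = all(pair in coupling_map for pair in two_qubit_gates)
print(fits)  # False -> [0, 1] is not a backend coupling, so mapping is needed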
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: Qiskit/qiskit
base_commit: 2bcd9531d434768f90afc18ccc37e5889e74f272
problem_statement:
Progressbars not spawned unless execute or compile are called first
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Terra version**: master
- **Python version**: 3.7
- **Operating system**: OSX
### What is the current behavior?
The following no longer spawns a progress bar:
```
HTMLProgressBar()
ans = parallel_map(func, list(range(100)))
```
unless `compile` or `execute` have been run first.
### Steps to reproduce the problem
uncomment the line below:
```
import time
from qiskit import *
from qiskit.transpiler._parallel import parallel_map
from qiskit.tools.jupyter import *
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c)
backend = Aer.get_backend('qasm_simulator')
#qobj = compile([qc], backend)
def func(t):
time.sleep(0.1)
return 0
TextProgressBar()
parallel_map(func, list(range(100)))
```
### What is the expected behavior?
You should not have to call `execute` or `compile` before a progressbar can be displayed.
### Suggested solutions
created_at: 2018-12-06T09:45:44Z
patch:
<patch>
diff --git a/qiskit/tools/__init__.py b/qiskit/tools/__init__.py
--- a/qiskit/tools/__init__.py
+++ b/qiskit/tools/__init__.py
@@ -15,5 +15,7 @@
refer to the documentation of each component and use them separately.
"""
+from .parallel import parallel_map
from .compiler import (compile, execute)
from .monitor.job_monitor import job_monitor
+from .events.progressbar import TextProgressBar
diff --git a/qiskit/tools/events/__init__.py b/qiskit/tools/events/__init__.py
new file mode 100644
--- /dev/null
+++ b/qiskit/tools/events/__init__.py
@@ -0,0 +1,11 @@
+# -*- coding: utf-8 -*-
+
+# Copyright 2018, IBM.
+#
+# This source code is licensed under the Apache License, Version 2.0 found in
+# the LICENSE.txt file in the root directory of this source tree.
+
+"""Events
+"""
+
+from .progressbar import TextProgressBar
diff --git a/qiskit/_pubsub.py b/qiskit/tools/events/_pubsub.py
similarity index 99%
rename from qiskit/_pubsub.py
rename to qiskit/tools/events/_pubsub.py
--- a/qiskit/_pubsub.py
+++ b/qiskit/tools/events/_pubsub.py
@@ -9,7 +9,7 @@
Message broker for the Publisher / Subscriber mechanism
"""
-from .qiskiterror import QiskitError
+from qiskit.qiskiterror import QiskitError
class _Broker(object):
diff --git a/qiskit/transpiler/progressbar.py b/qiskit/tools/events/progressbar.py
similarity index 89%
rename from qiskit/transpiler/progressbar.py
rename to qiskit/tools/events/progressbar.py
--- a/qiskit/transpiler/progressbar.py
+++ b/qiskit/tools/events/progressbar.py
@@ -43,7 +43,7 @@
import time
import datetime
import sys
-from qiskit._pubsub import Subscriber
+from qiskit.tools.events._pubsub import Subscriber
class BaseProgressBar(Subscriber):
@@ -119,20 +119,20 @@ def _init_subscriber(self):
def _initialize_progress_bar(num_tasks):
""" """
self.start(num_tasks)
- self.subscribe("terra.transpiler.transpile_dag.start", _initialize_progress_bar)
+ self.subscribe("terra.parallel.start", _initialize_progress_bar)
def _update_progress_bar(progress):
""" """
self.update(progress)
- self.subscribe("terra.transpiler.transpile_dag.done", _update_progress_bar)
+ self.subscribe("terra.parallel.done", _update_progress_bar)
def _finish_progress_bar():
""" """
- self.unsubscribe("terra.transpiler.transpile_dag.start", _initialize_progress_bar)
- self.unsubscribe("terra.transpiler.transpile_dag.done", _update_progress_bar)
- self.unsubscribe("terra.transpiler.transpile_dag.finish", _finish_progress_bar)
+ self.unsubscribe("terra.parallel.start", _initialize_progress_bar)
+ self.unsubscribe("terra.parallel.done", _update_progress_bar)
+ self.unsubscribe("terra.parallel.finish", _finish_progress_bar)
self.finished()
- self.subscribe("terra.transpiler.transpile_dag.finish", _finish_progress_bar)
+ self.subscribe("terra.parallel.finish", _finish_progress_bar)
def start(self, iterations):
self.touched = True
diff --git a/qiskit/tools/jupyter/jupyter_magics.py b/qiskit/tools/jupyter/jupyter_magics.py
--- a/qiskit/tools/jupyter/jupyter_magics.py
+++ b/qiskit/tools/jupyter/jupyter_magics.py
@@ -14,7 +14,7 @@
from IPython.core.magic import cell_magic, Magics, magics_class # pylint: disable=import-error
import ipywidgets as widgets # pylint: disable=import-error
import qiskit
-from qiskit.transpiler.progressbar import TextProgressBar
+from qiskit.tools.events.progressbar import TextProgressBar
from .progressbar import HTMLProgressBar
diff --git a/qiskit/tools/jupyter/progressbar.py b/qiskit/tools/jupyter/progressbar.py
--- a/qiskit/tools/jupyter/progressbar.py
+++ b/qiskit/tools/jupyter/progressbar.py
@@ -41,71 +41,9 @@
"""Progress bars module"""
import time
-import datetime
-import sys
import ipywidgets as widgets # pylint: disable=import-error
from IPython.display import display # pylint: disable=import-error
-from qiskit._pubsub import Subscriber
-
-
-class BaseProgressBar(Subscriber):
- """An abstract progress bar with some shared functionality.
- """
- def __init__(self):
- super().__init__()
- self.type = 'progressbar'
- self.touched = False
- self.iter = None
- self.t_start = None
- self.t_done = None
-
- def start(self, iterations):
- """Start the progress bar.
-
- Parameters:
- iterations (int): Number of iterations.
- """
- self.touched = True
- self.iter = int(iterations)
- self.t_start = time.time()
-
- def update(self, n):
- """Update status of progress bar.
- """
- pass
-
- def time_elapsed(self):
- """Return the time elapsed since start.
-
- Returns:
- elapsed_time: Time since progress bar started.
- """
- return "%6.2fs" % (time.time() - self.t_start)
-
- def time_remaining_est(self, completed_iter):
- """Estimate the remaining time left.
-
- Parameters:
- completed_iter (int): Number of iterations completed.
-
- Returns:
- est_time: Estimated time remaining.
- """
- if completed_iter:
- t_r_est = (time.time() - self.t_start) / \
- completed_iter*(self.iter-completed_iter)
- else:
- t_r_est = 0
- date_time = datetime.datetime(1, 1, 1) + datetime.timedelta(seconds=t_r_est)
- time_string = "%02d:%02d:%02d:%02d" % \
- (date_time.day - 1, date_time.hour, date_time.minute, date_time.second)
-
- return time_string
-
- def finished(self):
- """Run when progress bar has completed.
- """
- pass
+from qiskit.tools.events.progressbar import BaseProgressBar
class HTMLProgressBar(BaseProgressBar):
@@ -128,7 +66,7 @@ def _initialize_progress_bar(num_tasks):
num_tasks: Number of compilation tasks the progress bar will track
"""
self.start(num_tasks)
- self.subscribe("terra.transpiler.transpile_dag.start", _initialize_progress_bar)
+ self.subscribe("terra.parallel.start", _initialize_progress_bar)
def _update_progress_bar(progress):
""" When an event of compilation completes, this function will be called, and
@@ -138,17 +76,17 @@ def _update_progress_bar(progress):
progress: Number of tasks completed
"""
self.update(progress)
- self.subscribe("terra.transpiler.transpile_dag.done", _update_progress_bar)
+ self.subscribe("terra.parallel.done", _update_progress_bar)
def _finish_progress_bar():
""" When an event of compilation finishes (meaning that there's no more circuits to
compile), this function will be called, unsubscribing from all events and
finishing the progress bar."""
- self.unsubscribe("terra.transpiler.transpile_dag.start", _initialize_progress_bar)
- self.unsubscribe("terra.transpiler.transpile_dag.done", _update_progress_bar)
- self.unsubscribe("terra.transpiler.transpile_dag.finish", _finish_progress_bar)
+ self.unsubscribe("terra.parallel.start", _initialize_progress_bar)
+ self.unsubscribe("terra.parallel.done", _update_progress_bar)
+ self.unsubscribe("terra.parallel.finish", _finish_progress_bar)
self.finished()
- self.subscribe("terra.transpiler.transpile_dag.finish", _finish_progress_bar)
+ self.subscribe("terra.parallel.finish", _finish_progress_bar)
def start(self, iterations):
self.touched = True
@@ -169,49 +107,3 @@ def finished(self):
self.t_done = time.time()
self.progress_bar.bar_style = 'success'
self.label.value = "Elapsed time: %s" % self.time_elapsed()
-
-
-class TextProgressBar(BaseProgressBar):
- """
- A simple text-based progress bar.
- """
-
- def __init__(self):
- super().__init__()
- self._init_subscriber()
-
- def _init_subscriber(self):
- def _initialize_progress_bar(num_tasks):
- """ """
- self.start(num_tasks)
- self.subscribe("terra.transpiler.transpile_dag.start", _initialize_progress_bar)
-
- def _update_progress_bar(progress):
- """ """
- self.update(progress)
- self.subscribe("terra.transpiler.transpile_dag.done", _update_progress_bar)
-
- def _finish_progress_bar():
- """ """
- self.unsubscribe("terra.transpiler.transpile_dag.start", _initialize_progress_bar)
- self.unsubscribe("terra.transpiler.transpile_dag.done", _update_progress_bar)
- self.unsubscribe("terra.transpiler.transpile_dag.finish", _finish_progress_bar)
- self.finished()
- self.subscribe("terra.transpiler.transpile_dag.finish", _finish_progress_bar)
-
- def start(self, iterations):
- self.touched = True
- self.iter = int(iterations)
- self.t_start = time.time()
- pbar = '-' * 50
- sys.stdout.write('\r|%s| %s%s%s [%s]' %
- (pbar, 0, '/', self.iter, ''))
-
- def update(self, n):
- filled_length = int(round(50 * n / self.iter))
- pbar = u'█' * filled_length + '-' * (50 - filled_length)
- time_left = self.time_remaining_est(n)
- sys.stdout.write('\r|%s| %s%s%s [%s]' % (pbar, n, '/', self.iter, time_left))
- if n == self.iter:
- sys.stdout.write('\n')
- sys.stdout.flush()
diff --git a/qiskit/tools/monitor/__init__.py b/qiskit/tools/monitor/__init__.py
--- a/qiskit/tools/monitor/__init__.py
+++ b/qiskit/tools/monitor/__init__.py
@@ -5,8 +5,6 @@
# This source code is licensed under the Apache License, Version 2.0 found in
# the LICENSE.txt file in the root directory of this source tree.
-# pylint: disable=redefined-builtin
-
"""A module for monitoring jobs, backends, etc.
"""
diff --git a/qiskit/transpiler/_parallel.py b/qiskit/tools/parallel.py
similarity index 88%
rename from qiskit/transpiler/_parallel.py
rename to qiskit/tools/parallel.py
--- a/qiskit/transpiler/_parallel.py
+++ b/qiskit/tools/parallel.py
@@ -48,7 +48,10 @@
from multiprocessing import Pool
from qiskit.qiskiterror import QiskitError
from qiskit._util import local_hardware_info
-from qiskit._pubsub import Publisher
+from qiskit.tools.events._pubsub import Publisher
+
+# Set parallel flag
+os.environ['QISKIT_IN_PARALLEL'] = 'FALSE'
# Number of local physical cpus
CPU_COUNT = local_hardware_info()['cpus']
@@ -82,19 +85,19 @@ def parallel_map(task, values, task_args=tuple(), task_kwargs={}, # pylint: dis
QiskitError: If user interrupts via keyboard.
Events:
- terra.transpiler.parallel.start: The collection of parallel tasks are about to start.
- terra.transpiler.parallel.update: One of the parallel task has finished.
- terra.transpiler.parallel.finish: All the parallel tasks have finished.
+ terra.parallel.start: The collection of parallel tasks are about to start.
+ terra.parallel.update: One of the parallel task has finished.
+ terra.parallel.finish: All the parallel tasks have finished.
"""
if len(values) == 1:
return [task(values[0], *task_args, **task_kwargs)]
- Publisher().publish("terra.transpiler.parallel.start", len(values))
+ Publisher().publish("terra.parallel.start", len(values))
nfinished = [0]
def _callback(_):
nfinished[0] += 1
- Publisher().publish("terra.transpiler.parallel.done", nfinished[0])
+ Publisher().publish("terra.parallel.done", nfinished[0])
# Run in parallel if not Win and not in parallel already
if platform.system() != 'Windows' and num_processes > 1 \
@@ -116,10 +119,10 @@ def _callback(_):
except KeyboardInterrupt:
pool.terminate()
pool.join()
- Publisher().publish("terra.parallel.parallel.finish")
+ Publisher().publish("terra.parallel.finish")
raise QiskitError('Keyboard interrupt in parallel_map.')
- Publisher().publish("terra.transpiler.parallel.finish")
+ Publisher().publish("terra.parallel.finish")
os.environ['QISKIT_IN_PARALLEL'] = 'FALSE'
return [ar.get() for ar in async_res]
@@ -130,5 +133,5 @@ def _callback(_):
result = task(value, *task_args, **task_kwargs)
results.append(result)
_callback(0)
- Publisher().publish("terra.transpiler.parallel.finish")
+ Publisher().publish("terra.parallel.finish")
return results
diff --git a/qiskit/transpiler/__init__.py b/qiskit/transpiler/__init__.py
--- a/qiskit/transpiler/__init__.py
+++ b/qiskit/transpiler/__init__.py
@@ -13,7 +13,3 @@
from ._fencedobjs import FencedDAGCircuit, FencedPropertySet
from ._basepasses import AnalysisPass, TransformationPass
from ._transpiler import transpile, transpile_dag
-from ._parallel import parallel_map
-
-# Set parallel environmental variable
-os.environ['QISKIT_IN_PARALLEL'] = 'FALSE'
diff --git a/qiskit/transpiler/_transpiler.py b/qiskit/transpiler/_transpiler.py
--- a/qiskit/transpiler/_transpiler.py
+++ b/qiskit/transpiler/_transpiler.py
@@ -17,10 +17,10 @@
from qiskit.mapper import (Coupling, optimize_1q_gates, swap_mapper,
cx_cancellation, direction_mapper,
remove_last_measurements, return_last_measurements)
+from qiskit.tools.parallel import parallel_map
from qiskit.converters import circuit_to_dag
from qiskit.converters import dag_to_circuit
from .passes.mapping.unroller import Unroller
-from ._parallel import parallel_map
from ._transpilererror import TranspilerError
logger = logging.getLogger(__name__)
</patch>
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: pandas-dev__pandas-7842
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
itertuples does not work with categorical column in dataframe
Code Snippet
``` python
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
df['grade'] = pd.Categorical(df['raw_grade'])
for t in df.itertuples(index=False):
print(t)
```
Error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-45-182b29164b1a> in <module>()
1 df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
2 df['grade'] = pd.Categorical(df['raw_grade'])
----> 3 for t in df.itertuples(index=False):
4 print(t)
/home/has2k1/scm/pandas/pandas/core/frame.pyc in itertuples(self, index)
549 # use integer indexing because of possible duplicate column names
550 arrays.extend(self.iloc[:, k] for k in range(len(self.columns)))
--> 551 return zip(*arrays)
552
553 if compat.PY3: # pragma: no cover
TypeError: izip argument #3 must support iteration
```
This is on master at commit 24b309f.
Edit:
Version string - pandas: 0.14.1-78-g24b309f
</issue>
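The traceback comes from `izip` inside `itertuples`, but the mechanism is generic: `zip` asks each argument for an iterator and fails on the first argument that cannot provide one, which is what happens with the `Categorical`-backed column at this commit. A minimal, pandas-free illustration (the class here is made up):

```python
# zip() requires every argument to support iteration; an object with neither
# __iter__ nor __getitem__ raises the same kind of TypeError as in the issue.
class NotIterable(object):
    pass

try:
    zip([1, 2], ["a", "b"], NotIterable())
except TypeError as exc:
    print(exc)  # e.g. "zip argument #3 must support iteration"
```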
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.6.1 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](http://code.google.com/p/numexpr/)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
118 writing .xlsx files
119 - xlrd >= 0.9.0
120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
121 - Alternative Excel writer.
122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
123 - Needed for `pandas.io.gbq`
124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
125 - One of the following combinations of libraries is needed to use the
126 top-level [`pandas.read_html`][read-html-docs] function:
127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
128 recent version of [html5lib][html5lib] is okay.)
129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
132 for reasons as to why you should probably **not** take this approach.
133
134 #### Notes about HTML parsing libraries
135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
136 either [lxml][lxml] or [html5lib][html5lib] or both.
137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
138 installed.
139 - You are strongly encouraged to read [HTML reading
140 gotchas][html-gotchas]. It explains issues surrounding the
141 installation and usage of the above three libraries.
142 - You may need to install an older version of
143 [BeautifulSoup4][BeautifulSoup4]:
144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
145 32-bit Ubuntu/Debian
146 - Additionally, if you're using [Anaconda][Anaconda] you should
147 definitely read [the gotchas about HTML parsing][html-gotchas]
148 libraries
149 - If you're on a system with `apt-get` you can do
150
151 ```sh
152 sudo apt-get build-dep python-lxml
153 ```
154
155 to get the necessary dependencies for installation of [lxml][lxml].
156 This will prevent further headaches down the line.
157
158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
160 [lxml]: http://lxml.de
161 [Anaconda]: https://store.continuum.io/cshop/anaconda
162 [NumPy]: http://numpy.scipy.org/
163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
165
166 ## Installation from sources
167 To install pandas from source you need Cython in addition to the normal
168 dependencies above. Cython can be installed from pypi:
169
170 ```sh
171 pip install cython
172 ```
173
174 In the `pandas` directory (same one where you found this file after
175 cloning the git repo), execute:
176
177 ```sh
178 python setup.py install
179 ```
180
181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
182
183 ```sh
184 python setup.py develop
185 ```
186
187 Alternatively, you can use `pip` if you want all the dependencies pulled
188 in automatically (the `-e` option is for installing it in [development
189 mode](http://www.pip-installer.org/en/latest/usage.html)):
190
191 ```sh
192 pip install -e .
193 ```
194
195 On Windows, you will need to install MinGW and execute:
196
197 ```sh
198 python setup.py build --compiler=mingw32
199 python setup.py install
200 ```
201
202 See http://pandas.pydata.org/ for more information.
203
204 ## License
205 BSD
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
209
210 The Sphinx documentation should provide a good starting point for learning how
211 to use the library. Expect the docs to continue to expand as time goes on.
212
213 ## Background
214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
215 has been under active development since then.
216
217 ## Discussion and Development
218 Since pandas development is related to a number of other scientific
219 Python projects, questions are welcome on the scipy-user mailing
220 list. Specialized discussions or design issues should take place on
221 the pystatsmodels mailing list / Google group, where
222 ``scikits.statsmodels`` and other libraries will also be discussed:
223
224 http://groups.google.com/group/pystatsmodels
225
[end of README.md]
[start of pandas/compat/__init__.py]
1 """
2 compat
3 ======
4
5 Cross-compatible functions for Python 2 and 3.
6
7 Key items to import for 2/3 compatible code:
8 * iterators: range(), map(), zip(), filter(), reduce()
9 * lists: lrange(), lmap(), lzip(), lfilter()
10 * unicode: u() [u"" is a syntax error in Python 3.0-3.2]
11 * longs: long (int in Python 3)
12 * callable
13 * iterable method compatibility: iteritems, iterkeys, itervalues
14 * Uses the original method if available, otherwise uses items, keys, values.
15 * types:
16 * text_type: unicode in Python 2, str in Python 3
17     * binary_type: str in Python 2, bytes in Python 3
18 * string_types: basestring in Python 2, str in Python 3
19 * bind_method: binds functions to classes
20 * add_metaclass(metaclass) - class decorator that recreates class with with the
21 given metaclass instead (and avoids intermediary class creation)
22
23 Python 2.6 compatibility:
24 * OrderedDict
25 * Counter
26
27 Other items:
28 * OrderedDefaultDict
29 """
30 # pylint disable=W0611
31 import functools
32 import itertools
33 from distutils.version import LooseVersion
34 from itertools import product
35 import sys
36 import types
37
38 PY3 = (sys.version_info[0] >= 3)
39 PY3_2 = sys.version_info[:2] == (3, 2)
40
41 try:
42 import __builtin__ as builtins
43 # not writeable when instantiated with string, doesn't handle unicode well
44 from cStringIO import StringIO as cStringIO
45 # always writeable
46 from StringIO import StringIO
47 BytesIO = StringIO
48 import cPickle
49 import httplib
50 except ImportError:
51 import builtins
52 from io import StringIO, BytesIO
53 cStringIO = StringIO
54 import pickle as cPickle
55 import http.client as httplib
56
57 from pandas.compat.chainmap import DeepChainMap
58
59
60 if PY3:
61 def isidentifier(s):
62 return s.isidentifier()
63
64 def str_to_bytes(s, encoding=None):
65 return s.encode(encoding or 'ascii')
66
67 def bytes_to_str(b, encoding=None):
68 return b.decode(encoding or 'utf-8')
69
70 # have to explicitly put builtins into the namespace
71 range = range
72 map = map
73 zip = zip
74 filter = filter
75 reduce = functools.reduce
76 long = int
77 unichr = chr
78
79 # list-producing versions of the major Python iterating functions
80 def lrange(*args, **kwargs):
81 return list(range(*args, **kwargs))
82
83 def lzip(*args, **kwargs):
84 return list(zip(*args, **kwargs))
85
86 def lmap(*args, **kwargs):
87 return list(map(*args, **kwargs))
88
89 def lfilter(*args, **kwargs):
90 return list(filter(*args, **kwargs))
91 else:
92 # Python 2
93 import re
94 _name_re = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*$")
95
96 def isidentifier(s, dotted=False):
97 return bool(_name_re.match(s))
98
99 def str_to_bytes(s, encoding='ascii'):
100 return s
101
102 def bytes_to_str(b, encoding='ascii'):
103 return b
104
105 # import iterator versions of these functions
106 range = xrange
107 zip = itertools.izip
108 filter = itertools.ifilter
109 map = itertools.imap
110 reduce = reduce
111 long = long
112 unichr = unichr
113
114 # Python 2-builtin ranges produce lists
115 lrange = builtins.range
116 lzip = builtins.zip
117 lmap = builtins.map
118 lfilter = builtins.filter
119
120
121 def iteritems(obj, **kwargs):
122 """replacement for six's iteritems for Python2/3 compat
123 uses 'iteritems' if available and otherwise uses 'items'.
124
125 Passes kwargs to method.
126 """
127 func = getattr(obj, "iteritems", None)
128 if not func:
129 func = obj.items
130 return func(**kwargs)
131
132
133 def iterkeys(obj, **kwargs):
134 func = getattr(obj, "iterkeys", None)
135 if not func:
136 func = obj.keys
137 return func(**kwargs)
138
139
140 def itervalues(obj, **kwargs):
141 func = getattr(obj, "itervalues", None)
142 if not func:
143 func = obj.values
144 return func(**kwargs)
145
146
147 def bind_method(cls, name, func):
148 """Bind a method to class, python 2 and python 3 compatible.
149
150 Parameters
151 ----------
152
153 cls : type
154 class to receive bound method
155 name : basestring
156 name of method on class instance
157 func : function
158 function to be bound as method
159
160
161 Returns
162 -------
163 None
164 """
165 # only python 2 has bound/unbound method issue
166 if not PY3:
167 setattr(cls, name, types.MethodType(func, None, cls))
168 else:
169 setattr(cls, name, func)
170 # ----------------------------------------------------------------------------
171 # functions largely based / taken from the six module
172
173 # Much of the code in this module comes from Benjamin Peterson's six library.
174 # The license for this library can be found in LICENSES/SIX and the code can be
175 # found at https://bitbucket.org/gutworth/six
176
177 if PY3:
178 string_types = str,
179 integer_types = int,
180 class_types = type,
181 text_type = str
182 binary_type = bytes
183
184 def u(s):
185 return s
186
187 def u_safe(s):
188 return s
189 else:
190 string_types = basestring,
191 integer_types = (int, long)
192 class_types = (type, types.ClassType)
193 text_type = unicode
194 binary_type = str
195
196 def u(s):
197 return unicode(s, "unicode_escape")
198
199 def u_safe(s):
200 try:
201 return unicode(s, "unicode_escape")
202 except:
203 return s
204
205
206 string_and_binary_types = string_types + (binary_type,)
207
208
209 try:
210 # callable reintroduced in later versions of Python
211 callable = callable
212 except NameError:
213 def callable(obj):
214 return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
215
216
217 def add_metaclass(metaclass):
218 """Class decorator for creating a class with a metaclass."""
219 def wrapper(cls):
220 orig_vars = cls.__dict__.copy()
221 orig_vars.pop('__dict__', None)
222 orig_vars.pop('__weakref__', None)
223 for slots_var in orig_vars.get('__slots__', ()):
224 orig_vars.pop(slots_var)
225 return metaclass(cls.__name__, cls.__bases__, orig_vars)
226 return wrapper
227
228
229 # ----------------------------------------------------------------------------
230 # Python 2.6 compatibility shims
231 #
232
233 # OrderedDict Shim from Raymond Hettinger, python core dev
234 # http://code.activestate.com/recipes/576693-ordered-dictionary-for-py24/
235 # here to support versions before 2.6
236 if not PY3:
237 # don't need this except in 2.6
238 try:
239 from thread import get_ident as _get_ident
240 except ImportError:
241 from dummy_thread import get_ident as _get_ident
242
243 try:
244 from _abcoll import KeysView, ValuesView, ItemsView
245 except ImportError:
246 pass
247
248
249 class _OrderedDict(dict):
250
251 """Dictionary that remembers insertion order"""
252 # An inherited dict maps keys to values.
253 # The inherited dict provides __getitem__, __len__, __contains__, and get.
254 # The remaining methods are order-aware.
255 # Big-O running times for all methods are the same as for regular
256 # dictionaries.
257
258 # The internal self.__map dictionary maps keys to links in a doubly linked
259 # list. The circular doubly linked list starts and ends with a sentinel
260 # element. The sentinel element never gets deleted (this simplifies the
261 # algorithm). Each link is stored as a list of length three: [PREV, NEXT,
262 # KEY].
263
264 def __init__(self, *args, **kwds):
265 """Initialize an ordered dictionary. Signature is the same as for
266 regular dictionaries, but keyword arguments are not recommended
267 because their insertion order is arbitrary.
268 """
269 if len(args) > 1:
270 raise TypeError('expected at most 1 arguments, got %d' % len(args))
271 try:
272 self.__root
273 except AttributeError:
274 self.__root = root = [] # sentinel node
275 root[:] = [root, root, None]
276 self.__map = {}
277 self.__update(*args, **kwds)
278
279 def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
280 """od.__setitem__(i, y) <==> od[i]=y"""
281 # Setting a new item creates a new link which goes at the end of the
282 # linked list, and the inherited dictionary is updated with the new
283 # key/value pair.
284 if key not in self:
285 root = self.__root
286 last = root[0]
287 last[1] = root[0] = self.__map[key] = [last, root, key]
288 dict_setitem(self, key, value)
289
290 def __delitem__(self, key, dict_delitem=dict.__delitem__):
291 """od.__delitem__(y) <==> del od[y]"""
292 # Deleting an existing item uses self.__map to find the link which is
293 # then removed by updating the links in the predecessor and successor
294 # nodes.
295 dict_delitem(self, key)
296 link_prev, link_next, key = self.__map.pop(key)
297 link_prev[1] = link_next
298 link_next[0] = link_prev
299
300 def __iter__(self):
301 """od.__iter__() <==> iter(od)"""
302 root = self.__root
303 curr = root[1]
304 while curr is not root:
305 yield curr[2]
306 curr = curr[1]
307
308 def __reversed__(self):
309 """od.__reversed__() <==> reversed(od)"""
310 root = self.__root
311 curr = root[0]
312 while curr is not root:
313 yield curr[2]
314 curr = curr[0]
315
316 def clear(self):
317 """od.clear() -> None. Remove all items from od."""
318 try:
319 for node in itervalues(self.__map):
320 del node[:]
321 root = self.__root
322 root[:] = [root, root, None]
323 self.__map.clear()
324 except AttributeError:
325 pass
326 dict.clear(self)
327
328 def popitem(self, last=True):
329 """od.popitem() -> (k, v), return and remove a (key, value) pair.
330
331 Pairs are returned in LIFO order if last is true or FIFO order if
332 false.
333 """
334 if not self:
335 raise KeyError('dictionary is empty')
336 root = self.__root
337 if last:
338 link = root[0]
339 link_prev = link[0]
340 link_prev[1] = root
341 root[0] = link_prev
342 else:
343 link = root[1]
344 link_next = link[1]
345 root[1] = link_next
346 link_next[0] = root
347 key = link[2]
348 del self.__map[key]
349 value = dict.pop(self, key)
350 return key, value
351
352 # -- the following methods do not depend on the internal structure --
353
354 def keys(self):
355 """od.keys() -> list of keys in od"""
356 return list(self)
357
358 def values(self):
359 """od.values() -> list of values in od"""
360 return [self[key] for key in self]
361
362 def items(self):
363 """od.items() -> list of (key, value) pairs in od"""
364 return [(key, self[key]) for key in self]
365
366 def iterkeys(self):
367 """od.iterkeys() -> an iterator over the keys in od"""
368 return iter(self)
369
370 def itervalues(self):
371 """od.itervalues -> an iterator over the values in od"""
372 for k in self:
373 yield self[k]
374
375 def iteritems(self):
376 """od.iteritems -> an iterator over the (key, value) items in od"""
377 for k in self:
378 yield (k, self[k])
379
380 def update(*args, **kwds):
381 """od.update(E, **F) -> None. Update od from dict/iterable E and F.
382
383 If E is a dict instance, does: for k in E: od[k] = E[k]
384 If E has a .keys() method, does: for k in E.keys(): od[k] = E[k]
385 Or if E is an iterable of items, does:for k, v in E: od[k] = v
386 In either case, this is followed by: for k, v in F.items(): od[k] = v
387 """
388 if len(args) > 2:
389 raise TypeError('update() takes at most 2 positional '
390 'arguments (%d given)' % (len(args),))
391 elif not args:
392 raise TypeError('update() takes at least 1 argument (0 given)')
393 self = args[0]
394 # Make progressively weaker assumptions about "other"
395 other = ()
396 if len(args) == 2:
397 other = args[1]
398 if isinstance(other, dict):
399 for key in other:
400 self[key] = other[key]
401 elif hasattr(other, 'keys'):
402 for key in other.keys():
403 self[key] = other[key]
404 else:
405 for key, value in other:
406 self[key] = value
407 for key, value in kwds.items():
408 self[key] = value
409 # let subclasses override update without breaking __init__
410 __update = update
411
412 __marker = object()
413
414 def pop(self, key, default=__marker):
415 """od.pop(k[,d]) -> v, remove specified key and return the
416 corresponding value. If key is not found, d is returned if given,
417 otherwise KeyError is raised.
418 """
419 if key in self:
420 result = self[key]
421 del self[key]
422 return result
423 if default is self.__marker:
424 raise KeyError(key)
425 return default
426
427 def setdefault(self, key, default=None):
428 """od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od
429 """
430 if key in self:
431 return self[key]
432 self[key] = default
433 return default
434
435 def __repr__(self, _repr_running={}):
436 """od.__repr__() <==> repr(od)"""
437 call_key = id(self), _get_ident()
438 if call_key in _repr_running:
439 return '...'
440 _repr_running[call_key] = 1
441 try:
442 if not self:
443 return '%s()' % (self.__class__.__name__,)
444 return '%s(%r)' % (self.__class__.__name__, list(self.items()))
445 finally:
446 del _repr_running[call_key]
447
448 def __reduce__(self):
449 """Return state information for pickling"""
450 items = [[k, self[k]] for k in self]
451 inst_dict = vars(self).copy()
452 for k in vars(OrderedDict()):
453 inst_dict.pop(k, None)
454 if inst_dict:
455 return (self.__class__, (items,), inst_dict)
456 return self.__class__, (items,)
457
458 def copy(self):
459 """od.copy() -> a shallow copy of od"""
460 return self.__class__(self)
461
462 @classmethod
463 def fromkeys(cls, iterable, value=None):
464 """OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and
465 values equal to v (which defaults to None).
466 """
467 d = cls()
468 for key in iterable:
469 d[key] = value
470 return d
471
472 def __eq__(self, other):
473 """od.__eq__(y) <==> od==y. Comparison to another OD is
474 order-sensitive while comparison to a regular mapping is
475 order-insensitive.
476 """
477 if isinstance(other, OrderedDict):
478 return (len(self) == len(other) and
479 list(self.items()) == list(other.items()))
480 return dict.__eq__(self, other)
481
482 def __ne__(self, other):
483 return not self == other
484
485 # -- the following methods are only used in Python 2.7 --
486
487 def viewkeys(self):
488 """od.viewkeys() -> a set-like object providing a view on od's keys"""
489 return KeysView(self)
490
491 def viewvalues(self):
492 """od.viewvalues() -> an object providing a view on od's values"""
493 return ValuesView(self)
494
495 def viewitems(self):
496 """od.viewitems() -> a set-like object providing a view on od's items
497 """
498 return ItemsView(self)
499
500
501 # {{{ http://code.activestate.com/recipes/576611/ (r11)
502
503 try:
504 from operator import itemgetter
505 from heapq import nlargest
506 except ImportError:
507 pass
508
509
510 class _Counter(dict):
511
512 """Dict subclass for counting hashable objects. Sometimes called a bag
513 or multiset. Elements are stored as dictionary keys and their counts
514 are stored as dictionary values.
515
516 >>> Counter('zyzygy')
517 Counter({'y': 3, 'z': 2, 'g': 1})
518
519 """
520
521 def __init__(self, iterable=None, **kwds):
522 """Create a new, empty Counter object. And if given, count elements
523 from an input iterable. Or, initialize the count from another mapping
524 of elements to their counts.
525
526 >>> c = Counter() # a new, empty counter
527 >>> c = Counter('gallahad') # a new counter from an iterable
528 >>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping
529 >>> c = Counter(a=4, b=2) # a new counter from keyword args
530
531 """
532 self.update(iterable, **kwds)
533
534 def __missing__(self, key):
535 return 0
536
537 def most_common(self, n=None):
538 """List the n most common elements and their counts from the most
539 common to the least. If n is None, then list all element counts.
540
541 >>> Counter('abracadabra').most_common(3)
542 [('a', 5), ('r', 2), ('b', 2)]
543
544 """
545 if n is None:
546 return sorted(iteritems(self), key=itemgetter(1), reverse=True)
547 return nlargest(n, iteritems(self), key=itemgetter(1))
548
549 def elements(self):
550 """Iterator over elements repeating each as many times as its count.
551
552 >>> c = Counter('ABCABC')
553 >>> sorted(c.elements())
554 ['A', 'A', 'B', 'B', 'C', 'C']
555
556 If an element's count has been set to zero or is a negative number,
557 elements() will ignore it.
558
559 """
560 for elem, count in iteritems(self):
561 for _ in range(count):
562 yield elem
563
564 # Override dict methods where the meaning changes for Counter objects.
565
566 @classmethod
567 def fromkeys(cls, iterable, v=None):
568 raise NotImplementedError(
569 'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
570
571 def update(self, iterable=None, **kwds):
572 """Like dict.update() but add counts instead of replacing them.
573
574 Source can be an iterable, a dictionary, or another Counter instance.
575
576 >>> c = Counter('which')
577 >>> c.update('witch') # add elements from another iterable
578 >>> d = Counter('watch')
579 >>> c.update(d) # add elements from another counter
580 >>> c['h'] # four 'h' in which, witch, and watch
581 4
582
583 """
584 if iterable is not None:
585 if hasattr(iterable, 'iteritems'):
586 if self:
587 self_get = self.get
588 for elem, count in iteritems(iterable):
589 self[elem] = self_get(elem, 0) + count
590 else:
591 dict.update(
592 self, iterable) # fast path when counter is empty
593 else:
594 self_get = self.get
595 for elem in iterable:
596 self[elem] = self_get(elem, 0) + 1
597 if kwds:
598 self.update(kwds)
599
600 def copy(self):
601 """Like dict.copy() but returns a Counter instance instead of a dict.
602 """
603 return Counter(self)
604
605 def __delitem__(self, elem):
606 """Like dict.__delitem__() but does not raise KeyError for missing
607 values.
608 """
609 if elem in self:
610 dict.__delitem__(self, elem)
611
612 def __repr__(self):
613 if not self:
614 return '%s()' % self.__class__.__name__
615 items = ', '.join(map('%r: %r'.__mod__, self.most_common()))
616 return '%s({%s})' % (self.__class__.__name__, items)
617
618 # Multiset-style mathematical operations discussed in:
619 # Knuth TAOCP Volume II section 4.6.3 exercise 19
620 # and at http://en.wikipedia.org/wiki/Multiset
621 #
622 # Outputs guaranteed to only include positive counts.
623 #
624 # To strip negative and zero counts, add-in an empty counter:
625 # c += Counter()
626
627 def __add__(self, other):
628 """Add counts from two counters.
629
630 >>> Counter('abbb') + Counter('bcc')
631 Counter({'b': 4, 'c': 2, 'a': 1})
632
633 """
634 if not isinstance(other, Counter):
635 return NotImplemented
636 result = Counter()
637 for elem in set(self) | set(other):
638 newcount = self[elem] + other[elem]
639 if newcount > 0:
640 result[elem] = newcount
641 return result
642
643 def __sub__(self, other):
644 """Subtract count, but keep only results with positive counts.
645
646 >>> Counter('abbbc') - Counter('bccd')
647 Counter({'b': 2, 'a': 1})
648
649 """
650 if not isinstance(other, Counter):
651 return NotImplemented
652 result = Counter()
653 for elem in set(self) | set(other):
654 newcount = self[elem] - other[elem]
655 if newcount > 0:
656 result[elem] = newcount
657 return result
658
659 def __or__(self, other):
660 """Union is the maximum of value in either of the input counters.
661
662 >>> Counter('abbb') | Counter('bcc')
663 Counter({'b': 3, 'c': 2, 'a': 1})
664
665 """
666 if not isinstance(other, Counter):
667 return NotImplemented
668 _max = max
669 result = Counter()
670 for elem in set(self) | set(other):
671 newcount = _max(self[elem], other[elem])
672 if newcount > 0:
673 result[elem] = newcount
674 return result
675
676 def __and__(self, other):
677 """Intersection is the minimum of corresponding counts.
678
679 >>> Counter('abbb') & Counter('bcc')
680 Counter({'b': 1})
681
682 """
683 if not isinstance(other, Counter):
684 return NotImplemented
685 _min = min
686 result = Counter()
687 if len(self) < len(other):
688 self, other = other, self
689 for elem in filter(self.__contains__, other):
690 newcount = _min(self[elem], other[elem])
691 if newcount > 0:
692 result[elem] = newcount
693 return result
694
695 if sys.version_info[:2] < (2, 7):
696 OrderedDict = _OrderedDict
697 Counter = _Counter
698 else:
699 from collections import OrderedDict, Counter
700
701 if PY3:
702 def raise_with_traceback(exc, traceback=Ellipsis):
703 if traceback == Ellipsis:
704 _, _, traceback = sys.exc_info()
705 raise exc.with_traceback(traceback)
706 else:
707 # this version of raise is a syntax error in Python 3
708 exec("""
709 def raise_with_traceback(exc, traceback=Ellipsis):
710 if traceback == Ellipsis:
711 _, _, traceback = sys.exc_info()
712 raise exc, None, traceback
713 """)
714
715 raise_with_traceback.__doc__ = """Raise exception with existing traceback.
716 If traceback is not passed, uses sys.exc_info() to get traceback."""
717
718
719 # http://stackoverflow.com/questions/4126348
720 # Thanks to @martineau at SO
721
722 from dateutil import parser as _date_parser
723 import dateutil
724 if LooseVersion(dateutil.__version__) < '2.0':
725 @functools.wraps(_date_parser.parse)
726 def parse_date(timestr, *args, **kwargs):
727 timestr = bytes(timestr)
728 return _date_parser.parse(timestr, *args, **kwargs)
729 else:
730 parse_date = _date_parser.parse
731
732
733 class OrderedDefaultdict(OrderedDict):
734
735 def __init__(self, *args, **kwargs):
736 newdefault = None
737 newargs = ()
738 if args:
739 newdefault = args[0]
740 if not (newdefault is None or callable(newdefault)):
741 raise TypeError('first argument must be callable or None')
742 newargs = args[1:]
743 self.default_factory = newdefault
744 super(self.__class__, self).__init__(*newargs, **kwargs)
745
746 def __missing__(self, key):
747 if self.default_factory is None:
748 raise KeyError(key)
749 self[key] = value = self.default_factory()
750 return value
751
752 def __reduce__(self): # optional, for pickle support
753 args = self.default_factory if self.default_factory else tuple()
754 return type(self), args, None, None, list(self.items())
755
[end of pandas/compat/__init__.py]
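The helpers above are small Python 2/3 shims. As a quick illustration (a sketch added for clarity, not part of the original module; it assumes the module is importable as `pandas.compat`), they are used like this:

```python
# Illustrative only: exercise the iteration shims and add_metaclass defined above.
from pandas.compat import iterkeys, itervalues, add_metaclass

counts = {"a": 1, "b": 2}
for key in iterkeys(counts):        # dict.iterkeys() on Python 2, dict.keys() on Python 3
    print(key, counts[key])
print(sum(itervalues(counts)))      # 3


class Meta(type):
    """A trivial metaclass, used only to demonstrate add_metaclass."""


@add_metaclass(Meta)
class Tagged(object):
    pass


assert type(Tagged) is Meta         # the class was rebuilt with Meta on both Python versions
```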
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
a23c6c77c76fc23ff8c997dcdcff43366d15d01a
|
itertuples does not work with categorical column in dataframe
Code Snippet
``` python
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
df['grade'] = pd.Categorical(df['raw_grade'])
for t in df.itertuples(index=False):
print(t)
```
Error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-45-182b29164b1a> in <module>()
1 df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
2 df['grade'] = pd.Categorical(df['raw_grade'])
----> 3 for t in df.itertuples(index=False):
4 print(t)
/home/has2k1/scm/pandas/pandas/core/frame.pyc in itertuples(self, index)
549 # use integer indexing because of possible duplicate column names
550 arrays.extend(self.iloc[:, k] for k in range(len(self.columns)))
--> 551 return zip(*arrays)
552
553 if compat.PY3: # pragma: no cover
TypeError: izip argument #3 must support iteration
```
This is on master at commit 24b309f.
Edit:
Version string - pandas: 0.14.1-78-g24b309f
|
2014-07-25T22:41:05Z
|
<patch>
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -97,7 +97,7 @@ Categoricals in Series/DataFrame
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:class:`~pandas.Categorical` can now be included in `Series` and `DataFrames` and gained new
-methods to manipulate. Thanks to Jan Schultz for much of this API/implementation. (:issue:`3943`, :issue:`5313`, :issue:`5314`, :issue:`7444`).
+methods to manipulate. Thanks to Jan Schultz for much of this API/implementation. (:issue:`3943`, :issue:`5313`, :issue:`5314`, :issue:`7444`, :issue:`7839`).
For full docs, see the :ref:`Categorical introduction <categorical>` and the :ref:`API documentation <api.categorical>`.
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -973,7 +973,9 @@ def _get_repr(
return result
def __iter__(self):
- if np.issubdtype(self.dtype, np.datetime64):
+ if com.is_categorical_dtype(self.dtype):
+ return iter(self.values)
+ elif np.issubdtype(self.dtype, np.datetime64):
return (lib.Timestamp(x) for x in self.values)
else:
return iter(self.values)
</patch>
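For context, a minimal check (a sketch, assuming the patch above is applied to pandas) that the snippet from the issue now runs:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],
                   "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
df['grade'] = pd.Categorical(df['raw_grade'])

# Series.__iter__ now returns iter(self.values) for categorical dtypes,
# so zip(*arrays) inside DataFrame.itertuples no longer raises TypeError.
for t in df.itertuples(index=False):
    print(t)
```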
|
[]
|
[]
| ||||
Qiskit__qiskit-4558
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Transpile Performance Regression
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: >=0.14.0
- **Python version**: Any
- **Operating system**: Any
### What is the current behavior?
https://github.com/Qiskit/qiskit-terra/commit/529a22b6 made some changes to how we reorder bits in stochastic swap and basic swap. These changes slow down these passes: [about 2x slower for basic swap](https://qiskit.github.io/qiskit/#mapping_passes.PassBenchmarks.time_basic_swap?machine=dedicated-benchmarking-softlayer-baremetal&os=Linux%204.15.0-46-generic&ram=16GB&p-n_qubits=5&p-depth=1024&commits=529a22b6) and [10-30% slower for stochastic swap](https://qiskit.github.io/qiskit/#mapping_passes.PassBenchmarks.time_stochastic_swap?machine=qiskit-benchmarking&os=Ubuntu%2018.04&ram=16%20GB&p-n_qubits=5&p-depth=1024&commits=529a22b6).
You can also see the impact on full transpiles with the preset pass managers (which use stochastic swap) here: https://qiskit.github.io/qiskit/#transpiler_levels.TranspilerLevelBenchmarks.time_quantum_volume_transpile_50_x_20?machine=qiskit-benchmarking&os=Ubuntu%2018.04&ram=16%20GB&p-transpiler%20optimization%20level=1&commits=529a22b6
</issue>
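A rough way to reproduce the slowdown locally (a sketch under stated assumptions: a qiskit-terra install around this commit, and the benchmark parameters of 5 qubits and depth 1024 quoted above; this is not the actual ASV benchmark code):

```python
import time

from qiskit.circuit.random import random_circuit
from qiskit.transpiler import CouplingMap, PassManager
from qiskit.transpiler.passes import BasicSwap, StochasticSwap

coupling = CouplingMap.from_line(5)                      # 5-qubit linear coupling map
circ = random_circuit(5, 1024, max_operands=2, seed=0)   # depth-1024, 1- and 2-qubit gates only

for routing_pass in (BasicSwap(coupling),
                     StochasticSwap(coupling, trials=20, seed=0)):
    pm = PassManager(routing_pass)
    start = time.perf_counter()
    pm.run(circ)
    print(type(routing_pass).__name__, time.perf_counter() - start, "s")
```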
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = backend_sim.run(assemble(qc)).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/enQtODQ2NTIyOTgwMTQ3LTI0NzM2NzkzZjJhNDgzZjY5MTQzNDY3MGNiZGQzNTNkZTE4Nzg1MjMwMmFjY2UwZTgyNDlmYWQwYmZjMjE1ZTM)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## Changelog and Release Notes
113
114 The changelog for a particular release is dynamically generated and gets
115 written to the release page on Github for each release. For example, you can
116 find the page for the `0.9.0` release here:
117
118 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
119
120 The changelog for the current release can be found in the releases tab:
121 
122 The changelog provides a quick overview of notable changes for a given
123 release.
124
125 Additionally, as part of each release detailed release notes are written to
126 document in detail what has changed as part of a release. This includes any
127 documentation on potential breaking changes on upgrade and new features.
128 For example, you can find the release notes for the `0.9.0` release in the
129 Qiskit documentation here:
130
131 https://qiskit.org/documentation/release_notes.html#terra-0-9
132
133 ## License
134
135 [Apache License 2.0](LICENSE.txt)
136
[end of README.md]
[start of qiskit/compiler/transpile.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Circuit transpile function"""
16 import logging
17 from time import time
18 import warnings
19 from typing import List, Union, Dict, Callable, Any, Optional, Tuple
20 from qiskit.circuit.quantumcircuit import QuantumCircuit
21 from qiskit.providers import BaseBackend
22 from qiskit.providers.models import BackendProperties
23 from qiskit.transpiler import Layout, CouplingMap, PropertySet, PassManager
24 from qiskit.transpiler.basepasses import BasePass
25 from qiskit.dagcircuit import DAGCircuit
26 from qiskit.tools.parallel import parallel_map
27 from qiskit.transpiler.passmanager_config import PassManagerConfig
28 from qiskit.pulse import Schedule
29 from qiskit.circuit.quantumregister import Qubit
30 from qiskit import user_config
31 from qiskit.transpiler.exceptions import TranspilerError
32 from qiskit.converters import isinstanceint, isinstancelist
33 from qiskit.transpiler.passes.basis.ms_basis_decomposer import MSBasisDecomposer
34 from qiskit.transpiler.preset_passmanagers import (level_0_pass_manager,
35 level_1_pass_manager,
36 level_2_pass_manager,
37 level_3_pass_manager)
38
39 LOG = logging.getLogger(__name__)
40
41
42 def transpile(circuits: Union[QuantumCircuit, List[QuantumCircuit]],
43 backend: Optional[BaseBackend] = None,
44 basis_gates: Optional[List[str]] = None,
45 coupling_map: Optional[Union[CouplingMap, List[List[int]]]] = None,
46 backend_properties: Optional[BackendProperties] = None,
47 initial_layout: Optional[Union[Layout, Dict, List]] = None,
48 layout_method: Optional[str] = None,
49 routing_method: Optional[str] = None,
50 seed_transpiler: Optional[int] = None,
51 optimization_level: Optional[int] = None,
52 pass_manager: Optional[PassManager] = None,
53 callback: Optional[Callable[[BasePass, DAGCircuit, float,
54 PropertySet, int], Any]] = None,
55 output_name: Optional[Union[str, List[str]]] = None) -> Union[QuantumCircuit,
56 List[QuantumCircuit]]:
57 """Transpile one or more circuits, according to some desired transpilation targets.
58
59 All arguments may be given as either a singleton or list. In case of a list,
60 the length must be equal to the number of circuits being transpiled.
61
62 Transpilation is done in parallel using multiprocessing.
63
64 Args:
65 circuits: Circuit(s) to transpile
66 backend: If set, transpiler options are automatically grabbed from
67 ``backend.configuration()`` and ``backend.properties()``.
68 If any other option is explicitly set (e.g., ``coupling_map``), it
69 will override the backend's.
70
71 .. note::
72
73 The backend arg is purely for convenience. The resulting
74 circuit may be run on any backend as long as it is compatible.
75 basis_gates: List of basis gate names to unroll to
76 (e.g: ``['u1', 'u2', 'u3', 'cx']``). If ``None``, do not unroll.
77 coupling_map: Coupling map (perhaps custom) to target in mapping.
78 Multiple formats are supported:
79
80 #. ``CouplingMap`` instance
81 #. List, must be given as an adjacency matrix, where each entry
82 specifies all two-qubit interactions supported by backend,
83 e.g: ``[[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]``
84
85 backend_properties: properties returned by a backend, including information on gate
86 errors, readout errors, qubit coherence times, etc. Find a backend
87 that provides this information with: ``backend.properties()``
88 initial_layout: Initial position of virtual qubits on physical qubits.
89 If this layout makes the circuit compatible with the coupling_map
90 constraints, it will be used. The final layout is not guaranteed to be the same,
91 as the transpiler may permute qubits through swaps or other means.
92 Multiple formats are supported:
93
94 #. ``Layout`` instance
95 #. Dict
96 * virtual to physical::
97
98 {qr[0]: 0,
99 qr[1]: 3,
100 qr[2]: 5}
101
102 * physical to virtual::
103
104 {0: qr[0],
105 3: qr[1],
106 5: qr[2]}
107
108 #. List
109
110 * virtual to physical::
111
112 [0, 3, 5] # virtual qubits are ordered (in addition to named)
113
114 * physical to virtual::
115
116 [qr[0], None, None, qr[1], None, qr[2]]
117
118         layout_method: Name of layout selection pass ('trivial', 'dense', 'noise_adaptive').
119             If a perfect layout is already available, the layout_method pass
120             may not run.
121 routing_method: Name of routing pass ('basic', 'lookahead', 'stochastic')
122 seed_transpiler: Sets random seed for the stochastic parts of the transpiler
123 optimization_level: How much optimization to perform on the circuits.
124 Higher levels generate more optimized circuits,
125 at the expense of longer transpilation time.
126 * 0: no optimization
127 * 1: light optimization
128 * 2: heavy optimization
129 * 3: even heavier optimization
130 If ``None``, level 1 will be chosen as default.
131 pass_manager: The pass manager to use for a custom pipeline of transpiler passes.
132 If this arg is present, all other args will be ignored and the
133 pass manager will be used directly (Qiskit will not attempt to
134 auto-select a pass manager based on transpile options).
135 callback: A callback function that will be called after each
136 pass execution. The function will be called with 5 keyword
137 arguments,
138 | ``pass_``: the pass being run.
139 | ``dag``: the dag output of the pass.
140 | ``time``: the time to execute the pass.
141 | ``property_set``: the property set.
142 | ``count``: the index for the pass execution.
143 The exact arguments passed expose the internals of the pass manager,
144 and are subject to change as the pass manager internals change. If
145 you intend to reuse a callback function over multiple releases, be
146 sure to check that the arguments being passed are the same.
147 To use the callback feature, define a function that will
148 take in kwargs dict and access the variables. For example::
149
150 def callback_func(**kwargs):
151 pass_ = kwargs['pass_']
152 dag = kwargs['dag']
153 time = kwargs['time']
154 property_set = kwargs['property_set']
155 count = kwargs['count']
156 ...
157 transpile(circ, callback=callback_func)
158
159 output_name: A list with strings to identify the output circuits. The length of
160 the list should be exactly the length of the ``circuits`` parameter.
161
162 Returns:
163 The transpiled circuit(s).
164
165 Raises:
166 TranspilerError: in case of bad inputs to transpiler (like conflicting parameters)
167 or errors in passes
168 """
169 circuits = circuits if isinstance(circuits, list) else [circuits]
170
171 # transpiling schedules is not supported yet.
172 start_time = time()
173 if all(isinstance(c, Schedule) for c in circuits):
174 warnings.warn("Transpiling schedules is not supported yet.", UserWarning)
175 if len(circuits) == 1:
176 end_time = time()
177 _log_transpile_time(start_time, end_time)
178 return circuits[0]
179 end_time = time()
180 _log_transpile_time(start_time, end_time)
181 return circuits
182
183 if pass_manager is not None:
184 _check_conflicting_argument(optimization_level=optimization_level, basis_gates=basis_gates,
185 coupling_map=coupling_map, seed_transpiler=seed_transpiler,
186 backend_properties=backend_properties,
187 initial_layout=initial_layout, layout_method=layout_method,
188 routing_method=routing_method, backend=backend)
189
190 warnings.warn("The parameter pass_manager in transpile is being deprecated. "
191                       "The preferred way to transpile a circuit using a custom pass manager is"
192 " pass_manager.run(circuit)", DeprecationWarning, stacklevel=2)
193 return pass_manager.run(circuits, output_name=output_name, callback=callback)
194
195 if optimization_level is None:
196 # Take optimization level from the configuration or 1 as default.
197 config = user_config.get_config()
198 optimization_level = config.get('transpile_optimization_level', 1)
199
200 # Get transpile_args to configure the circuit transpilation job(s)
201 transpile_args = _parse_transpile_args(circuits, backend, basis_gates, coupling_map,
202 backend_properties, initial_layout,
203 layout_method, routing_method,
204 seed_transpiler, optimization_level,
205 callback, output_name)
206
207 _check_circuits_coupling_map(circuits, transpile_args, backend)
208
209 # Transpile circuits in parallel
210 circuits = parallel_map(_transpile_circuit, list(zip(circuits, transpile_args)))
211
212 if len(circuits) == 1:
213 end_time = time()
214 _log_transpile_time(start_time, end_time)
215 return circuits[0]
216 end_time = time()
217 _log_transpile_time(start_time, end_time)
218 return circuits
219
220
221 def _check_conflicting_argument(**kargs):
222 conflicting_args = [arg for arg, value in kargs.items() if value]
223 if conflicting_args:
224 raise TranspilerError("The parameters pass_manager conflicts with the following "
225 "parameter(s): {}.".format(', '.join(conflicting_args)))
226
227
228 def _check_circuits_coupling_map(circuits, transpile_args, backend):
229 # Check circuit width against number of qubits in coupling_map(s)
230 coupling_maps_list = list(config['pass_manager_config'].coupling_map for config in
231 transpile_args)
232 for circuit, parsed_coupling_map in zip(circuits, coupling_maps_list):
233 # If coupling_map is not None or num_qubits == 1
234 num_qubits = len(circuit.qubits)
235 max_qubits = None
236 if isinstance(parsed_coupling_map, CouplingMap):
237 max_qubits = parsed_coupling_map.size()
238
239 # If coupling_map is None, the limit might be in the backend (like in 1Q devices)
240 elif backend is not None and not backend.configuration().simulator:
241 max_qubits = backend.configuration().n_qubits
242
243 if max_qubits is not None and (num_qubits > max_qubits):
244 raise TranspilerError('Number of qubits ({}) '.format(num_qubits) +
245 'in {} '.format(circuit.name) +
246 'is greater than maximum ({}) '.format(max_qubits) +
247 'in the coupling_map')
248
249
250 def _log_transpile_time(start_time, end_time):
251 log_msg = "Total Transpile Time - %.5f (ms)" % ((end_time - start_time) * 1000)
252 LOG.info(log_msg)
253
254
255 def _transpile_circuit(circuit_config_tuple: Tuple[QuantumCircuit, Dict]) -> QuantumCircuit:
256 """Select a PassManager and run a single circuit through it.
257 Args:
258 circuit_config_tuple (tuple):
259 circuit (QuantumCircuit): circuit to transpile
260 transpile_config (dict): configuration dictating how to transpile. The
261 dictionary has the following format:
262 {'optimization_level': int,
263 'pass_manager': PassManager,
264 'output_name': string,
265 'callback': callable,
266 'pass_manager_config': PassManagerConfig}
267 Returns:
268 The transpiled circuit
269 Raises:
270 TranspilerError: if transpile_config is not valid or transpilation incurs error
271 """
272 circuit, transpile_config = circuit_config_tuple
273
274 pass_manager_config = transpile_config['pass_manager_config']
275
276 ms_basis_swap = None
277 if pass_manager_config.basis_gates is not None:
278 # Workaround for ion trap support: If basis gates includes
279 # Mølmer-Sørensen (rxx) and the circuit includes gates outside the basis,
280 # first unroll to u3, cx, then run MSBasisDecomposer to target basis.
281 basic_insts = ['measure', 'reset', 'barrier', 'snapshot']
282 device_insts = set(pass_manager_config.basis_gates).union(basic_insts)
283 if 'rxx' in pass_manager_config.basis_gates and \
284 not device_insts >= circuit.count_ops().keys():
285 ms_basis_swap = pass_manager_config.basis_gates
286 pass_manager_config.basis_gates = list(
287 set(['u3', 'cx']).union(pass_manager_config.basis_gates))
288
289 # we choose an appropriate one based on desired optimization level
290 level = transpile_config['optimization_level']
291
292 if level == 0:
293 pass_manager = level_0_pass_manager(pass_manager_config)
294 elif level == 1:
295 pass_manager = level_1_pass_manager(pass_manager_config)
296 elif level == 2:
297 pass_manager = level_2_pass_manager(pass_manager_config)
298 elif level == 3:
299 pass_manager = level_3_pass_manager(pass_manager_config)
300 else:
301 raise TranspilerError("optimization_level can range from 0 to 3.")
302
303 if ms_basis_swap is not None:
304 pass_manager.append(MSBasisDecomposer(ms_basis_swap))
305
306 return pass_manager.run(circuit, callback=transpile_config['callback'],
307 output_name=transpile_config['output_name'])
308
309
310 def _parse_transpile_args(circuits, backend,
311 basis_gates, coupling_map, backend_properties,
312 initial_layout, layout_method, routing_method,
313 seed_transpiler, optimization_level,
314 callback, output_name) -> List[Dict]:
315 """Resolve the various types of args allowed to the transpile() function through
316 duck typing, overriding args, etc. Refer to the transpile() docstring for details on
317 what types of inputs are allowed.
318
319 Here the args are resolved by converting them to standard instances, and prioritizing
320 them in case a transpile option is passed through multiple args (explicitly setting an
321 arg has more priority than the arg set by backend).
322
323 Returns:
324 list[dicts]: a list of transpile parameters.
325 """
326 if initial_layout is not None and layout_method is not None:
327 warnings.warn("initial_layout provided; layout_method is ignored.",
328 UserWarning)
329 # Each arg could be single or a list. If list, it must be the same size as
330 # number of circuits. If single, duplicate to create a list of that size.
331 num_circuits = len(circuits)
332
333 basis_gates = _parse_basis_gates(basis_gates, backend, circuits)
334 coupling_map = _parse_coupling_map(coupling_map, backend, num_circuits)
335 backend_properties = _parse_backend_properties(backend_properties, backend, num_circuits)
336 initial_layout = _parse_initial_layout(initial_layout, circuits)
337 layout_method = _parse_layout_method(layout_method, num_circuits)
338 routing_method = _parse_routing_method(routing_method, num_circuits)
339 seed_transpiler = _parse_seed_transpiler(seed_transpiler, num_circuits)
340 optimization_level = _parse_optimization_level(optimization_level, num_circuits)
341 output_name = _parse_output_name(output_name, circuits)
342 callback = _parse_callback(callback, num_circuits)
343
344 list_transpile_args = []
345 for args in zip(basis_gates, coupling_map, backend_properties,
346 initial_layout, layout_method, routing_method,
347 seed_transpiler, optimization_level,
348 output_name, callback):
349 transpile_args = {'pass_manager_config': PassManagerConfig(basis_gates=args[0],
350 coupling_map=args[1],
351 backend_properties=args[2],
352 initial_layout=args[3],
353 layout_method=args[4],
354 routing_method=args[5],
355 seed_transpiler=args[6]),
356 'optimization_level': args[7],
357 'output_name': args[8],
358 'callback': args[9]}
359 list_transpile_args.append(transpile_args)
360
361 return list_transpile_args
362
363
364 def _parse_basis_gates(basis_gates, backend, circuits):
365 # try getting basis_gates from user, else backend
366 if basis_gates is None:
367 if getattr(backend, 'configuration', None):
368 basis_gates = getattr(backend.configuration(), 'basis_gates', None)
369 # basis_gates could be None, or a list of basis, e.g. ['u3', 'cx']
370 if basis_gates is None or (isinstance(basis_gates, list) and
371 all(isinstance(i, str) for i in basis_gates)):
372 basis_gates = [basis_gates] * len(circuits)
373
374 return basis_gates
375
376
377 def _parse_coupling_map(coupling_map, backend, num_circuits):
378 # try getting coupling_map from user, else backend
379 if coupling_map is None:
380 if getattr(backend, 'configuration', None):
381 configuration = backend.configuration()
382 if hasattr(configuration, 'coupling_map') and configuration.coupling_map:
383 coupling_map = CouplingMap(configuration.coupling_map)
384
385 # coupling_map could be None, or a list of lists, e.g. [[0, 1], [2, 1]]
386 if coupling_map is None or isinstance(coupling_map, CouplingMap):
387 coupling_map = [coupling_map] * num_circuits
388 elif isinstance(coupling_map, list) and all(isinstance(i, list) and len(i) == 2
389 for i in coupling_map):
390 coupling_map = [coupling_map] * num_circuits
391
392 coupling_map = [CouplingMap(cm) if isinstance(cm, list) else cm for cm in coupling_map]
393
394 return coupling_map
395
396
397 def _parse_backend_properties(backend_properties, backend, num_circuits):
398 # try getting backend_properties from user, else backend
399 if backend_properties is None:
400 if getattr(backend, 'properties', None):
401 backend_properties = backend.properties()
402 if not isinstance(backend_properties, list):
403 backend_properties = [backend_properties] * num_circuits
404 return backend_properties
405
406
407 def _parse_initial_layout(initial_layout, circuits):
408 # initial_layout could be None, or a list of ints, e.g. [0, 5, 14]
409 # or a list of tuples/None e.g. [qr[0], None, qr[1]] or a dict e.g. {qr[0]: 0}
410 def _layout_from_raw(initial_layout, circuit):
411 if initial_layout is None or isinstance(initial_layout, Layout):
412 return initial_layout
413 elif isinstancelist(initial_layout):
414 if all(isinstanceint(elem) for elem in initial_layout):
415 initial_layout = Layout.from_intlist(initial_layout, *circuit.qregs)
416 elif all(elem is None or isinstance(elem, Qubit) for elem in initial_layout):
417 initial_layout = Layout.from_qubit_list(initial_layout)
418 elif isinstance(initial_layout, dict):
419 initial_layout = Layout(initial_layout)
420 else:
421 raise TranspilerError("The initial_layout parameter could not be parsed")
422 return initial_layout
423
424 # multiple layouts?
425 if isinstance(initial_layout, list) and \
426 any(isinstance(i, (list, dict)) for i in initial_layout):
427 initial_layout = [_layout_from_raw(lo, circ) if isinstance(lo, (list, dict)) else lo
428 for lo, circ in zip(initial_layout, circuits)]
429 else:
430 # even if one layout, but multiple circuits, the layout needs to be adapted for each
431 initial_layout = [_layout_from_raw(initial_layout, circ) for circ in circuits]
432 if not isinstance(initial_layout, list):
433 initial_layout = [initial_layout] * len(circuits)
434 return initial_layout
435
436
437 def _parse_layout_method(layout_method, num_circuits):
438 if not isinstance(layout_method, list):
439 layout_method = [layout_method] * num_circuits
440 return layout_method
441
442
443 def _parse_routing_method(routing_method, num_circuits):
444 if not isinstance(routing_method, list):
445 routing_method = [routing_method] * num_circuits
446 return routing_method
447
448
449 def _parse_seed_transpiler(seed_transpiler, num_circuits):
450 if not isinstance(seed_transpiler, list):
451 seed_transpiler = [seed_transpiler] * num_circuits
452 return seed_transpiler
453
454
455 def _parse_optimization_level(optimization_level, num_circuits):
456 if not isinstance(optimization_level, list):
457 optimization_level = [optimization_level] * num_circuits
458 return optimization_level
459
460
461 def _parse_pass_manager(pass_manager, num_circuits):
462 if not isinstance(pass_manager, list):
463 pass_manager = [pass_manager] * num_circuits
464 return pass_manager
465
466
467 def _parse_callback(callback, num_circuits):
468 if not isinstance(callback, list):
469 callback = [callback] * num_circuits
470 return callback
471
472
473 def _parse_output_name(output_name, circuits):
474 # naming and returning circuits
475 # output_name could be either a string or a list
476 if output_name is not None:
477 if isinstance(output_name, str):
478 # single circuit
479 if len(circuits) == 1:
480 return [output_name]
481 # multiple circuits
482 else:
483 raise TranspilerError("Expected a list object of length equal " +
484 "to that of the number of circuits " +
485 "being transpiled")
486 elif isinstance(output_name, list):
487 if len(circuits) == len(output_name) and \
488 all(isinstance(name, str) for name in output_name):
489 return output_name
490 else:
491 raise TranspilerError("The length of output_name list "
492 "must be equal to the number of "
493 "transpiled circuits and the output_name "
494 "list should be strings.")
495 else:
496 raise TranspilerError("The parameter output_name should be a string or a"
497 "list of strings: %s was used." % type(output_name))
498 else:
499 return [circuit.name for circuit in circuits]
500
[end of qiskit/compiler/transpile.py]
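To make the argument handling above concrete, here is a small usage sketch (an illustration only; it assumes a local qiskit-terra install and uses the public arguments documented in this file):

```python
from qiskit import QuantumCircuit
from qiskit.compiler import transpile
from qiskit.transpiler import CouplingMap

qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 2)                 # not directly allowed on a linear coupling map, so routing kicks in
qc.measure_all()

transpiled = transpile(qc,
                       basis_gates=['u3', 'cx'],
                       coupling_map=CouplingMap.from_line(3),
                       routing_method='stochastic',
                       seed_transpiler=11,
                       optimization_level=1)
print(transpiled.count_ops())
```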
[start of qiskit/transpiler/preset_passmanagers/level1.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Pass manager for optimization level 1, providing light optimization.
16
17 Level 1 pass manager: light optimization by simple adjacent gate collapsing.
18 """
19
20 from qiskit.transpiler.passmanager_config import PassManagerConfig
21 from qiskit.transpiler.passmanager import PassManager
22
23 from qiskit.transpiler.passes import Unroller
24 from qiskit.transpiler.passes import Unroll3qOrMore
25 from qiskit.transpiler.passes import CXCancellation
26 from qiskit.transpiler.passes import CheckMap
27 from qiskit.transpiler.passes import CXDirection
28 from qiskit.transpiler.passes import SetLayout
29 from qiskit.transpiler.passes import TrivialLayout
30 from qiskit.transpiler.passes import DenseLayout
31 from qiskit.transpiler.passes import NoiseAdaptiveLayout
32 from qiskit.transpiler.passes import BarrierBeforeFinalMeasurements
33 from qiskit.transpiler.passes import BasicSwap
34 from qiskit.transpiler.passes import LookaheadSwap
35 from qiskit.transpiler.passes import StochasticSwap
36 from qiskit.transpiler.passes import FullAncillaAllocation
37 from qiskit.transpiler.passes import EnlargeWithAncilla
38 from qiskit.transpiler.passes import FixedPoint
39 from qiskit.transpiler.passes import Depth
40 from qiskit.transpiler.passes import RemoveResetInZeroState
41 from qiskit.transpiler.passes import Optimize1qGates
42 from qiskit.transpiler.passes import ApplyLayout
43 from qiskit.transpiler.passes import CheckCXDirection
44 from qiskit.transpiler.passes import Layout2qDistance
45
46 from qiskit.transpiler import TranspilerError
47
48
49 def level_1_pass_manager(pass_manager_config: PassManagerConfig) -> PassManager:
50 """Level 1 pass manager: light optimization by simple adjacent gate collapsing.
51
52 This pass manager applies the user-given initial layout. If none is given,
53 and a trivial layout (i-th virtual -> i-th physical) makes the circuit fit
54 the coupling map, that is used.
55 Otherwise, the circuit is mapped to the most densely connected coupling subgraph,
56     and swaps are inserted to make it conform to the coupling map. Any unused physical qubit is allocated as ancilla space.
57 The pass manager then unrolls the circuit to the desired basis, and transforms the
58 circuit to match the coupling map. Finally, optimizations in the form of adjacent
59 gate collapse and redundant reset removal are performed.
60
61 Note:
62 In simulators where ``coupling_map=None``, only the unrolling and
63 optimization stages are done.
64
65 Args:
66 pass_manager_config: configuration of the pass manager.
67
68 Returns:
69 a level 1 pass manager.
70
71 Raises:
72 TranspilerError: if the passmanager config is invalid.
73 """
74 basis_gates = pass_manager_config.basis_gates
75 coupling_map = pass_manager_config.coupling_map
76 initial_layout = pass_manager_config.initial_layout
77 layout_method = pass_manager_config.layout_method or 'dense'
78 routing_method = pass_manager_config.routing_method or 'stochastic'
79 seed_transpiler = pass_manager_config.seed_transpiler
80 backend_properties = pass_manager_config.backend_properties
81
82 # 1. Use trivial layout if no layout given
83 _given_layout = SetLayout(initial_layout)
84
85 _choose_layout_and_score = [TrivialLayout(coupling_map),
86 Layout2qDistance(coupling_map,
87 property_name='trivial_layout_score')]
88
89 def _choose_layout_condition(property_set):
90 return not property_set['layout']
91
92 # 2. Use a better layout on densely connected qubits, if circuit needs swaps
93 if layout_method == 'trivial':
94 _improve_layout = TrivialLayout(coupling_map)
95 elif layout_method == 'dense':
96 _improve_layout = DenseLayout(coupling_map, backend_properties)
97 elif layout_method == 'noise_adaptive':
98 _improve_layout = NoiseAdaptiveLayout(backend_properties)
99 else:
100 raise TranspilerError("Invalid layout method %s." % layout_method)
101
102 def _not_perfect_yet(property_set):
103 return property_set['trivial_layout_score'] is not None and \
104 property_set['trivial_layout_score'] != 0
105
106 # 3. Extend dag/layout with ancillas using the full coupling map
107 _embed = [FullAncillaAllocation(coupling_map), EnlargeWithAncilla(), ApplyLayout()]
108
109 # 4. Decompose so only 1-qubit and 2-qubit gates remain
110 _unroll3q = Unroll3qOrMore()
111
112 # 5. Swap to fit the coupling map
113 _swap_check = CheckMap(coupling_map)
114
115 def _swap_condition(property_set):
116 return not property_set['is_swap_mapped']
117
118 _swap = [BarrierBeforeFinalMeasurements()]
119 if routing_method == 'basic':
120 _swap += [BasicSwap(coupling_map)]
121 elif routing_method == 'stochastic':
122 _swap += [StochasticSwap(coupling_map, trials=20, seed=seed_transpiler)]
123 elif routing_method == 'lookahead':
124 _swap += [LookaheadSwap(coupling_map, search_depth=4, search_width=4)]
125 else:
126 raise TranspilerError("Invalid routing method %s." % routing_method)
127
128 # 6. Unroll to the basis
129 _unroll = Unroller(basis_gates)
130
131 # 7. Fix any bad CX directions
132 _direction_check = [CheckCXDirection(coupling_map)]
133
134 def _direction_condition(property_set):
135 return not property_set['is_direction_mapped']
136
137 _direction = [CXDirection(coupling_map)]
138
139 # 8. Remove zero-state reset
140 _reset = RemoveResetInZeroState()
141
142 # 9. Merge 1q rotations and cancel CNOT gates iteratively until no more change in depth
143 _depth_check = [Depth(), FixedPoint('depth')]
144
145 def _opt_control(property_set):
146 return not property_set['depth_fixed_point']
147
148 _opt = [Optimize1qGates(basis_gates), CXCancellation()]
149
150 # Build pass manager
151 pm1 = PassManager()
152 if coupling_map:
153 pm1.append(_given_layout)
154 pm1.append(_choose_layout_and_score, condition=_choose_layout_condition)
155 pm1.append(_improve_layout, condition=_not_perfect_yet)
156 pm1.append(_embed)
157 pm1.append(_unroll3q)
158 pm1.append(_swap_check)
159 pm1.append(_swap, condition=_swap_condition)
160 pm1.append(_unroll)
161 if coupling_map and not coupling_map.is_symmetric:
162 pm1.append(_direction_check)
163 pm1.append(_direction, condition=_direction_condition)
164 pm1.append(_reset)
165 pm1.append(_depth_check + _opt, do_while=_opt_control)
166
167 return pm1
168
[end of qiskit/transpiler/preset_passmanagers/level1.py]
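For reference, a short sketch (illustrative, assuming the imports resolve as in the listing above) of building and running this level 1 pass manager directly:

```python
from qiskit import QuantumCircuit
from qiskit.transpiler import CouplingMap
from qiskit.transpiler.passmanager_config import PassManagerConfig
from qiskit.transpiler.preset_passmanagers import level_1_pass_manager

config = PassManagerConfig(basis_gates=['u3', 'cx'],
                           coupling_map=CouplingMap.from_line(4),
                           routing_method='stochastic',
                           seed_transpiler=7)
pm = level_1_pass_manager(config)

qc = QuantumCircuit(4)
qc.h(0)
qc.cx(0, 3)                 # forces a swap on the linear coupling map
print(pm.run(qc).count_ops())
```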
[start of qiskit/util.py]
1 # -*- coding: utf-8 -*-
2 # This code is part of Qiskit.
3 #
4 # (C) Copyright IBM 2017.
5 #
6 # This code is licensed under the Apache License, Version 2.0. You may
7 # obtain a copy of this license in the LICENSE.txt file in the root directory
8 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
9 #
10 # Any modifications or derivative works of this code must retain this
11 # copyright notice, and modified files need to carry a notice indicating
12 # that they have been altered from the originals.
13
14 """Common utilities for Qiskit."""
15
16 import multiprocessing as mp
17 import platform
18 import re
19 import socket
20 import sys
21 import warnings
22 import functools
23
24 import psutil
25
26
27 def _check_python_version():
28 """Check for Python version 3.5+."""
29 if sys.version_info < (3, 5):
30 raise Exception('Qiskit requires Python version 3.5 or greater.')
31
32
33 def _filter_deprecation_warnings():
34 """Apply filters to deprecation warnings.
35
36 Force the `DeprecationWarning` warnings to be displayed for the qiskit
37 module, overriding the system configuration as they are ignored by default
38 [1] for end-users. Additionally, silence the `ChangedInMarshmallow3Warning`
39 messages.
40
41 TODO: on Python 3.7, this might not be needed due to PEP-0565 [2].
42
43 [1] https://docs.python.org/3/library/warnings.html#default-warning-filters
44 [2] https://www.python.org/dev/peps/pep-0565/
45 """
46 deprecation_filter = ('always', None, DeprecationWarning,
47 re.compile(r'^qiskit\.*', re.UNICODE), 0)
48
49 # Instead of using warnings.simple_filter() directly, the internal
50 # _add_filter() function is used for being able to match against the
51 # module.
52 try:
53 warnings._add_filter(*deprecation_filter, append=False)
54 except AttributeError:
55 # ._add_filter is internal and not available in some Python versions.
56 pass
57
58
59 _check_python_version()
60 _filter_deprecation_warnings()
61
62
63 def local_hardware_info():
64 """Basic hardware information about the local machine.
65
66 Gives actual number of CPU's in the machine, even when hyperthreading is
67 turned on. CPU count defaults to 1 when true count can't be determined.
68
69 Returns:
70 dict: The hardware information.
71 """
72 results = {
73 'os': platform.system(),
74 'memory': psutil.virtual_memory().total / (1024 ** 3),
75 'cpus': psutil.cpu_count(logical=False) or 1
76 }
77 return results
78
79
80 def _has_connection(hostname, port):
81 """Checks if internet connection exists to host via specified port.
82
83 If any exception is raised while trying to open a socket this will return
84 false.
85
86 Args:
87 hostname (str): Hostname to connect to.
88 port (int): Port to connect to
89
90 Returns:
91 bool: Has connection or not
92
93 """
94 try:
95 host = socket.gethostbyname(hostname)
96 socket.create_connection((host, port), 2).close()
97 return True
98 except Exception: # pylint: disable=broad-except
99 return False
100
101
102 def deprecate_arguments(kwarg_map):
103 """Decorator to automatically alias deprecated agrument names and warn upon use."""
104 def decorator(func):
105 @functools.wraps(func)
106 def wrapper(*args, **kwargs):
107 if kwargs:
108 _rename_kwargs(func.__name__, kwargs, kwarg_map)
109 return func(*args, **kwargs)
110 return wrapper
111 return decorator
112
113
114 def _rename_kwargs(func_name, kwargs, kwarg_map):
115 for old_arg, new_arg in kwarg_map.items():
116 if old_arg in kwargs:
117 if new_arg in kwargs:
118 raise TypeError('{} received both {} and {} (deprecated).'.format(
119 func_name, new_arg, old_arg))
120
121 warnings.warn('{} keyword argument {} is deprecated and '
122 'replaced with {}.'.format(
123 func_name, old_arg, new_arg),
124 DeprecationWarning, stacklevel=3)
125
126 kwargs[new_arg] = kwargs.pop(old_arg)
127
128
129 def is_main_process():
130 """Checks whether the current process is the main one"""
131
132 return not (
133 isinstance(mp.current_process(),
134 (mp.context.ForkProcess, mp.context.SpawnProcess))
135
136 # In python 3.5 and 3.6, processes created by "ProcessPoolExecutor" are not
137 # mp.context.ForkProcess or mp.context.SpawnProcess. As a workaround,
138 # "name" of the process is checked instead.
139 or (sys.version_info[0] == 3
140 and (sys.version_info[1] == 5 or sys.version_info[1] == 6)
141 and mp.current_process().name != 'MainProcess')
142 )
143
[end of qiskit/util.py]
[start of tools/report_ci_failure.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14 """Utility module to open an issue on the repository when CIs fail."""
15
16 import os
17 from github import Github
18
19
20 class CIFailureReporter:
21 """Instances of this class can report to GitHub that the CI is failing.
22
23 """
24
25 def __init__(self, repository, token):
26 """
27 Args:
28 repository (str): a string in the form 'owner/repository-name'
29 indicating the GitHub repository to report against.
30 token (str): a GitHub token obtained following:
31 https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
32 """
33 self._repo = repository
34 self._api = Github(token)
35
36 def report(self, branch, commit, infourl=None, job_name=None):
37 """Report on GitHub that the specified branch is failing to build at
38 the specified commit. The method will open an issue indicating that
39 the branch is failing. If there is an issue already open, it will add a
40 comment to avoid reporting the same failure twice.
41
42 Args:
43 branch (str): branch name to report about.
44 commit (str): commit hash at which the build fails.
45 infourl (str): URL with extra info about the failure such as the
46 build logs.
47 job_name (str): name of the failed ci job.
48 """
49 key_label = self._key_label(branch, job_name)
50 issue_number = self._get_report_issue_number(key_label)
51 if issue_number:
52 self._report_as_comment(issue_number, branch, commit, infourl)
53 else:
54 self._report_as_issue(branch, commit, infourl, job_name)
55
56 def _key_label(self, branch_name, job_name):
57 if job_name == 'Randomized tests':
58 return 'randomized test'
59 elif job_name == 'Benchmarks':
60 return 'benchmarks failing'
61 elif branch_name == 'master':
62 return 'master failing'
63 elif branch_name == 'stable':
64 return 'stable failing'
65 else:
66 return ''
67
68 def _get_report_issue_number(self, key_label):
69 query = 'state:open label:"{}" repo:{}'.format(
70 key_label, self._repo)
71 results = self._api.search_issues(query=query)
72 try:
73 return results[0].number
74 except IndexError:
75 return None
76
77 def _report_as_comment(self, issue_number, branch, commit, infourl):
78 stamp = _branch_is_failing_stamp(branch, commit)
79 report_exists = self._check_report_existence(issue_number, stamp)
80 if not report_exists:
81 _, body = _branch_is_failing_template(branch, commit, infourl)
82 message_body = '{}\n{}'.format(stamp, body)
83 self._post_new_comment(issue_number, message_body)
84
85 def _check_report_existence(self, issue_number, target):
86 repo = self._api.get_repo(self._repo)
87 issue = repo.get_issue(issue_number)
88 if target in issue.body:
89 return True
90
91 for comment in issue.get_comments():
92 if target in comment.body:
93 return True
94
95 return False
96
97 def _report_as_issue(self, branch, commit, infourl, key_label):
98 repo = self._api.get_repo(self._repo)
99 stamp = _branch_is_failing_stamp(branch, commit)
100 title, body = _branch_is_failing_template(branch, commit, infourl)
101 message_body = '{}\n{}'.format(stamp, body)
102 repo.create_issue(title=title, body=message_body,
103 labels=[key_label])
104
105 def _post_new_comment(self, issue_number, body):
106 repo = self._api.get_repo(self._repo)
107 issue = repo.get_issue(issue_number)
108 issue.create_comment(body)
109
110
111 def _branch_is_failing_template(branch, commit, infourl):
112 title = 'Branch `{}` is failing'.format(branch)
113 body = 'Trying to build `{}` at commit {} failed.'.format(branch, commit)
114 if infourl:
115 body += '\nMore info at: {}'.format(infourl)
116 return title, body
117
118
119 def _branch_is_failing_stamp(branch, commit):
120 return '<!-- commit {}@{} -->'.format(commit, branch)
121
122
123 _REPOSITORY = 'Qiskit/qiskit-terra'
124 _GH_TOKEN = os.getenv('GH_TOKEN')
125
126
127 def _get_repo_name():
128 return os.getenv('TRAVIS_REPO_SLUG') or os.getenv('APPVEYOR_REPO_NAME')
129
130
131 def _get_branch_name():
132 return os.getenv('TRAVIS_BRANCH') or os.getenv('APPVEYOR_REPO_BRANCH')
133
134
135 def _get_commit_hash():
136 return os.getenv('TRAVIS_COMMIT') or os.getenv('APPVEYOR_REPO_COMMIT')
137
138
139 def _get_job_name():
140 return os.getenv('TRAVIS_JOB_NAME') or os.getenv('APPVEYOR_JOB_NAME')
141
142
143 def _get_info_url():
144 if os.getenv('TRAVIS'):
145 job_id = os.getenv('TRAVIS_JOB_ID')
146 return 'https://travis-ci.com/{}/jobs/{}'.format(_REPOSITORY, job_id)
147
148 if os.getenv('APPVEYOR'):
149 build_id = os.getenv('APPVEYOR_BUILD_ID')
150 return 'https://ci.appveyor.com/project/{}/build/{}'.format(_REPOSITORY, build_id)
151
152 return None
153
154
155 if __name__ == '__main__':
156 _REPORTER = CIFailureReporter(_get_repo_name(), _GH_TOKEN)
157 _REPORTER.report(_get_branch_name(), _get_commit_hash(),
158 _get_info_url(), _get_job_name())
159
[end of tools/report_ci_failure.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
627ecaccc0d5c5db7c8265cdb36bfb66ef82de95
|
Transpile Performance Regression
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: >=0.14.0
- **Python version**: Any
- **Operating system**: Any
### What is the current behavior?
https://github.com/Qiskit/qiskit-terra/commit/529a22b6 made some changes to how we reorder bits in stochastic swap and basic swap. These changes slow down these passes: about [2x slower for basic swap](https://qiskit.github.io/qiskit/#mapping_passes.PassBenchmarks.time_basic_swap?machine=dedicated-benchmarking-softlayer-baremetal&os=Linux%204.15.0-46-generic&ram=16GB&p-n_qubits=5&p-depth=1024&commits=529a22b6) and [10-30% slower for stochastic swap](https://qiskit.github.io/qiskit/#mapping_passes.PassBenchmarks.time_stochastic_swap?machine=qiskit-benchmarking&os=Ubuntu%2018.04&ram=16%20GB&p-n_qubits=5&p-depth=1024&commits=529a22b6).
You can also see the impact of stochastic swap for transpiles with the preset pass managers (because of stochastic swap) here: https://qiskit.github.io/qiskit/#transpiler_levels.TranspilerLevelBenchmarks.time_quantum_volume_transpile_50_x_20?machine=qiskit-benchmarking&os=Ubuntu%2018.04&ram=16%20GB&p-transpiler%20optimization%20level=1&commits=529a22b6
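For reference, a minimal way to see the transpile-level timing locally is sketched below. This is only an illustrative sketch, not the asv benchmark itself; the 5-qubit depth-1024 random circuit, the seed, and the line coupling map are assumptions chosen to mirror the benchmark parameters.

```python
# Illustrative timing sketch only -- the circuit size, seed, and linear
# coupling map are assumptions, not the exact asv benchmark configuration.
from time import time

from qiskit import transpile
from qiskit.circuit.random import random_circuit
from qiskit.transpiler import CouplingMap

circuit = random_circuit(5, 1024, measure=True, seed=42)
coupling_map = CouplingMap.from_line(5)

start = time()
transpile(circuit, coupling_map=coupling_map,
          basis_gates=['u1', 'u2', 'u3', 'cx', 'id'],
          optimization_level=1, seed_transpiler=42)
print("transpile: %.3f s" % (time() - start))
```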
|
Some more data points. I apologize that some of the columns are missing as I have not been keeping full statistics on the variants until recently.
I have been watching transpiler performance for a while, mostly to ensure the QiskitC effort stays on track. Attached is a spreadsheet with data showing the performance regression in the Python code. The first three tables show performance with BasicSwap on large circuits at three time-points. Remember the ripple-adder uses 2N+2 qubits and N+1 clbits, so the actual circuit width is 3N+3 total bits.
The next three tables show the same adder using StochasticSwap.
These statistics are for circuit generation + a single execution of the BasicSwap and StochasticSwap
passes, called directly and bypassing the pass manager. Times are in seconds.
The Terra numbers for both NetworkX (done for reference) and RetworkX clearly show the
performance regression. The QiskitC numbers do not and I assume that is because of how
I handle the gate library. I don't know if what I did is easily back-ported to Python, but
I can provide details if people are interested.
[TranspilerPerformance.xlsx](https://github.com/Qiskit/qiskit-terra/files/4696349/TranspilerPerformance.xlsx)
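For context, here is a stripped-down sketch of the kind of direct-pass timing described above. The CNOT-ladder circuit and the sizes are only assumed stand-ins for the ripple-adder workload in the spreadsheet; the methodology (call the swap pass directly on a DAG, bypassing the pass manager) is the same.

```python
# Times BasicSwap and StochasticSwap called directly on a DAG, bypassing the
# pass manager. The CNOT-ladder circuit is a stand-in for the ripple-adder.
from time import time

from qiskit import QuantumCircuit
from qiskit.converters import circuit_to_dag
from qiskit.transpiler import CouplingMap
from qiskit.transpiler.passes import BasicSwap, StochasticSwap

n = 20
circuit = QuantumCircuit(n)
for _ in range(50):
    for q in range(n - 1):
        circuit.cx(q, q + 1)

coupling_map = CouplingMap.from_line(n)
dag = circuit_to_dag(circuit)

for swap_pass in (BasicSwap(coupling_map), StochasticSwap(coupling_map, seed=0)):
    start = time()
    swap_pass.run(dag)
    print("%s: %.3f s" % (type(swap_pass).__name__, time() - start))
```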
|
2020-06-09T16:11:33Z
|
<patch>
diff --git a/qiskit/dagcircuit/dagcircuit.py b/qiskit/dagcircuit/dagcircuit.py
--- a/qiskit/dagcircuit/dagcircuit.py
+++ b/qiskit/dagcircuit/dagcircuit.py
@@ -91,12 +91,20 @@ def __init__(self):
# Edges carry wire labels (reg,idx) and each operation has
# corresponding in- and out-edges with the same wire labels.
- # Map of qreg name to QuantumRegister object
+ # Map of qreg/creg name to Register object.
self.qregs = OrderedDict()
-
- # Map of creg name to ClassicalRegister object
self.cregs = OrderedDict()
+ # List of Qubit/Clbit wires that the DAG acts on.
+ class DummyCallableList(list):
+ """Dummy class so we can deprecate dag.qubits() and do
+ dag.qubits as property.
+ """
+ def __call__(self):
+ return self
+ self._qubits = DummyCallableList() # TODO: make these a regular empty list [] after the
+ self._clbits = DummyCallableList() # DeprecationWarning period, and remove name underscore.
+
self._id_to_node = {}
self._multi_graph = None
@@ -152,13 +160,17 @@ def to_networkx(self):
return G
+ @property
def qubits(self):
"""Return a list of qubits (as a list of Qubit instances)."""
- return [qubit for qreg in self.qregs.values() for qubit in qreg]
+ # TODO: remove this property after DeprecationWarning period (~9/2020)
+ return self._qubits
+ @property
def clbits(self):
"""Return a list of classical bits (as a list of Clbit instances)."""
- return [clbit for creg in self.cregs.values() for clbit in creg]
+ # TODO: remove this property after DeprecationWarning period (~9/2020)
+ return self._clbits
@property
def wires(self):
@@ -187,6 +199,7 @@ def add_qreg(self, qreg):
raise DAGCircuitError("duplicate register %s" % qreg.name)
self.qregs[qreg.name] = qreg
for j in range(qreg.size):
+ self.qubits.append(qreg[j])
self._add_wire(qreg[j])
def add_creg(self, creg):
@@ -197,6 +210,7 @@ def add_creg(self, creg):
raise DAGCircuitError("duplicate register %s" % creg.name)
self.cregs[creg.name] = creg
for j in range(creg.size):
+ self.clbits.append(creg[j])
self._add_wire(creg[j])
def _add_wire(self, wire):
@@ -521,8 +535,8 @@ def compose(self, other, edge_map=None, qubits=None, clbits=None, front=False, i
if front:
raise DAGCircuitError("Front composition not supported yet.")
- if len(other.qubits()) > len(self.qubits()) or \
- len(other.clbits()) > len(self.clbits()):
+ if len(other.qubits) > len(self.qubits) or \
+ len(other.clbits) > len(self.clbits):
raise DAGCircuitError("Trying to compose with another DAGCircuit "
"which has more 'in' edges.")
@@ -535,15 +549,15 @@ def compose(self, other, edge_map=None, qubits=None, clbits=None, front=False, i
qubits = []
if clbits is None:
clbits = []
- qubit_map = {other.qubits()[i]: (self.qubits()[q] if isinstance(q, int) else q)
+ qubit_map = {other.qubits[i]: (self.qubits[q] if isinstance(q, int) else q)
for i, q in enumerate(qubits)}
- clbit_map = {other.clbits()[i]: (self.clbits()[c] if isinstance(c, int) else c)
+ clbit_map = {other.clbits[i]: (self.clbits[c] if isinstance(c, int) else c)
for i, c in enumerate(clbits)}
edge_map = edge_map or {**qubit_map, **clbit_map} or None
# if no edge_map, try to do a 1-1 mapping in order
if edge_map is None:
- identity_qubit_map = dict(zip(other.qubits(), self.qubits()))
- identity_clbit_map = dict(zip(other.clbits(), self.clbits()))
+ identity_qubit_map = dict(zip(other.qubits, self.qubits))
+ identity_clbit_map = dict(zip(other.clbits, self.clbits))
edge_map = {**identity_qubit_map, **identity_clbit_map}
# Check the edge_map for duplicate values
diff --git a/qiskit/transpiler/passes/layout/csp_layout.py b/qiskit/transpiler/passes/layout/csp_layout.py
--- a/qiskit/transpiler/passes/layout/csp_layout.py
+++ b/qiskit/transpiler/passes/layout/csp_layout.py
@@ -102,7 +102,7 @@ def __init__(self, coupling_map, strict_direction=False, seed=None, call_limit=1
self.seed = seed
def run(self, dag):
- qubits = dag.qubits()
+ qubits = dag.qubits
cxs = set()
for gate in dag.two_qubit_ops():
diff --git a/qiskit/transpiler/passes/layout/noise_adaptive_layout.py b/qiskit/transpiler/passes/layout/noise_adaptive_layout.py
--- a/qiskit/transpiler/passes/layout/noise_adaptive_layout.py
+++ b/qiskit/transpiler/passes/layout/noise_adaptive_layout.py
@@ -140,7 +140,7 @@ def _create_program_graph(self, dag):
number of CNOTs between the pair.
"""
idx = 0
- for q in dag.qubits():
+ for q in dag.qubits:
self.qarg_to_id[q.register.name + str(q.index)] = idx
idx += 1
for gate in dag.two_qubit_ops():
@@ -252,7 +252,7 @@ def run(self, dag):
self.prog2hw[qid] = self.available_hw_qubits[0]
self.available_hw_qubits.remove(self.prog2hw[qid])
layout = Layout()
- for q in dag.qubits():
+ for q in dag.qubits:
pid = self._qarg_to_id(q)
hwid = self.prog2hw[pid]
layout[q] = hwid
diff --git a/qiskit/transpiler/passes/optimization/consolidate_blocks.py b/qiskit/transpiler/passes/optimization/consolidate_blocks.py
--- a/qiskit/transpiler/passes/optimization/consolidate_blocks.py
+++ b/qiskit/transpiler/passes/optimization/consolidate_blocks.py
@@ -61,7 +61,7 @@ def run(self, dag):
new_dag.add_creg(creg)
# compute ordered indices for the global circuit wires
- global_index_map = {wire: idx for idx, wire in enumerate(dag.qubits())}
+ global_index_map = {wire: idx for idx, wire in enumerate(dag.qubits)}
blocks = self.property_set['block_list']
# just to make checking if a node is in any block easier
diff --git a/qiskit/transpiler/passes/optimization/hoare_opt.py b/qiskit/transpiler/passes/optimization/hoare_opt.py
--- a/qiskit/transpiler/passes/optimization/hoare_opt.py
+++ b/qiskit/transpiler/passes/optimization/hoare_opt.py
@@ -69,7 +69,7 @@ def _initialize(self, dag):
Args:
dag (DAGCircuit): input DAG to get qubits from
"""
- for qbt in dag.qubits():
+ for qbt in dag.qubits:
self.gatenum[qbt.index] = 0
self.variables[qbt.index] = []
self.gatecache[qbt.index] = []
@@ -338,6 +338,6 @@ def run(self, dag):
self._initialize(dag)
self._traverse_dag(dag)
if self.size > 1:
- for qbt in dag.qubits():
+ for qbt in dag.qubits:
self._multigate_opt(dag, qbt.index)
return dag
diff --git a/qiskit/transpiler/passes/routing/basic_swap.py b/qiskit/transpiler/passes/routing/basic_swap.py
--- a/qiskit/transpiler/passes/routing/basic_swap.py
+++ b/qiskit/transpiler/passes/routing/basic_swap.py
@@ -60,7 +60,7 @@ def run(self, dag):
if len(dag.qregs) != 1 or dag.qregs.get('q', None) is None:
raise TranspilerError('Basic swap runs on physical circuits only')
- if len(dag.qubits()) > len(self.coupling_map.physical_qubits):
+ if len(dag.qubits) > len(self.coupling_map.physical_qubits):
raise TranspilerError('The layout does not match the amount of qubits in the DAG')
canonical_register = dag.qregs['q']
@@ -92,14 +92,14 @@ def run(self, dag):
cargs=[])
# layer insertion
- order = current_layout.reorder_bits(new_dag.qubits())
+ order = current_layout.reorder_bits(new_dag.qubits)
new_dag.compose(swap_layer, qubits=order)
# update current_layout
for swap in range(len(path) - 2):
current_layout.swap(path[swap], path[swap + 1])
- order = current_layout.reorder_bits(new_dag.qubits())
+ order = current_layout.reorder_bits(new_dag.qubits)
new_dag.compose(subdag, qubits=order)
return new_dag
diff --git a/qiskit/transpiler/passes/routing/layout_transformation.py b/qiskit/transpiler/passes/routing/layout_transformation.py
--- a/qiskit/transpiler/passes/routing/layout_transformation.py
+++ b/qiskit/transpiler/passes/routing/layout_transformation.py
@@ -79,7 +79,7 @@ def run(self, dag):
if len(dag.qregs) != 1 or dag.qregs.get('q', None) is None:
raise TranspilerError('LayoutTransform runs on physical circuits only')
- if len(dag.qubits()) > len(self.coupling_map.physical_qubits):
+ if len(dag.qubits) > len(self.coupling_map.physical_qubits):
raise TranspilerError('The layout does not match the amount of qubits in the DAG')
from_layout = self.from_layout
@@ -102,7 +102,7 @@ def run(self, dag):
perm_circ = self.token_swapper.permutation_circuit(permutation, self.trials)
- edge_map = {vqubit: dag.qubits()[pqubit]
+ edge_map = {vqubit: dag.qubits[pqubit]
for (pqubit, vqubit) in perm_circ.inputmap.items()}
dag.compose(perm_circ.circuit, edge_map=edge_map)
return dag
diff --git a/qiskit/transpiler/passes/routing/lookahead_swap.py b/qiskit/transpiler/passes/routing/lookahead_swap.py
--- a/qiskit/transpiler/passes/routing/lookahead_swap.py
+++ b/qiskit/transpiler/passes/routing/lookahead_swap.py
@@ -93,7 +93,7 @@ def run(self, dag):
if len(dag.qregs) != 1 or dag.qregs.get('q', None) is None:
raise TranspilerError('Lookahead swap runs on physical circuits only')
- if len(dag.qubits()) > len(self.coupling_map.physical_qubits):
+ if len(dag.qubits) > len(self.coupling_map.physical_qubits):
raise TranspilerError('The layout does not match the amount of qubits in the DAG')
canonical_register = dag.qregs['q']
diff --git a/qiskit/transpiler/passes/routing/stochastic_swap.py b/qiskit/transpiler/passes/routing/stochastic_swap.py
--- a/qiskit/transpiler/passes/routing/stochastic_swap.py
+++ b/qiskit/transpiler/passes/routing/stochastic_swap.py
@@ -87,7 +87,7 @@ def run(self, dag):
if len(dag.qregs) != 1 or dag.qregs.get('q', None) is None:
raise TranspilerError('Basic swap runs on physical circuits only')
- if len(dag.qubits()) > len(self.coupling_map.physical_qubits):
+ if len(dag.qubits) > len(self.coupling_map.physical_qubits):
raise TranspilerError('The layout does not match the amount of qubits in the DAG')
canonical_register = dag.qregs['q']
@@ -265,7 +265,7 @@ def _layer_update(self, i, best_layout, best_depth,
for creg in layer_circuit.cregs.values():
dagcircuit_output.add_creg(creg)
- order = layout.reorder_bits(dagcircuit_output.qubits())
+ order = layout.reorder_bits(dagcircuit_output.qubits)
dagcircuit_output.compose(layer_circuit, qubits=order)
return dagcircuit_output
diff --git a/qiskit/transpiler/passes/utils/barrier_before_final_measurements.py b/qiskit/transpiler/passes/utils/barrier_before_final_measurements.py
--- a/qiskit/transpiler/passes/utils/barrier_before_final_measurements.py
+++ b/qiskit/transpiler/passes/utils/barrier_before_final_measurements.py
@@ -59,7 +59,7 @@ def run(self, dag):
# Add a barrier across all qubits so swap mapper doesn't add a swap
# from an unmeasured qubit after a measure.
- final_qubits = dag.qubits()
+ final_qubits = dag.qubits
barrier_layer.apply_operation_back(
Barrier(len(final_qubits)), list(final_qubits), [])
diff --git a/qiskit/visualization/utils.py b/qiskit/visualization/utils.py
--- a/qiskit/visualization/utils.py
+++ b/qiskit/visualization/utils.py
@@ -125,8 +125,8 @@ def _get_layered_instructions(circuit, reverse_bits=False,
dag = circuit_to_dag(circuit)
ops = []
- qregs = dag.qubits()
- cregs = dag.clbits()
+ qregs = dag.qubits
+ cregs = dag.clbits
if justify == 'none':
for node in dag.topological_op_nodes():
@@ -198,7 +198,7 @@ def __init__(self, dag, justification):
"""Create spool"""
super(_LayerSpooler, self).__init__()
self.dag = dag
- self.qregs = dag.qubits()
+ self.qregs = dag.qubits
self.justification = justification
if self.justification == 'left':
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-4596
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot unroll identity matrix of more than 2 qubits when coupling_map is set
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.14.1
- **Python version**: 3.8
- **Operating system**: both Windows and Linux
### What is the current behavior?
The `transpile` function fails to unroll a `UnitaryGate` containing an identity matrix of more than 2 qubits when the `backend` argument is set to be a remote quantum computer or the `coupling_map` argument is set.
### Steps to reproduce the problem
```
>>> import numpy as np
>>> from qiskit import IBMQ, QuantumCircuit, transpile
>>> from qiskit.extensions import UnitaryGate
>>> provider = IBMQ.load_account()
>>> backend = provider.get_backend('ibmq_london') # arbitrary backend with at least 3 qubits
>>> circuit = QuantumCircuit(3)
>>> gate = UnitaryGate(np.eye(2 ** 3))
>>> circuit.append(gate, range(3))
<qiskit.circuit.instructionset.InstructionSet object at 0x7ff8b93a60d0>
>>> transpile(circuit, backend=backend)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/qiskit/compiler/transpile.py", line 210, in transpile circuits = parallel_map(_transpile_circuit, list(zip(circuits, transpile_args)))
File "/usr/local/lib/python3.8/dist-packages/qiskit/tools/parallel.py", line 105, in parallel_map
return [task(values[0], *task_args, **task_kwargs)]
File "/usr/local/lib/python3.8/dist-packages/qiskit/compiler/transpile.py", line 306, in _transpile_circuit
return pass_manager.run(circuit, callback=transpile_config['callback'],
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passmanager.py", line 214, in run
return self._run_single_circuit(circuits, output_name, callback)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passmanager.py", line 277, in _run_single_circuit
result = running_passmanager.run(circuit, output_name=output_name, callback=callback)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/runningpassmanager.py", line 115, in run
dag = self._do_pass(pass_, dag, passset.options)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/runningpassmanager.py", line 145, in _do_pass
dag = self._run_this_pass(pass_, dag)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/runningpassmanager.py", line 157, in _run_this_pass
new_dag = pass_.run(dag)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passes/basis/unroll_3q_or_more.py", line 54, in run
decomposition = self.run(decomposition) # recursively unroll
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passes/basis/unroll_3q_or_more.py", line 54, in run
decomposition = self.run(decomposition) # recursively unroll
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passes/basis/unroll_3q_or_more.py", line 39, in run
raise QiskitError("Cannot unroll all 3q or more gates. "
qiskit.exceptions.QiskitError: 'Cannot unroll all 3q or more gates. No rule to expand instruction circuit9_dg.'
```
Notes:
- This bug only happens when the `backend` argument is set to be a remote quantum computer or the `coupling_map` argument is set to be a coupling map of a remote quantum computer. Calling `transpile(circuit, basis_gates=['u1', 'u2', 'u3', 'cx', 'id'])` works fine.
- This bug only happens when the `UnitaryGate` contains an identity matrix of more than 2 qubits.
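For completeness, a backend-free sketch of the reproduction; it assumes, per the notes above, that passing any `coupling_map` triggers the same failure path as a remote backend (the 3-qubit line map is an illustrative stand-in for a real device's map):

```python
# Assumes a plain coupling_map reproduces the failure the same way a remote
# backend does, as the notes above suggest.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.extensions import UnitaryGate

circuit = QuantumCircuit(3)
circuit.append(UnitaryGate(np.eye(2 ** 3)), range(3))

# Works: only basis gates are given.
transpile(circuit, basis_gates=['u1', 'u2', 'u3', 'cx', 'id'])

# Expected to raise "Cannot unroll all 3q or more gates." per this report.
transpile(circuit, coupling_map=[[0, 1], [1, 2]],
          basis_gates=['u1', 'u2', 'u3', 'cx', 'id'])
```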
### What is the expected behavior?
Successfully transpile the circuit.
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> result = backend_sim.run(assemble(qc)).result()
40 >>> print(result.get_counts(qc))
41 ```
42
43 In this case, the output will be:
44
45 ```python
46 {'00': 513, '11': 511}
47 ```
48
49 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
50 run the same program on a real quantum computer via IBMQ.
51
52 ### Executing your code on a real quantum chip
53
54 You can also use Qiskit to execute your code on a
55 **real quantum chip**.
56 In order to do so, you need to configure Qiskit for using the credentials in
57 your IBM Q account:
58
59 #### Configure your IBMQ credentials
60
61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
62
63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
64
65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
66
67 ```python
68 >>> from qiskit import IBMQ
69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
70 ```
71
72 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
73 Once they are stored, at any point in the future you can load and use them
74 in your program simply via:
75
76 ```python
77 >>> from qiskit import IBMQ
78 >>> IBMQ.load_account()
79 ```
80
81 Those who do not want to save their credentials to disk should use instead:
82
83 ```python
84 >>> from qiskit import IBMQ
85 >>> IBMQ.enable_account('MY_API_TOKEN')
86 ```
87
88 and the token will only be active for the session. For examples using Terra with real
89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
90 the levels.
91
92 ## Contribution Guidelines
93
94 If you'd like to contribute to Qiskit Terra, please take a look at our
95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
96
97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/zt-e4sscbg2-p8NHTezPVkC3r8nV6BIUVw)
99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
101
102 ## Next Steps
103
104 Now you're set up and ready to check out some of the other examples from our
105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
106
107 ## Authors and Citation
108
109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
111
112 ## Changelog and Release Notes
113
114 The changelog for a particular release is dynamically generated and gets
115 written to the release page on Github for each release. For example, you can
116 find the page for the `0.9.0` release here:
117
118 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
119
120 The changelog for the current release can be found in the releases tab:
121 
122 The changelog provides a quick overview of notable changes for a given
123 release.
124
125 Additionally, as part of each release detailed release notes are written to
126 document in detail what has changed as part of a release. This includes any
127 documentation on potential breaking changes on upgrade and new features.
128 For example, you can find the release notes for the `0.9.0` release in the
129 Qiskit documentation here:
130
131 https://qiskit.org/documentation/release_notes.html#terra-0-9
132
133 ## License
134
135 [Apache License 2.0](LICENSE.txt)
136
[end of README.md]
[start of qiskit/compiler/transpile.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Circuit transpile function"""
16 import logging
17 from time import time
18 import warnings
19 from typing import List, Union, Dict, Callable, Any, Optional, Tuple
20 from qiskit.circuit.quantumcircuit import QuantumCircuit
21 from qiskit.providers import BaseBackend
22 from qiskit.providers.models import BackendProperties
23 from qiskit.transpiler import Layout, CouplingMap, PropertySet, PassManager
24 from qiskit.transpiler.basepasses import BasePass
25 from qiskit.dagcircuit import DAGCircuit
26 from qiskit.tools.parallel import parallel_map
27 from qiskit.transpiler.passmanager_config import PassManagerConfig
28 from qiskit.pulse import Schedule
29 from qiskit.circuit.quantumregister import Qubit
30 from qiskit import user_config
31 from qiskit.transpiler.exceptions import TranspilerError
32 from qiskit.converters import isinstanceint, isinstancelist
33 from qiskit.transpiler.passes.basis.ms_basis_decomposer import MSBasisDecomposer
34 from qiskit.transpiler.preset_passmanagers import (level_0_pass_manager,
35 level_1_pass_manager,
36 level_2_pass_manager,
37 level_3_pass_manager)
38
39 LOG = logging.getLogger(__name__)
40
41
42 def transpile(circuits: Union[QuantumCircuit, List[QuantumCircuit]],
43 backend: Optional[BaseBackend] = None,
44 basis_gates: Optional[List[str]] = None,
45 coupling_map: Optional[Union[CouplingMap, List[List[int]]]] = None,
46 backend_properties: Optional[BackendProperties] = None,
47 initial_layout: Optional[Union[Layout, Dict, List]] = None,
48 layout_method: Optional[str] = None,
49 routing_method: Optional[str] = None,
50 seed_transpiler: Optional[int] = None,
51 optimization_level: Optional[int] = None,
52 pass_manager: Optional[PassManager] = None,
53 callback: Optional[Callable[[BasePass, DAGCircuit, float,
54 PropertySet, int], Any]] = None,
55 output_name: Optional[Union[str, List[str]]] = None) -> Union[QuantumCircuit,
56 List[QuantumCircuit]]:
57 """Transpile one or more circuits, according to some desired transpilation targets.
58
59 All arguments may be given as either a singleton or list. In case of a list,
60 the length must be equal to the number of circuits being transpiled.
61
62 Transpilation is done in parallel using multiprocessing.
63
64 Args:
65 circuits: Circuit(s) to transpile
66 backend: If set, transpiler options are automatically grabbed from
67 ``backend.configuration()`` and ``backend.properties()``.
68 If any other option is explicitly set (e.g., ``coupling_map``), it
69 will override the backend's.
70
71 .. note::
72
73 The backend arg is purely for convenience. The resulting
74 circuit may be run on any backend as long as it is compatible.
75 basis_gates: List of basis gate names to unroll to
76 (e.g: ``['u1', 'u2', 'u3', 'cx']``). If ``None``, do not unroll.
77 coupling_map: Coupling map (perhaps custom) to target in mapping.
78 Multiple formats are supported:
79
80 #. ``CouplingMap`` instance
81 #. List, must be given as an adjacency matrix, where each entry
82 specifies all two-qubit interactions supported by backend,
83 e.g: ``[[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]``
84
85 backend_properties: properties returned by a backend, including information on gate
86 errors, readout errors, qubit coherence times, etc. Find a backend
87 that provides this information with: ``backend.properties()``
88 initial_layout: Initial position of virtual qubits on physical qubits.
89 If this layout makes the circuit compatible with the coupling_map
90 constraints, it will be used. The final layout is not guaranteed to be the same,
91 as the transpiler may permute qubits through swaps or other means.
92 Multiple formats are supported:
93
94 #. ``Layout`` instance
95 #. Dict
96 * virtual to physical::
97
98 {qr[0]: 0,
99 qr[1]: 3,
100 qr[2]: 5}
101
102 * physical to virtual::
103
104 {0: qr[0],
105 3: qr[1],
106 5: qr[2]}
107
108 #. List
109
110 * virtual to physical::
111
112 [0, 3, 5] # virtual qubits are ordered (in addition to named)
113
114 * physical to virtual::
115
116 [qr[0], None, None, qr[1], None, qr[2]]
117
118 layout_method: Name of layout selection pass ('trivial', 'dense', 'noise_adaptive')
119 Sometimes a perfect layout can be available in which case the layout_method
120 may not run.
121 routing_method: Name of routing pass ('basic', 'lookahead', 'stochastic')
122 seed_transpiler: Sets random seed for the stochastic parts of the transpiler
123 optimization_level: How much optimization to perform on the circuits.
124 Higher levels generate more optimized circuits,
125 at the expense of longer transpilation time.
126 * 0: no optimization
127 * 1: light optimization
128 * 2: heavy optimization
129 * 3: even heavier optimization
130 If ``None``, level 1 will be chosen as default.
131 pass_manager: The pass manager to use for a custom pipeline of transpiler passes.
132 If this arg is present, all other args will be ignored and the
133 pass manager will be used directly (Qiskit will not attempt to
134 auto-select a pass manager based on transpile options).
135 callback: A callback function that will be called after each
136 pass execution. The function will be called with 5 keyword
137 arguments,
138 | ``pass_``: the pass being run.
139 | ``dag``: the dag output of the pass.
140 | ``time``: the time to execute the pass.
141 | ``property_set``: the property set.
142 | ``count``: the index for the pass execution.
143 The exact arguments passed expose the internals of the pass manager,
144 and are subject to change as the pass manager internals change. If
145 you intend to reuse a callback function over multiple releases, be
146 sure to check that the arguments being passed are the same.
147 To use the callback feature, define a function that will
148 take in kwargs dict and access the variables. For example::
149
150 def callback_func(**kwargs):
151 pass_ = kwargs['pass_']
152 dag = kwargs['dag']
153 time = kwargs['time']
154 property_set = kwargs['property_set']
155 count = kwargs['count']
156 ...
157 transpile(circ, callback=callback_func)
158
159 output_name: A list with strings to identify the output circuits. The length of
160 the list should be exactly the length of the ``circuits`` parameter.
161
162 Returns:
163 The transpiled circuit(s).
164
165 Raises:
166 TranspilerError: in case of bad inputs to transpiler (like conflicting parameters)
167 or errors in passes
168 """
169 circuits = circuits if isinstance(circuits, list) else [circuits]
170
171 # transpiling schedules is not supported yet.
172 start_time = time()
173 if all(isinstance(c, Schedule) for c in circuits):
174 warnings.warn("Transpiling schedules is not supported yet.", UserWarning)
175 if len(circuits) == 1:
176 end_time = time()
177 _log_transpile_time(start_time, end_time)
178 return circuits[0]
179 end_time = time()
180 _log_transpile_time(start_time, end_time)
181 return circuits
182
183 if pass_manager is not None:
184 _check_conflicting_argument(optimization_level=optimization_level, basis_gates=basis_gates,
185 coupling_map=coupling_map, seed_transpiler=seed_transpiler,
186 backend_properties=backend_properties,
187 initial_layout=initial_layout, layout_method=layout_method,
188 routing_method=routing_method, backend=backend)
189
190 warnings.warn("The parameter pass_manager in transpile is being deprecated. "
191 "The preferred way to transpile a circuit using a custom pass manager is"
192 " pass_manager.run(circuit)", DeprecationWarning, stacklevel=2)
193 return pass_manager.run(circuits, output_name=output_name, callback=callback)
194
195 if optimization_level is None:
196 # Take optimization level from the configuration or 1 as default.
197 config = user_config.get_config()
198 optimization_level = config.get('transpile_optimization_level', 1)
199
200 # Get transpile_args to configure the circuit transpilation job(s)
201 transpile_args = _parse_transpile_args(circuits, backend, basis_gates, coupling_map,
202 backend_properties, initial_layout,
203 layout_method, routing_method,
204 seed_transpiler, optimization_level,
205 callback, output_name)
206
207 _check_circuits_coupling_map(circuits, transpile_args, backend)
208
209 # Transpile circuits in parallel
210 circuits = parallel_map(_transpile_circuit, list(zip(circuits, transpile_args)))
211
212 if len(circuits) == 1:
213 end_time = time()
214 _log_transpile_time(start_time, end_time)
215 return circuits[0]
216 end_time = time()
217 _log_transpile_time(start_time, end_time)
218 return circuits
219
220
221 def _check_conflicting_argument(**kargs):
222 conflicting_args = [arg for arg, value in kargs.items() if value]
223 if conflicting_args:
224 raise TranspilerError("The parameters pass_manager conflicts with the following "
225 "parameter(s): {}.".format(', '.join(conflicting_args)))
226
227
228 def _check_circuits_coupling_map(circuits, transpile_args, backend):
229 # Check circuit width against number of qubits in coupling_map(s)
230 coupling_maps_list = list(config['pass_manager_config'].coupling_map for config in
231 transpile_args)
232 for circuit, parsed_coupling_map in zip(circuits, coupling_maps_list):
233 # If coupling_map is not None or num_qubits == 1
234 num_qubits = len(circuit.qubits)
235 max_qubits = None
236 if isinstance(parsed_coupling_map, CouplingMap):
237 max_qubits = parsed_coupling_map.size()
238
239 # If coupling_map is None, the limit might be in the backend (like in 1Q devices)
240 elif backend is not None and not backend.configuration().simulator:
241 max_qubits = backend.configuration().n_qubits
242
243 if max_qubits is not None and (num_qubits > max_qubits):
244 raise TranspilerError('Number of qubits ({}) '.format(num_qubits) +
245 'in {} '.format(circuit.name) +
246 'is greater than maximum ({}) '.format(max_qubits) +
247 'in the coupling_map')
248
249
250 def _log_transpile_time(start_time, end_time):
251 log_msg = "Total Transpile Time - %.5f (ms)" % ((end_time - start_time) * 1000)
252 LOG.info(log_msg)
253
254
255 def _transpile_circuit(circuit_config_tuple: Tuple[QuantumCircuit, Dict]) -> QuantumCircuit:
256 """Select a PassManager and run a single circuit through it.
257 Args:
258 circuit_config_tuple (tuple):
259 circuit (QuantumCircuit): circuit to transpile
260 transpile_config (dict): configuration dictating how to transpile. The
261 dictionary has the following format:
262 {'optimization_level': int,
263 'pass_manager': PassManager,
264 'output_name': string,
265 'callback': callable,
266 'pass_manager_config': PassManagerConfig}
267 Returns:
268 The transpiled circuit
269 Raises:
270 TranspilerError: if transpile_config is not valid or transpilation incurs error
271 """
272 circuit, transpile_config = circuit_config_tuple
273
274 pass_manager_config = transpile_config['pass_manager_config']
275
276 ms_basis_swap = None
277 if pass_manager_config.basis_gates is not None:
278 # Workaround for ion trap support: If basis gates includes
279 # Mølmer-Sørensen (rxx) and the circuit includes gates outside the basis,
280 # first unroll to u3, cx, then run MSBasisDecomposer to target basis.
281 basic_insts = ['measure', 'reset', 'barrier', 'snapshot']
282 device_insts = set(pass_manager_config.basis_gates).union(basic_insts)
283 if 'rxx' in pass_manager_config.basis_gates and \
284 not device_insts >= circuit.count_ops().keys():
285 ms_basis_swap = pass_manager_config.basis_gates
286 pass_manager_config.basis_gates = list(
287 set(['u3', 'cx']).union(pass_manager_config.basis_gates))
288
289 # we choose an appropriate one based on desired optimization level
290 level = transpile_config['optimization_level']
291
292 if level == 0:
293 pass_manager = level_0_pass_manager(pass_manager_config)
294 elif level == 1:
295 pass_manager = level_1_pass_manager(pass_manager_config)
296 elif level == 2:
297 pass_manager = level_2_pass_manager(pass_manager_config)
298 elif level == 3:
299 pass_manager = level_3_pass_manager(pass_manager_config)
300 else:
301 raise TranspilerError("optimization_level can range from 0 to 3.")
302
303 if ms_basis_swap is not None:
304 pass_manager.append(MSBasisDecomposer(ms_basis_swap))
305
306 return pass_manager.run(circuit, callback=transpile_config['callback'],
307 output_name=transpile_config['output_name'])
308
309
310 def _parse_transpile_args(circuits, backend,
311 basis_gates, coupling_map, backend_properties,
312 initial_layout, layout_method, routing_method,
313 seed_transpiler, optimization_level,
314 callback, output_name) -> List[Dict]:
315 """Resolve the various types of args allowed to the transpile() function through
316 duck typing, overriding args, etc. Refer to the transpile() docstring for details on
317 what types of inputs are allowed.
318
319 Here the args are resolved by converting them to standard instances, and prioritizing
320 them in case a transpile option is passed through multiple args (explicitly setting an
321 arg has more priority than the arg set by backend).
322
323 Returns:
324 list[dicts]: a list of transpile parameters.
325 """
326 if initial_layout is not None and layout_method is not None:
327 warnings.warn("initial_layout provided; layout_method is ignored.",
328 UserWarning)
329 # Each arg could be single or a list. If list, it must be the same size as
330 # number of circuits. If single, duplicate to create a list of that size.
331 num_circuits = len(circuits)
332
333 basis_gates = _parse_basis_gates(basis_gates, backend, circuits)
334 coupling_map = _parse_coupling_map(coupling_map, backend, num_circuits)
335 backend_properties = _parse_backend_properties(backend_properties, backend, num_circuits)
336 initial_layout = _parse_initial_layout(initial_layout, circuits)
337 layout_method = _parse_layout_method(layout_method, num_circuits)
338 routing_method = _parse_routing_method(routing_method, num_circuits)
339 seed_transpiler = _parse_seed_transpiler(seed_transpiler, num_circuits)
340 optimization_level = _parse_optimization_level(optimization_level, num_circuits)
341 output_name = _parse_output_name(output_name, circuits)
342 callback = _parse_callback(callback, num_circuits)
343
344 list_transpile_args = []
345 for args in zip(basis_gates, coupling_map, backend_properties,
346 initial_layout, layout_method, routing_method,
347 seed_transpiler, optimization_level,
348 output_name, callback):
349 transpile_args = {'pass_manager_config': PassManagerConfig(basis_gates=args[0],
350 coupling_map=args[1],
351 backend_properties=args[2],
352 initial_layout=args[3],
353 layout_method=args[4],
354 routing_method=args[5],
355 seed_transpiler=args[6]),
356 'optimization_level': args[7],
357 'output_name': args[8],
358 'callback': args[9]}
359 list_transpile_args.append(transpile_args)
360
361 return list_transpile_args
362
363
364 def _parse_basis_gates(basis_gates, backend, circuits):
365 # try getting basis_gates from user, else backend
366 if basis_gates is None:
367 if getattr(backend, 'configuration', None):
368 basis_gates = getattr(backend.configuration(), 'basis_gates', None)
369 # basis_gates could be None, or a list of basis, e.g. ['u3', 'cx']
370 if basis_gates is None or (isinstance(basis_gates, list) and
371 all(isinstance(i, str) for i in basis_gates)):
372 basis_gates = [basis_gates] * len(circuits)
373
374 return basis_gates
375
376
377 def _parse_coupling_map(coupling_map, backend, num_circuits):
378 # try getting coupling_map from user, else backend
379 if coupling_map is None:
380 if getattr(backend, 'configuration', None):
381 configuration = backend.configuration()
382 if hasattr(configuration, 'coupling_map') and configuration.coupling_map:
383 coupling_map = CouplingMap(configuration.coupling_map)
384
385 # coupling_map could be None, or a list of lists, e.g. [[0, 1], [2, 1]]
386 if coupling_map is None or isinstance(coupling_map, CouplingMap):
387 coupling_map = [coupling_map] * num_circuits
388 elif isinstance(coupling_map, list) and all(isinstance(i, list) and len(i) == 2
389 for i in coupling_map):
390 coupling_map = [coupling_map] * num_circuits
391
392 coupling_map = [CouplingMap(cm) if isinstance(cm, list) else cm for cm in coupling_map]
393
394 return coupling_map
395
396
397 def _parse_backend_properties(backend_properties, backend, num_circuits):
398 # try getting backend_properties from user, else backend
399 if backend_properties is None:
400 if getattr(backend, 'properties', None):
401 backend_properties = backend.properties()
402 if not isinstance(backend_properties, list):
403 backend_properties = [backend_properties] * num_circuits
404 return backend_properties
405
406
407 def _parse_initial_layout(initial_layout, circuits):
408 # initial_layout could be None, or a list of ints, e.g. [0, 5, 14]
409 # or a list of tuples/None e.g. [qr[0], None, qr[1]] or a dict e.g. {qr[0]: 0}
410 def _layout_from_raw(initial_layout, circuit):
411 if initial_layout is None or isinstance(initial_layout, Layout):
412 return initial_layout
413 elif isinstancelist(initial_layout):
414 if all(isinstanceint(elem) for elem in initial_layout):
415 initial_layout = Layout.from_intlist(initial_layout, *circuit.qregs)
416 elif all(elem is None or isinstance(elem, Qubit) for elem in initial_layout):
417 initial_layout = Layout.from_qubit_list(initial_layout)
418 elif isinstance(initial_layout, dict):
419 initial_layout = Layout(initial_layout)
420 else:
421 raise TranspilerError("The initial_layout parameter could not be parsed")
422 return initial_layout
423
424 # multiple layouts?
425 if isinstance(initial_layout, list) and \
426 any(isinstance(i, (list, dict)) for i in initial_layout):
427 initial_layout = [_layout_from_raw(lo, circ) if isinstance(lo, (list, dict)) else lo
428 for lo, circ in zip(initial_layout, circuits)]
429 else:
430 # even if one layout, but multiple circuits, the layout needs to be adapted for each
431 initial_layout = [_layout_from_raw(initial_layout, circ) for circ in circuits]
432 if not isinstance(initial_layout, list):
433 initial_layout = [initial_layout] * len(circuits)
434 return initial_layout
435
436
437 def _parse_layout_method(layout_method, num_circuits):
438 if not isinstance(layout_method, list):
439 layout_method = [layout_method] * num_circuits
440 return layout_method
441
442
443 def _parse_routing_method(routing_method, num_circuits):
444 if not isinstance(routing_method, list):
445 routing_method = [routing_method] * num_circuits
446 return routing_method
447
448
449 def _parse_seed_transpiler(seed_transpiler, num_circuits):
450 if not isinstance(seed_transpiler, list):
451 seed_transpiler = [seed_transpiler] * num_circuits
452 return seed_transpiler
453
454
455 def _parse_optimization_level(optimization_level, num_circuits):
456 if not isinstance(optimization_level, list):
457 optimization_level = [optimization_level] * num_circuits
458 return optimization_level
459
460
461 def _parse_pass_manager(pass_manager, num_circuits):
462 if not isinstance(pass_manager, list):
463 pass_manager = [pass_manager] * num_circuits
464 return pass_manager
465
466
467 def _parse_callback(callback, num_circuits):
468 if not isinstance(callback, list):
469 callback = [callback] * num_circuits
470 return callback
471
472
473 def _parse_output_name(output_name, circuits):
474 # naming and returning circuits
475 # output_name could be either a string or a list
476 if output_name is not None:
477 if isinstance(output_name, str):
478 # single circuit
479 if len(circuits) == 1:
480 return [output_name]
481 # multiple circuits
482 else:
483 raise TranspilerError("Expected a list object of length equal " +
484 "to that of the number of circuits " +
485 "being transpiled")
486 elif isinstance(output_name, list):
487 if len(circuits) == len(output_name) and \
488 all(isinstance(name, str) for name in output_name):
489 return output_name
490 else:
491 raise TranspilerError("The length of output_name list "
492 "must be equal to the number of "
493 "transpiled circuits and the output_name "
494 "list should be strings.")
495 else:
496 raise TranspilerError("The parameter output_name should be a string or a "
497 "list of strings: %s was used." % type(output_name))
498 else:
499 return [circuit.name for circuit in circuits]
500
[end of qiskit/compiler/transpile.py]
[start of qiskit/transpiler/passes/basis/unroll_3q_or_more.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Recursively expands 3q+ gates until the circuit only contains 2q or 1q gates."""
16
17 from qiskit.transpiler.basepasses import TransformationPass
18 from qiskit.dagcircuit import DAGCircuit
19 from qiskit.exceptions import QiskitError
20
21
22 class Unroll3qOrMore(TransformationPass):
23 """Recursively expands 3q+ gates until the circuit only contains 2q or 1q gates."""
24
25 def run(self, dag):
26 """Run the Unroll3qOrMore pass on `dag`.
27
28 Args:
29 dag(DAGCircuit): input dag
30 Returns:
31 DAGCircuit: output dag with maximum node degrees of 2
32 Raises:
33 QiskitError: if a 3q+ gate is not decomposable
34 """
35 for node in dag.multi_qubit_ops():
36 # TODO: allow choosing other possible decompositions
37 rule = node.op.definition
38 if not rule:
39 raise QiskitError("Cannot unroll all 3q or more gates. "
40 "No rule to expand instruction %s." %
41 node.op.name)
42
43 # hacky way to build a dag on the same register as the rule is defined
44 # TODO: need anonymous rules to address wires by index
45 decomposition = DAGCircuit()
46 qregs = {qb.register for inst in rule for qb in inst[1]}
47 cregs = {cb.register for inst in rule for cb in inst[2]}
48 for qreg in qregs:
49 decomposition.add_qreg(qreg)
50 for creg in cregs:
51 decomposition.add_creg(creg)
52 for inst in rule:
53 decomposition.apply_operation_back(*inst)
54 decomposition = self.run(decomposition) # recursively unroll
55 dag.substitute_node_with_dag(node, decomposition)
56 return dag
57
[end of qiskit/transpiler/passes/basis/unroll_3q_or_more.py]
[start of qiskit/transpiler/passmanager.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017, 2018.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """Manager for a set of Passes and their scheduling during transpilation."""
16
17 import warnings
18 from typing import Union, List, Callable, Dict, Any
19
20 import dill
21
22 from qiskit.visualization import pass_manager_drawer
23 from qiskit.tools.parallel import parallel_map
24 from qiskit.circuit import QuantumCircuit
25 from .basepasses import BasePass
26 from .exceptions import TranspilerError
27 from .runningpassmanager import RunningPassManager
28
29
30 class PassManager:
31 """Manager for a set of Passes and their scheduling during transpilation."""
32
33 def __init__(
34 self,
35 passes: Union[BasePass, List[BasePass]] = None,
36 max_iteration: int = 1000,
37 callback: Callable = None
38 ):
39 """Initialize an empty `PassManager` object (with no passes scheduled).
40
41 Args:
42 passes: A pass set (as defined in :py:func:`qiskit.transpiler.PassManager.append`)
43 to be added to the pass manager schedule.
44 max_iteration: The maximum number of iterations the schedule will be looped if the
45 condition is not met.
46 callback: DEPRECATED - A callback function that will be called after each pass
47 execution.
48
49 .. deprecated:: 0.13.0
50 The ``callback`` parameter is deprecated in favor of
51 ``PassManager.run(..., callback=callback, ...)``.
52 """
53 self.callback = None
54
55 if callback:
56 warnings.warn("Setting a callback at construction time is being deprecated in favor of "
57 "PassManager.run(..., callback=callback,...)", DeprecationWarning, 2)
58 self.callback = callback
59 # the pass manager's schedule of passes, including any control-flow.
60 # Populated via PassManager.append().
61
62 self._pass_sets = []
63 if passes is not None:
64 self.append(passes)
65 self.max_iteration = max_iteration
66 self.property_set = None
67
68 def append(
69 self,
70 passes: Union[BasePass, List[BasePass]],
71 max_iteration: int = None,
72 **flow_controller_conditions: Any
73 ) -> None:
74 """Append a Pass Set to the schedule of passes.
75
76 Args:
77 passes: A set of passes (a pass set) to be added to schedule. A pass set is a list of
78 passes that are controlled by the same flow controller. If a single pass is
79 provided, the pass set will only have that pass as a single element.
80 max_iteration: max number of iterations of passes.
81 flow_controller_conditions: control flow plugins.
82
83 Raises:
84 TranspilerError: if a pass in passes is not a proper pass.
85
86 See Also:
87 ``RunningPassManager.add_flow_controller()`` for more information about the control
88 flow plugins.
89 """
90 if max_iteration:
91 # TODO remove this argument from append
92 self.max_iteration = max_iteration
93
94 passes = PassManager._normalize_passes(passes)
95 self._pass_sets.append({'passes': passes, 'flow_controllers': flow_controller_conditions})
96
97 def replace(
98 self,
99 index: int,
100 passes: Union[BasePass, List[BasePass]],
101 max_iteration: int = None,
102 **flow_controller_conditions: Any
103 ) -> None:
104 """Replace a particular pass in the scheduler.
105
106 Args:
107 index: Pass index to replace, based on the position in passes().
108 passes: A pass set (as defined in :py:func:`qiskit.transpiler.PassManager.append`)
109 to be added to the pass manager schedule.
110 max_iteration: max number of iterations of passes.
111 flow_controller_conditions: control flow plugins.
112
113 Raises:
114 TranspilerError: if a pass in passes is not a proper pass.
115
116 See Also:
117 ``RunningPassManager.add_flow_controller()`` for more information about the control
118 flow plugins.
119 """
120 if max_iteration:
121 # TODO remove this argument from append
122 self.max_iteration = max_iteration
123
124 passes = PassManager._normalize_passes(passes)
125
126 try:
127 self._pass_sets[index] = {'passes': passes,
128 'flow_controllers': flow_controller_conditions}
129 except IndexError:
130             raise TranspilerError('Index to replace %s does not exist' % index)
131
132 def __setitem__(self, index, item):
133 self.replace(index, item)
134
135 def __len__(self):
136 return len(self._pass_sets)
137
138 def __getitem__(self, index):
139 new_passmanager = PassManager(max_iteration=self.max_iteration, callback=self.callback)
140 _pass_sets = self._pass_sets[index]
141 if isinstance(_pass_sets, dict):
142 _pass_sets = [_pass_sets]
143 new_passmanager._pass_sets = _pass_sets
144 return new_passmanager
145
146 def __add__(self, other):
147 if isinstance(other, PassManager):
148 new_passmanager = PassManager(max_iteration=self.max_iteration, callback=self.callback)
149 new_passmanager._pass_sets = self._pass_sets + other._pass_sets
150 return new_passmanager
151 else:
152 try:
153 new_passmanager = PassManager(max_iteration=self.max_iteration,
154 callback=self.callback)
155 new_passmanager._pass_sets += self._pass_sets
156 new_passmanager.append(other)
157 return new_passmanager
158 except TranspilerError:
159 raise TypeError('unsupported operand type + for %s and %s' % (self.__class__,
160 other.__class__))
161
162 @staticmethod
163 def _normalize_passes(passes: Union[BasePass, List[BasePass]]) -> List[BasePass]:
164 if isinstance(passes, BasePass):
165 passes = [passes]
166
167 for pass_ in passes:
168 if not isinstance(pass_, BasePass):
169 raise TranspilerError('%s is not a pass instance' % pass_.__class__)
170 return passes
171
172 def run(
173 self,
174 circuits: Union[QuantumCircuit, List[QuantumCircuit]],
175 output_name: str = None,
176 callback: Callable = None
177 ) -> Union[QuantumCircuit, List[QuantumCircuit]]:
178 """Run all the passes on the specified ``circuits``.
179
180 Args:
181 circuits: Circuit(s) to transform via all the registered passes.
182 output_name: The output circuit name. If ``None``, it will be set to the same as the
183 input circuit name.
184 callback: A callback function that will be called after each pass execution. The
185 function will be called with 5 keyword arguments::
186
187 pass_ (Pass): the pass being run
188 dag (DAGCircuit): the dag output of the pass
189 time (float): the time to execute the pass
190 property_set (PropertySet): the property set
191 count (int): the index for the pass execution
192
193                 The exact arguments passed expose the internals of the pass
194 manager and are subject to change as the pass manager internals
195 change. If you intend to reuse a callback function over
196 multiple releases be sure to check that the arguments being
197 passed are the same.
198
199 To use the callback feature you define a function that will
200 take in kwargs dict and access the variables. For example::
201
202 def callback_func(**kwargs):
203 pass_ = kwargs['pass_']
204 dag = kwargs['dag']
205 time = kwargs['time']
206 property_set = kwargs['property_set']
207 count = kwargs['count']
208 ...
209
210 Returns:
211 The transformed circuit(s).
212 """
213 if isinstance(circuits, QuantumCircuit):
214 return self._run_single_circuit(circuits, output_name, callback)
215 elif len(circuits) == 1:
216 return self._run_single_circuit(circuits[0], output_name, callback)
217 else:
218 return self._run_several_circuits(circuits, output_name, callback)
219
220 def _create_running_passmanager(self) -> RunningPassManager:
221 running_passmanager = RunningPassManager(self.max_iteration)
222 for pass_set in self._pass_sets:
223 running_passmanager.append(pass_set['passes'], **pass_set['flow_controllers'])
224 return running_passmanager
225
226 @staticmethod
227 def _in_parallel(circuit, pm_dill=None) -> QuantumCircuit:
228 """Task used by the parallel map tools from ``_run_several_circuits``."""
229 running_passmanager = dill.loads(pm_dill)._create_running_passmanager()
230 result = running_passmanager.run(circuit)
231 return result
232
233 def _run_several_circuits(
234 self,
235 circuits: List[QuantumCircuit],
236 output_name: str = None,
237 callback: Callable = None
238 ) -> List[QuantumCircuit]:
239 """Run all the passes on the specified ``circuits``.
240
241 Args:
242 circuits: Circuits to transform via all the registered passes.
243 output_name: The output circuit name. If ``None``, it will be set to the same as the
244 input circuit name.
245 callback: A callback function that will be called after each pass execution.
246
247 Returns:
248 The transformed circuits.
249 """
250 # TODO support for List(output_name) and List(callback)
251 del output_name
252 del callback
253
254 return parallel_map(PassManager._in_parallel, circuits,
255 task_kwargs={'pm_dill': dill.dumps(self)})
256
257 def _run_single_circuit(
258 self,
259 circuit: QuantumCircuit,
260 output_name: str = None,
261 callback: Callable = None
262 ) -> QuantumCircuit:
263 """Run all the passes on a ``circuit``.
264
265 Args:
266 circuit: Circuit to transform via all the registered passes.
267 output_name: The output circuit name. If ``None``, it will be set to the same as the
268 input circuit name.
269 callback: A callback function that will be called after each pass execution.
270
271 Returns:
272 The transformed circuit.
273 """
274 running_passmanager = self._create_running_passmanager()
275 if callback is None and self.callback: # TODO to remove with __init__(callback)
276 callback = self.callback
277 result = running_passmanager.run(circuit, output_name=output_name, callback=callback)
278 self.property_set = running_passmanager.property_set
279 return result
280
281 def draw(self, filename=None, style=None, raw=False):
282 """Draw the pass manager.
283
284 This function needs `pydot <https://github.com/erocarrera/pydot>`__, which in turn needs
285 `Graphviz <https://www.graphviz.org/>`__ to be installed.
286
287 Args:
288 filename (str): file path to save image to.
289 style (dict): keys are the pass classes and the values are the colors to make them. An
290 example can be seen in the DEFAULT_STYLE. An ordered dict can be used to ensure
291                 a priority coloring when a pass falls into multiple categories. Any values not
292 included in the provided dict will be filled in from the default dict.
293 raw (bool): If ``True``, save the raw Dot output instead of the image.
294
295 Returns:
296 Optional[PassManager]: an in-memory representation of the pass manager, or ``None``
297 if no image was generated or `Pillow <https://pypi.org/project/Pillow/>`__
298 is not installed.
299
300 Raises:
301             ImportError: when nxpd or pydot is not installed.
302 """
303 return pass_manager_drawer(self, filename=filename, style=style, raw=raw)
304
305 def passes(self) -> List[Dict[str, BasePass]]:
306 """Return a list structure of the appended passes and its options.
307
308 Returns:
309 A list of pass sets, as defined in ``append()``.
310 """
311 ret = []
312 for pass_set in self._pass_sets:
313 item = {'passes': pass_set['passes']}
314 if pass_set['flow_controllers']:
315 item['flow_controllers'] = set(pass_set['flow_controllers'].keys())
316 else:
317 item['flow_controllers'] = {}
318 ret.append(item)
319 return ret
320
[end of qiskit/transpiler/passmanager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
9a5d8577c10c58e28cd9d139c6a0aa0faf8bd868
|
Cannot unroll identity matrix of more than 2 qubits when coupling_map is set
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.14.1
- **Python version**: 3.8
- **Operating system**: both Windows and Linux
### What is the current behavior?
The `transpile` function fails to unroll an `UnitaryGate` containing an identity matrix of more than 2 qubits when the `backend` argument is set to be a remote quantum computer or the `coupling_map` argument is set.
### Steps to reproduce the problem
```
>>> import numpy as np
>>> from qiskit import IBMQ, QuantumCircuit, transpile
>>> from qiskit.extensions import UnitaryGate
>>> provider = IBMQ.load_account()
>>> backend = provider.get_backend('ibmq_london') # arbitrary backend with at least 3 qubits
>>> circuit = QuantumCircuit(3)
>>> gate = UnitaryGate(np.eye(2 ** 3))
>>> circuit.append(gate, range(3))
<qiskit.circuit.instructionset.InstructionSet object at 0x7ff8b93a60d0>
>>> transpile(circuit, backend=backend)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/qiskit/compiler/transpile.py", line 210, in transpile circuits = parallel_map(_transpile_circuit, list(zip(circuits, transpile_args)))
File "/usr/local/lib/python3.8/dist-packages/qiskit/tools/parallel.py", line 105, in parallel_map
return [task(values[0], *task_args, **task_kwargs)]
File "/usr/local/lib/python3.8/dist-packages/qiskit/compiler/transpile.py", line 306, in _transpile_circuit
return pass_manager.run(circuit, callback=transpile_config['callback'],
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passmanager.py", line 214, in run
return self._run_single_circuit(circuits, output_name, callback)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passmanager.py", line 277, in _run_single_circuit
result = running_passmanager.run(circuit, output_name=output_name, callback=callback)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/runningpassmanager.py", line 115, in run
dag = self._do_pass(pass_, dag, passset.options)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/runningpassmanager.py", line 145, in _do_pass
dag = self._run_this_pass(pass_, dag)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/runningpassmanager.py", line 157, in _run_this_pass
new_dag = pass_.run(dag)
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passes/basis/unroll_3q_or_more.py", line 54, in run
decomposition = self.run(decomposition) # recursively unroll
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passes/basis/unroll_3q_or_more.py", line 54, in run
decomposition = self.run(decomposition) # recursively unroll
File "/usr/local/lib/python3.8/dist-packages/qiskit/transpiler/passes/basis/unroll_3q_or_more.py", line 39, in run
raise QiskitError("Cannot unroll all 3q or more gates. "
qiskit.exceptions.QiskitError: 'Cannot unroll all 3q or more gates. No rule to expand instruction circuit9_dg.'
```
Notes:
- This bug only happens when the `backend` argument is set to be a remote quantum computer or the `coupling_map` argument is set to be a coupling map of a remote quantum computer. Calling `transpile(circuit, basis_gates=['u1', 'u2', 'u3', 'cx', 'id'])` works fine.
- This bug only happens when the `UnitaryGate` contains an identity matrix of more than 2 qubits.
### What is the expected behavior?
Successfully transpile the circuit.
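For reference, a minimal sketch that reproduces the failure locally without an IBMQ account — the explicit `coupling_map` below is an assumption (per the notes above, setting any coupling map exercises the same unrolling path):

```python
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.extensions import UnitaryGate

# identity unitary on 3 qubits, exactly as in the steps above
circuit = QuantumCircuit(3)
circuit.append(UnitaryGate(np.eye(2 ** 3)), range(3))

# supplying a coupling_map (instead of a remote backend) raises
# "Cannot unroll all 3q or more gates." on affected versions
transpile(circuit,
          basis_gates=['u1', 'u2', 'u3', 'cx', 'id'],
          coupling_map=[[0, 1], [1, 2]])
```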
|
2020-06-19T19:50:53Z
|
<patch>
diff --git a/qiskit/transpiler/passes/basis/unroll_3q_or_more.py b/qiskit/transpiler/passes/basis/unroll_3q_or_more.py
--- a/qiskit/transpiler/passes/basis/unroll_3q_or_more.py
+++ b/qiskit/transpiler/passes/basis/unroll_3q_or_more.py
@@ -36,6 +36,9 @@ def run(self, dag):
# TODO: allow choosing other possible decompositions
rule = node.op.definition
if not rule:
+ if rule == []: # empty node
+ dag.remove_op_node(node)
+ continue
raise QiskitError("Cannot unroll all 3q or more gates. "
"No rule to expand instruction %s." %
node.op.name)
</patch>
|
[]
|
[]
| ||||
Lightning-AI__lightning-2789
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`replace_sampler_ddp` doesn't create a shuffled sampler
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
The `DistributedSampler` created using `replace_sampler_ddp` is not shuffled. Check the `kwargs` [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/data_loading.py#L195)
### Expected behavior
If it is a training dataloader, create a shuffled `DistributedSampler`; otherwise create a non-shuffled sampler. Even though the `train` flag is passed to the function [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/data_loading.py#L146), it is ignored.
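A minimal sketch of the expected wiring (the helper name is mine; the kwargs follow `torch.utils.data.DistributedSampler`):

```python
from torch.utils.data import DistributedSampler

def build_sampler(dataset, num_replicas, rank, train):
    # pass shuffle explicitly based on the train flag instead of relying on
    # the default, so eval loaders keep a deterministic order
    return DistributedSampler(dataset, num_replicas=num_replicas, rank=rank, shuffle=train)
```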
### Environment
pytorch-lightning master
</issue>
<code>
[start of README.md]
1 <div align="center">
2
3 
4
5 # PyTorch Lightning
6
7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.**
8
9
10 [](https://badge.fury.io/py/pytorch-lightning)
11 [](https://pepy.tech/project/pytorch-lightning)
12 [](https://codecov.io/gh/PyTorchLightning/pytorch-lightning)
13
14 [](https://pytorch-lightning.readthedocs.io/en/stable/)
15 [](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)
16 [](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
17 [](https://shields.io/)
18
19 <!--
20 [](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
21 -->
22 </div>
23
24 ---
25 ## Trending contributors
26
27 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/0)
28 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/1)
29 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/2)
30 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/3)
31 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/4)
32 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/5)
33 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/6)
34 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/7)
35
36 ---
37
38 ## Continuous Integration
39 <center>
40
41 | System / PyTorch ver. | 1.3 (min. req.) [w/o py3.8] | 1.4 | 1.5 | 1.6 (latest) |
42 | :---: | :---: | :---: | :---: | :---: |
43 | Conda py3.7 [linux] |  |  |  |  |
44 | Linux py3.7 [GPU] | - | - | - | [](http://35.192.60.23/PyTorchLightning/pytorch-lightning) |
45 | Linux py3.7 [TPU] | - | - | - |  |
46 | Linux py3.6 / py3.7 / py3.8 | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) |
47 | OSX py3.6 / py3.7 | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) |
48 | Windows py3.6 / py3.7 / py3.8 | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22)
49
50 </center>
51
52 Simple installation from PyPI
53 ```bash
54 pip install pytorch-lightning
55 ```
56
57 From Conda
58 ```bash
59 conda install pytorch-lightning -c conda-forge
60 ```
61
62 ## Docs
63 - [master](https://pytorch-lightning.readthedocs.io/en/latest)
64 - [stable](https://pytorch-lightning.readthedocs.io/en/stable)
65 - [0.8.5](https://pytorch-lightning.readthedocs.io/en/0.8.5/)
66 - [0.8.4](https://pytorch-lightning.readthedocs.io/en/0.8.4/)
67 - [0.8.3](https://pytorch-lightning.readthedocs.io/en/0.8.3/)
68 - [0.8.1](https://pytorch-lightning.readthedocs.io/en/0.8.1/)
69 - [0.7.6](https://pytorch-lightning.readthedocs.io/en/0.7.6/)
70
71 ## PyTorch Lightning is just organized PyTorch
72 
73
74 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering.
75 It's more of a PyTorch style-guide than a framework.
76
77 In Lightning, you organize your code into 3 distinct categories:
78
79 1. Research code (goes in the LightningModule).
80 2. Engineering code (you delete it, and it is handled by the Trainer).
81 3. Non-essential research code (logging, etc... this goes in Callbacks).
82
83 Once you do this, you can train on multiple-GPUs, TPUs, CPUs and even in 16-bit precision without changing your code!
84
85 Get started with our [QUICK START PAGE](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html)
86
87 ## Refactoring your PyTorch code + benefits + full walk-through
88 [](https://www.youtube.com/watch?v=QHww1JH7IDU)
89
90 ## Demo
91 Here's a minimal example without a validation or test loop.
92
93 ```python
94 # this is just a plain nn.Module with some structure
95
96 class LitClassifier(pl.LightningModule):
97
98 def __init__(self):
99 super().__init__()
100 self.l1 = torch.nn.Linear(28 * 28, 10)
101
102 def forward(self, x):
103 return torch.relu(self.l1(x.view(x.size(0), -1)))
104
105 def training_step(self, batch, batch_nb):
106 x, y = batch
107 loss = F.cross_entropy(self(x), y)
108 tensorboard_logs = {'train_loss': loss}
109 return {'loss': loss, 'log': tensorboard_logs}
110
111 def configure_optimizers(self):
112 return torch.optim.Adam(self.parameters(), lr=0.02)
113
114 # train!
115 train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
116
117 model = LitClassifier()
118 trainer = pl.Trainer(gpus=8, precision=16)
119 trainer.fit(model, train_loader)
120 ```
121
122 Other examples:
123 [MNIST hello world](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=gEulmrbxwaYL)
124 [GAN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=P0bSmCw57aV5)
125 [BERT](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=7uQVI-xv9Ddj)
126 [DQN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=NWvMLBDySQI5)
127 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3)
128
129 ## Testing Rigour
130 All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests).
131
132 For every PR we test all combinations of:
133 - PyTorch 1.3, 1.4, 1.5
134 - Python 3.6, 3.7, 3.8
135 - Linux, OSX, Windows
136 - Multiple GPUs
137
138 **How does performance compare with vanilla PyTorch?**
139 We have tests to ensure we get the EXACT same results in under 600 ms difference per epoch. In reality, lightning adds about a 300 ms overhead per epoch.
140 [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks).
141
142 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts.
143
144 ## How flexible is it?
145 As you see, you're just organizing your PyTorch code - there's no abstraction.
146
147 And for the stuff that the Trainer abstracts out, you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backward pass.
148
149 For example, here you could do your own backward pass without worrying about GPUs, TPUs or 16-bit since we already handle it.
150
151 ```python
152 class LitModel(LightningModule):
153 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx,
154 second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False):
155 optimizer.step()
156
157 def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
158 optimizer.zero_grad()
159 ```
160
161 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer.
162
163 ## Who is Lightning for?
164 - Professional researchers
165 - Ph.D. students
166 - Corporate production teams
167
168 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :)
169
170 ## What does lightning control for me?
171
172 Everything in Blue!
173 This is how lightning separates the science (red) from engineering (blue).
174
175 
176
177 ## How much effort is it to convert?
178 If your code is not a huge mess, you should be able to organize it into a LightningModule in less than 1 hour.
179 If your code IS a mess, then you needed to clean it up anyhow ;)
180
181 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09).
182 [Or watch this video](https://www.youtube.com/watch?v=QHww1JH7IDU).
183
184
185 ## Starting a new project?
186 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed)
187
188 ## Why do I want to use lightning?
189 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you.
190
191 ## Support
192 - [8 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, Ph.D. students from top AI labs.
193 - 100+ community contributors.
194
195 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support.
196
197 ---
198
199 ## README Table of Contents
200 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it)
201 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me)
202 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard)
203 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable)
204 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples)
205 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials)
206 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help)
207 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md)
208 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge)
209 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles)
210 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team)
211 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq)
212
213 ---
214
215 ## Realistic example
216 Here's how you would organize a realistic PyTorch project into Lightning.
217
218 
219
220 The LightningModule defines a *system* such as seq-2-seq, GAN, etc...
221 It can ALSO define a simple classifier.
222
223 In summary, you:
224
225 1. Define a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html)
226 ```python
227 class LitSystem(pl.LightningModule):
228
229 def __init__(self):
230 super().__init__()
231 # not the best model...
232 self.l1 = torch.nn.Linear(28 * 28, 10)
233
234 def forward(self, x):
235 return torch.relu(self.l1(x.view(x.size(0), -1)))
236
237 def training_step(self, batch, batch_idx):
238 ...
239 ```
240
241 2. Fit it with a [Trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html)
242 ```python
243 from pytorch_lightning import Trainer
244
245 model = LitSystem()
246
247 # most basic trainer, uses good defaults
248 trainer = Trainer()
249 trainer.fit(model)
250 ```
251
252 [Check out the COLAB demo here](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg)
253
254 ## What types of research works?
255 Anything! Remember, that this is just organized PyTorch code.
256 The Training step defines the core complexity found in the training loop.
257
258 #### Could be as complex as a seq2seq
259
260 ```python
261 # define what happens for training here
262 def training_step(self, batch, batch_idx):
263 x, y = batch
264
265 # define your own forward and loss calculation
266 hidden_states = self.encoder(x)
267
268 # even as complex as a seq-2-seq + attn model
269 # (this is just a toy, non-working example to illustrate)
270 start_token = '<SOS>'
271 last_hidden = torch.zeros(...)
272 loss = 0
273 for step in range(max_seq_len):
274 attn_context = self.attention_nn(hidden_states, start_token)
275 pred = self.decoder(start_token, attn_context, last_hidden)
276 last_hidden = pred
277 pred = self.predict_nn(pred)
278 loss += self.loss(last_hidden, y[step])
279
280 #toy example as well
281 loss = loss / max_seq_len
282 return {'loss': loss}
283 ```
284
285 #### Or as basic as CNN image classification
286
287 ```python
288 # define what happens for validation here
289 def validation_step(self, batch, batch_idx):
290 x, y = batch
291
292 # or as basic as a CNN classification
293 out = self(x)
294 loss = my_loss(out, y)
295 return {'loss': loss}
296 ```
297
298 And without changing a single line of code, you could run on CPUs
299 ```python
300 trainer = Trainer(max_epochs=1)
301 ```
302
303
304 Or GPUs
305 ```python
306 # 8 GPUs
307 trainer = Trainer(max_epochs=1, gpus=8)
308
309 # 256 GPUs
310 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32)
311 ```
312
313 Or TPUs
314 ```python
315 # Distributes TPU core training
316 trainer = Trainer(tpu_cores=8)
317
318 # Single TPU core training
319 trainer = Trainer(tpu_cores=[1])
320 ```
321
322 When you're done training, run the test accuracy
323 ```python
324 trainer.test()
325 ```
326
327 ## Visualization
328 Lightning has out-of-the-box integration with the popular logging/visualizing frameworks
329
330 - [Tensorboard](https://pytorch.org/docs/stable/tensorboard.html)
331 - [MLFlow](https://mlflow.org/)
332 - [Neptune.ai](https://neptune.ai/)
333 - [Comet.ml](https://www.comet.ml/site/)
334 - [Wandb](https://www.wandb.com/)
335 - ...
336
337 
338
339
340 ## Lightning automates 40+ parts of DL/ML research
341 - GPU training
342 - Distributed GPU (cluster) training
343 - TPU training
344 - EarlyStopping
345 - Logging/Visualizing
346 - Checkpointing
347 - Experiment management
348 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases)
349
350
351 ## Running speed
352 Migrating to lightning does not mean compromising on speed! You can expect an overhead of about 300 ms per epoch compared with pure PyTorch.
353
354
355 ## Examples
356 Check out this awesome list of research papers and implementations done with Lightning.
357
358 - [Contextual Emotion Detection (DoubleDistilBert)](https://github.com/PyTorchLightning/emotion_transformer)
359 - [Generative Adversarial Network](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0)
360 - [Hyperparameter optimization with Optuna](https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py)
361 - [Hyperparameter optimization with Ray Tune](https://docs.ray.io/en/master/tune/tutorials/tune-pytorch-lightning.html)
362 - [Image Inpainting using Partial Convolutions](https://github.com/ryanwongsa/Image-Inpainting)
363 - [MNIST on TPU](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_)
364 - [NER (transformers, TPU, huggingface)](https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D)
365 - [NeuralTexture (CVPR)](https://github.com/PyTorchLightning/neuraltexture)
366 - [Recurrent Attentive Neural Process](https://github.com/PyTorchLightning/attentive-neural-processes)
367 - [Siamese Nets for One-shot Image Recognition](https://github.com/PyTorchLightning/Siamese-Neural-Networks)
368 - [Speech Transformers](https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning)
369 - [Transformers transfer learning (Huggingface)](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf)
370 - [Transformers text classification](https://github.com/ricardorei/lightning-text-classification)
371 - [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE)
372 - [Transformers Question Answering (SQuAD)](https://github.com/tshrjn/Finetune-QA/)
373 - [Pytorch-Lightning + Microsoft NNI with Docker](https://github.com/davinnovation/pytorch-boilerplate)
374
375 ## Tutorials
376 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started.
377 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials).
378
379 ---
380
381 ## Asking for help
382 Welcome to the Lightning community!
383
384 If you have any questions, feel free to:
385 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/).
386 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question).
387 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning.
388 4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A).
389
390 ---
391
392 ## FAQ
393 **How do I use Lightning for rapid research?**
394 [Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html)
395
396 **Why was Lightning created?**
397 Lightning has 3 goals in mind:
398
399 1. Maximal flexibility while abstracting out the common boilerplate across research projects.
400 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format.
401 3. Democratizing PyTorch power-user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning.
402
403 **How does Lightning compare with Ignite and fast.ai?**
404 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a).
405
406 **Is this another library I have to learn?**
407 Nope! We use pure Pytorch everywhere and don't add unnecessary abstractions!
408
409 **Are there plans to support Python 2?**
410 Nope.
411
412 **Are there plans to support virtualenv?**
413 Nope. Please use anaconda or miniconda.
414 ```bash
415 conda activate my_env
416 pip install pytorch-lightning
417 ```
418
419 ## Custom installation
420
421 ### Bleeding edge
422
423 If you can't wait for the next release, install the most up to date code with:
424 * using GIT (locally clone whole repo with full history)
425 ```bash
426 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
427 ```
428 * using instant zip (last state of the repo without git history)
429 ```bash
430 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade
431 ```
432
433 ### Any release installation
434
435 You can also install any past release `0.X.Y` from this repository:
436 ```bash
437 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade
438 ```
439
440 ---
441
442 ## Lightning team
443
444 #### Leads
445 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder)
446 - Jirka Borovec [(Borda)](https://github.com/Borda) (ghost :)
447 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder)
448 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder)
449 - Justus Schock [(justusschock)](https://github.com/justusschock) (Former Core Member PyTorch Ignite)
450
451 #### Core Maintainers
452
453 - Nick Eggert [(neggert)](https://github.com/neggert)
454 - Jeff Ling [(jeffling)](https://github.com/jeffling)
455 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan)
456 - Tullie Murrell [(tullie)](https://github.com/tullie)
457 - Adrian Wälchli [(awaelchli)](https://github.com/awaelchli)
458 - Nicki Skafte [(skaftenicki)](https://github.com/SkafteNicki)
459 - Peter Yu [(yukw777)](https://github.com/yukw777)
460 - Rohit Gupta [(rohitgr7)](https://github.com/rohitgr7)
461
462 ---
463
464 #### Funding
465 Building open-source software with only a few part-time people is hard! We've secured funding to make sure we can
466 hire a full-time staff, attend conferences, and move faster through implementing features you request.
467
468 Our goal is to build an incredible research platform and a big supportive community. Many open-source projects
469 have gone on to fund operations through things like support and special help for big corporations!
470
471 If you are one of these corporations, please feel free to reach out to [email protected]!
472
473 ## BibTeX
474 If you want to cite the framework feel free to use this (but only if you loved it 😊):
475
476 ```bibtex
477 @article{falcon2019pytorch,
478 title={PyTorch Lightning},
479 author={Falcon, WA},
480 journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning Cited by},
481 volume={3},
482 year={2019}
483 }
484 ```
485
[end of README.md]
[start of docs/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/master/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14
15 import os
16 import sys
17 import glob
18 import shutil
19 import inspect
20
21 # import m2r
22 import builtins
23 import pt_lightning_sphinx_theme
24 from sphinx.ext import apidoc
25
26 PATH_HERE = os.path.abspath(os.path.dirname(__file__))
27 PATH_ROOT = os.path.join(PATH_HERE, '..', '..')
28 sys.path.insert(0, os.path.abspath(PATH_ROOT))
29
30 builtins.__LIGHTNING_SETUP__ = True
31
32 SPHINX_MOCK_REQUIREMENTS = int(os.environ.get('SPHINX_MOCK_REQUIREMENTS', True))
33
34 import pytorch_lightning # noqa: E402
35
36 # -- Project documents -------------------------------------------------------
37
38 # # export the documentation
39 # with open('intro.rst', 'w') as fp:
40 # intro = pytorch_lightning.__doc__.replace(os.linesep + ' ', '')
41 # fp.write(m2r.convert(intro))
42 # # fp.write(pytorch_lightning.__doc__)
43
44 # # export the READme
45 # with open(os.path.join(PATH_ROOT, 'README.md'), 'r') as fp:
46 # readme = fp.read()
47 # # replace all paths to relative
48 # for ndir in (os.path.basename(p) for p in glob.glob(os.path.join(PATH_ROOT, '*'))
49 # if os.path.isdir(p)):
50 # readme = readme.replace('](%s/' % ndir, '](%s/%s/' % (PATH_ROOT, ndir))
51 # with open('readme.md', 'w') as fp:
52 # fp.write(readme)
53
54 for md in glob.glob(os.path.join(PATH_ROOT, '.github', '*.md')):
55 shutil.copy(md, os.path.join(PATH_HERE, os.path.basename(md)))
56
57 # -- Project information -----------------------------------------------------
58
59 project = 'PyTorch-Lightning'
60 copyright = pytorch_lightning.__copyright__
61 author = pytorch_lightning.__author__
62
63 # The short X.Y version
64 version = pytorch_lightning.__version__
65 # The full version, including alpha/beta/rc tags
66 release = pytorch_lightning.__version__
67
68 # Options for the linkcode extension
69 # ----------------------------------
70 github_user = 'PyTorchLightning'
71 github_repo = project
72
73 # -- General configuration ---------------------------------------------------
74
75 # If your documentation needs a minimal Sphinx version, state it here.
76
77 needs_sphinx = '2.0'
78
79 # Add any Sphinx extension module names here, as strings. They can be
80 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
81 # ones.
82 extensions = [
83 'sphinx.ext.autodoc',
84 # 'sphinxcontrib.mockautodoc', # raises error: directive 'automodule' is already registered ...
85 # 'sphinxcontrib.fulltoc', # breaks pytorch-theme with unexpected kw argument 'titles_only'
86 'sphinx.ext.doctest',
87 'sphinx.ext.intersphinx',
88 'sphinx.ext.todo',
89 'sphinx.ext.coverage',
90 'sphinx.ext.linkcode',
91 'sphinx.ext.autosummary',
92 'sphinx.ext.napoleon',
93 'sphinx.ext.imgmath',
94 'recommonmark',
95 'sphinx.ext.autosectionlabel',
96 # 'm2r',
97 'nbsphinx',
98 'sphinx_autodoc_typehints',
99 'sphinx_paramlinks',
100 ]
101
102 # Add any paths that contain templates here, relative to this directory.
103 templates_path = ['_templates']
104
105 # https://berkeley-stat159-f17.github.io/stat159-f17/lectures/14-sphinx..html#conf.py-(cont.)
106 # https://stackoverflow.com/questions/38526888/embed-ipython-notebook-in-sphinx-document
107 # I execute the notebooks manually in advance. If notebooks test the code,
108 # they should be run at build time.
109 nbsphinx_execute = 'never'
110 nbsphinx_allow_errors = True
111 nbsphinx_requirejs_path = ''
112
113 # The suffix(es) of source filenames.
114 # You can specify multiple suffix as a list of string:
115 #
116 # source_suffix = ['.rst', '.md']
117 # source_suffix = ['.rst', '.md', '.ipynb']
118 source_suffix = {
119 '.rst': 'restructuredtext',
120 '.txt': 'markdown',
121 '.md': 'markdown',
122 '.ipynb': 'nbsphinx',
123 }
124
125 # The master toctree document.
126 master_doc = 'index'
127
128 # The language for content autogenerated by Sphinx. Refer to documentation
129 # for a list of supported languages.
130 #
131 # This is also used if you do content translation via gettext catalogs.
132 # Usually you set "language" from the command line for these cases.
133 language = None
134
135 # List of patterns, relative to source directory, that match files and
136 # directories to ignore when looking for source files.
137 # This pattern also affects html_static_path and html_extra_path.
138 exclude_patterns = [
139 'api/pytorch_lightning.rst',
140 'api/pl_examples.*',
141 'api/pytorch_lightning.accelerator_backends.*',
142 'api/modules.rst',
143 'PULL_REQUEST_TEMPLATE.md',
144 ]
145
146 # The name of the Pygments (syntax highlighting) style to use.
147 pygments_style = None
148
149 # -- Options for HTML output -------------------------------------------------
150
151 # The theme to use for HTML and HTML Help pages. See the documentation for
152 # a list of builtin themes.
153 # http://www.sphinx-doc.org/en/master/usage/theming.html#builtin-themes
154 # html_theme = 'bizstyle'
155 # https://sphinx-themes.org
156 html_theme = 'pt_lightning_sphinx_theme'
157 html_theme_path = [pt_lightning_sphinx_theme.get_html_theme_path()]
158
159 # Theme options are theme-specific and customize the look and feel of a theme
160 # further. For a list of options available for each theme, see the
161 # documentation.
162
163 html_theme_options = {
164 'pytorch_project': pytorch_lightning.__homepage__,
165 'canonical_url': pytorch_lightning.__homepage__,
166 'collapse_navigation': False,
167 'display_version': True,
168 'logo_only': False,
169 }
170
171 html_logo = '_images/logos/lightning_logo-name.svg'
172
173 # Add any paths that contain custom static files (such as style sheets) here,
174 # relative to this directory. They are copied after the builtin static files,
175 # so a file named "default.css" will overwrite the builtin "default.css".
176 html_static_path = ['_images', '_templates', '_static']
177
178 # Custom sidebar templates, must be a dictionary that maps document names
179 # to template names.
180 #
181 # The default sidebars (for documents that don't match any pattern) are
182 # defined by theme itself. Builtin themes are using these templates by
183 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
184 # 'searchbox.html']``.
185 #
186 # html_sidebars = {}
187
188
189 # -- Options for HTMLHelp output ---------------------------------------------
190
191 # Output file base name for HTML help builder.
192 htmlhelp_basename = project + '-doc'
193
194 # -- Options for LaTeX output ------------------------------------------------
195
196 latex_elements = {
197 # The paper size ('letterpaper' or 'a4paper').
198 # 'papersize': 'letterpaper',
199
200 # The font size ('10pt', '11pt' or '12pt').
201 # 'pointsize': '10pt',
202
203 # Additional stuff for the LaTeX preamble.
204 # 'preamble': '',
205
206 # Latex figure (float) alignment
207 'figure_align': 'htbp',
208 }
209
210 # Grouping the document tree into LaTeX files. List of tuples
211 # (source start file, target name, title,
212 # author, documentclass [howto, manual, or own class]).
213 latex_documents = [
214 (master_doc, project + '.tex', project + ' Documentation', author, 'manual'),
215 ]
216
217 # -- Options for manual page output ------------------------------------------
218
219 # One entry per manual page. List of tuples
220 # (source start file, name, description, authors, manual section).
221 man_pages = [
222 (master_doc, project, project + ' Documentation', [author], 1)
223 ]
224
225 # -- Options for Texinfo output ----------------------------------------------
226
227 # Grouping the document tree into Texinfo files. List of tuples
228 # (source start file, target name, title, author,
229 # dir menu entry, description, category)
230 texinfo_documents = [
231 (master_doc, project, project + ' Documentation', author, project,
232 'One line description of project.', 'Miscellaneous'),
233 ]
234
235 # -- Options for Epub output -------------------------------------------------
236
237 # Bibliographic Dublin Core info.
238 epub_title = project
239
240 # The unique identifier of the text. This can be a ISBN number
241 # or the project homepage.
242 #
243 # epub_identifier = ''
244
245 # A unique identification for the text.
246 #
247 # epub_uid = ''
248
249 # A list of files that should not be packed into the epub file.
250 epub_exclude_files = ['search.html']
251
252 # -- Extension configuration -------------------------------------------------
253
254 # -- Options for intersphinx extension ---------------------------------------
255
256 intersphinx_mapping = {
257 'python': ('https://docs.python.org/3', None),
258 'torch': ('https://pytorch.org/docs/stable/', None),
259 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
260 'PIL': ('https://pillow.readthedocs.io/en/stable/', None),
261 }
262
263 # -- Options for todo extension ----------------------------------------------
264
265 # If true, `todo` and `todoList` produce output, else they produce nothing.
266 todo_include_todos = True
267
268
269 # packages for which sphinx-apidoc should generate the docs (.rst files)
270 PACKAGES = [
271 pytorch_lightning.__name__,
272 'pl_examples',
273 ]
274
275 apidoc_output_folder = os.path.join(PATH_HERE, 'api')
276
277
278 def run_apidoc(_):
279 sys.path.insert(0, apidoc_output_folder)
280
281 # delete api-doc files before generating them
282 if os.path.exists(apidoc_output_folder):
283 shutil.rmtree(apidoc_output_folder)
284
285 for pkg in PACKAGES:
286 argv = ['-e',
287 '-o', apidoc_output_folder,
288 os.path.join(PATH_ROOT, pkg),
289 '**/test_*',
290 '--force',
291 '--private',
292 '--module-first']
293
294 apidoc.main(argv)
295
296
297 def setup(app):
298 # this is for hiding doctest decoration,
299 # see: http://z4r.github.io/python/2011/12/02/hides-the-prompts-and-output/
300 app.add_javascript('copybutton.js')
301 app.connect('builder-inited', run_apidoc)
302
303
304 # copy all notebooks to local folder
305 path_nbs = os.path.join(PATH_HERE, 'notebooks')
306 if not os.path.isdir(path_nbs):
307 os.mkdir(path_nbs)
308 for path_ipynb in glob.glob(os.path.join(PATH_ROOT, 'notebooks', '*.ipynb')):
309 path_ipynb2 = os.path.join(path_nbs, os.path.basename(path_ipynb))
310 shutil.copy(path_ipynb, path_ipynb2)
311
312
313 # Ignoring Third-party packages
314 # https://stackoverflow.com/questions/15889621/sphinx-how-to-exclude-imports-in-automodule
315 def package_list_from_file(file):
316 mocked_packages = []
317 with open(file, 'r') as fp:
318 for ln in fp.readlines():
319 found = [ln.index(ch) for ch in list(',=<>#') if ch in ln]
320 pkg = ln[:min(found)] if found else ln
321 if pkg.rstrip():
322 mocked_packages.append(pkg.rstrip())
323 return mocked_packages
324
325
326 MOCK_PACKAGES = []
327 if SPHINX_MOCK_REQUIREMENTS:
328 # mock also base packages when we are on RTD since we don't install them there
329 MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements/base.txt'))
330 MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements/extra.txt'))
331
332 MOCK_MANUAL_PACKAGES = [
333 'torchvision',
334 'PIL',
335 # packages with different package name compare to import name
336 'yaml',
337 'comet_ml',
338 'neptune',
339 ]
340 autodoc_mock_imports = MOCK_PACKAGES + MOCK_MANUAL_PACKAGES
341
342
343 # Resolve function
344 # This function is used to populate the (source) links in the API
345 def linkcode_resolve(domain, info):
346 def find_source():
347 # try to find the file and line number, based on code from numpy:
348 # https://github.com/numpy/numpy/blob/master/doc/source/conf.py#L286
349 obj = sys.modules[info['module']]
350 for part in info['fullname'].split('.'):
351 obj = getattr(obj, part)
352 fname = inspect.getsourcefile(obj)
353 # https://github.com/rtfd/readthedocs.org/issues/5735
354 if any([s in fname for s in ('readthedocs', 'rtfd', 'checkouts')]):
355 # /home/docs/checkouts/readthedocs.org/user_builds/pytorch_lightning/checkouts/
356 # devel/pytorch_lightning/utilities/cls_experiment.py#L26-L176
357 path_top = os.path.abspath(os.path.join('..', '..', '..'))
358 fname = os.path.relpath(fname, start=path_top)
359 else:
360 # Local build, imitate master
361 fname = 'master/' + os.path.relpath(fname, start=os.path.abspath('..'))
362 source, lineno = inspect.getsourcelines(obj)
363 return fname, lineno, lineno + len(source) - 1
364
365 if domain != 'py' or not info['module']:
366 return None
367 try:
368 filename = '%s#L%d-L%d' % find_source()
369 except Exception:
370 filename = info['module'].replace('.', '/') + '.py'
371 # import subprocess
372 # tag = subprocess.Popen(['git', 'rev-parse', 'HEAD'], stdout=subprocess.PIPE,
373 # universal_newlines=True).communicate()[0][:-1]
374 branch = filename.split('/')[0]
375 # do mapping from latest tags to master
376 branch = {'latest': 'master', 'stable': 'master'}.get(branch, branch)
377 filename = '/'.join([branch] + filename.split('/')[1:])
378 return "https://github.com/%s/%s/blob/%s" \
379 % (github_user, github_repo, filename)
380
381
382 autodoc_member_order = 'groupwise'
383 autoclass_content = 'both'
384 # the options are fixed and will be soon in release,
385 # see https://github.com/sphinx-doc/sphinx/issues/5459
386 autodoc_default_options = {
387 'members': None,
388 'methods': None,
389 # 'attributes': None,
390 'special-members': '__call__',
391 'exclude-members': '_abc_impl',
392 'show-inheritance': True,
393 'private-members': True,
394 'noindex': True,
395 }
396
397 # Sphinx will add “permalinks” for each heading and description environment as paragraph signs that
398 # become visible when the mouse hovers over them.
399 # This value determines the text for the permalink; it defaults to "¶". Set it to None or the empty
400 # string to disable permalinks.
401 # https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-html_add_permalinks
402 html_add_permalinks = "¶"
403
404 # True to prefix each section label with the name of the document it is in, followed by a colon.
405 # For example, index:Introduction for a section called Introduction that appears in document index.rst.
406 # Useful for avoiding ambiguity when the same section heading appears in different documents.
407 # http://www.sphinx-doc.org/en/master/usage/extensions/autosectionlabel.html
408 autosectionlabel_prefix_document = True
409
410 # only run doctests marked with a ".. doctest::" directive
411 doctest_test_doctest_blocks = ''
412 doctest_global_setup = """
413
414 import importlib
415 import os
416 import torch
417
418 from pytorch_lightning.utilities import NATIVE_AMP_AVALAIBLE
419 APEX_AVAILABLE = importlib.util.find_spec("apex") is not None
420 XLA_AVAILABLE = importlib.util.find_spec("torch_xla") is not None
421 TORCHVISION_AVAILABLE = importlib.util.find_spec("torchvision") is not None
422
423
424 """
425 coverage_skip_undoc_in_source = True
426
[end of docs/source/conf.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4 from io import open
5 # Always prefer setuptools over distutils
6 from setuptools import setup, find_packages
7
8 try:
9 import builtins
10 except ImportError:
11 import __builtin__ as builtins
12
13 # https://packaging.python.org/guides/single-sourcing-package-version/
14 # http://blog.ionelmc.ro/2014/05/25/python-packaging/
15
16 PATH_ROOT = os.path.dirname(__file__)
17 builtins.__LIGHTNING_SETUP__ = True
18
19 import pytorch_lightning # noqa: E402
20
21
22 def load_requirements(path_dir=PATH_ROOT, comment_char='#'):
23 with open(os.path.join(path_dir, 'requirements', 'base.txt'), 'r') as file:
24 lines = [ln.strip() for ln in file.readlines()]
25 reqs = []
26 for ln in lines:
27         # filter all comments
28 if comment_char in ln:
29 ln = ln[:ln.index(comment_char)]
30 if ln: # if requirement is not empty
31 reqs.append(ln)
32 return reqs
33
34
35 def load_long_description():
36 # https://github.com/PyTorchLightning/pytorch-lightning/raw/master/docs/source/_images/lightning_module/pt_to_pl.png
37 url = os.path.join(pytorch_lightning.__homepage__, 'raw', pytorch_lightning.__version__, 'docs')
38 text = open('README.md', encoding='utf-8').read()
39 # replace relative repository path to absolute link to the release
40 text = text.replace('](docs', f']({url}')
41 # SVG images are not readable on PyPI, so replace them with PNG
42 text = text.replace('.svg', '.png')
43 return text
44
45
46 # https://packaging.python.org/discussions/install-requires-vs-requirements /
47 # keep the meta-data here for simplicity in reading this file... it's not obvious
48 # what happens and to non-engineers they won't know to look in init ...
49 # the goal of the project is simplicity for researchers, don't want to add too much
50 # engineer specific practices
51 setup(
52 name='pytorch-lightning',
53 version=pytorch_lightning.__version__,
54 description=pytorch_lightning.__docs__,
55 author=pytorch_lightning.__author__,
56 author_email=pytorch_lightning.__author_email__,
57 url=pytorch_lightning.__homepage__,
58 download_url='https://github.com/PyTorchLightning/pytorch-lightning',
59 license=pytorch_lightning.__license__,
60 packages=find_packages(exclude=['tests', 'tests/*', 'benchmarks']),
61
62 long_description=load_long_description(),
63 long_description_content_type='text/markdown',
64 include_package_data=True,
65 zip_safe=False,
66
67 keywords=['deep learning', 'pytorch', 'AI'],
68 python_requires='>=3.6',
69 setup_requires=[],
70 install_requires=load_requirements(PATH_ROOT),
71
72 project_urls={
73 "Bug Tracker": "https://github.com/PyTorchLightning/pytorch-lightning/issues",
74 "Documentation": "https://pytorch-lightning.rtfd.io/en/latest/",
75 "Source Code": "https://github.com/PyTorchLightning/pytorch-lightning",
76 },
77
78 classifiers=[
79 'Environment :: Console',
80 'Natural Language :: English',
81 # How mature is this project? Common values are
82 # 3 - Alpha, 4 - Beta, 5 - Production/Stable
83 'Development Status :: 4 - Beta',
84 # Indicate who your project is intended for
85 'Intended Audience :: Developers',
86 'Topic :: Scientific/Engineering :: Artificial Intelligence',
87 'Topic :: Scientific/Engineering :: Image Recognition',
88 'Topic :: Scientific/Engineering :: Information Analysis',
89 # Pick your license as you wish
90 'License :: OSI Approved :: BSD License',
91 'Operating System :: OS Independent',
92 # Specify the Python versions you support here. In particular, ensure
93 # that you indicate whether you support Python 2, Python 3 or both.
94 'Programming Language :: Python :: 3',
95 'Programming Language :: Python :: 3.6',
96 'Programming Language :: Python :: 3.7',
97 'Programming Language :: Python :: 3.8',
98 ],
99 )
100
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Lightning-AI/lightning
|
f9ccb0fd9b53973a87b18cf9126ad9454b328575
|
`replace_sampler_ddp` doesn't create a shuffled sampler
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
The `DistributedSampler` created using `replace_sampler_ddp` is not shuffled. Check the `kwargs` [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/data_loading.py#L195)
### Expected behavior
If training dataloader, create a shuffled `DistributedSampler`, else create a non-shuffled sampler. Even though the `train` flag is passed to the function [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/data_loading.py#L146), it is ignored.
### Environment
pytorch-lightning master
|
2020-08-01T13:26:27Z
|
<patch>
diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -163,7 +163,7 @@ def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:
' `replace_sampler_ddp`=False if you want to use your custom sampler.')
# replace with distributed sampler
- sampler = self._get_distributed_sampler(dataloader)
+ sampler = self._get_distributed_sampler(dataloader, train)
dataloader = self.replace_sampler(dataloader, sampler)
return dataloader
@@ -179,7 +179,7 @@ def replace_sampler(self, dataloader, sampler):
dataloader = type(dataloader)(**dl_args)
return dataloader
- def _get_distributed_sampler(self, dataloader):
+ def _get_distributed_sampler(self, dataloader, train):
if self.use_tpu:
kwargs = dict(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
elif self.use_horovod:
@@ -193,6 +193,8 @@ def _get_distributed_sampler(self, dataloader):
}
assert self.distributed_backend is not None
kwargs = dict(num_replicas=world_size[self.distributed_backend], rank=self.global_rank)
+
+ kwargs['shuffle'] = train
sampler = DistributedSampler(dataloader.dataset, **kwargs)
return sampler
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-37270
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: PeriodIndex.dtype comparison with string exhibits contradiction
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
index = pd.period_range(start='01Jan2019', end='01Dec2019', freq='M')
string = "period[M]"
index.dtype == string # True
index.dtype != string # also True
```
#### Problem description
`PeriodIndex.dtype` compares as both equal (`__eq__`) and not equal (`__ne__`) to the string representation of its frequency. I suspect that comparing the dtype to a string is not recommended, but the contradiction seemed worth reporting. I don't know when this was introduced, but it did not occur in 0.25.3.
#### Expected Output
One of the two comparisons to evaluate to `False`.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : a34a408e854b6300dbbdd0b29026d19bc5477d73
python : 3.8.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.14.198-152.320.amzn2.x86_64
Version : #1 SMP Wed Sep 23 23:57:28 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.0.dev0+830.ga34a408e8
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.4
setuptools : 49.6.0.post20201009
Cython : 0.29.21
pytest : 6.1.1
hypothesis : 5.37.3
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : 1.3.7
lxml.etree : 4.6.1
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.3
bottleneck : 1.3.2
fsspec : 0.8.4
fastparquet : 0.4.1
gcsfs : 0.7.1
matplotlib : 3.3.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : 1.0.1
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.2
sqlalchemy : 1.3.20
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.16.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.51.2
</details>
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/compat/_optional.py]
1 import distutils.version
2 import importlib
3 import types
4 import warnings
5
6 # Update install.rst when updating versions!
7
8 VERSIONS = {
9 "bs4": "4.6.0",
10 "bottleneck": "1.2.1",
11 "fsspec": "0.7.4",
12 "fastparquet": "0.3.2",
13 "gcsfs": "0.6.0",
14 "lxml.etree": "4.3.0",
15 "matplotlib": "2.2.3",
16 "numexpr": "2.6.8",
17 "odfpy": "1.3.0",
18 "openpyxl": "2.5.7",
19 "pandas_gbq": "0.12.0",
20 "pyarrow": "0.15.0",
21 "pytest": "5.0.1",
22 "pyxlsb": "1.0.6",
23 "s3fs": "0.4.0",
24 "scipy": "1.2.0",
25 "sqlalchemy": "1.2.8",
26 "tables": "3.5.1",
27 "tabulate": "0.8.3",
28 "xarray": "0.12.0",
29 "xlrd": "1.2.0",
30 "xlwt": "1.3.0",
31 "xlsxwriter": "1.0.2",
32 "numba": "0.46.0",
33 }
34
35 # A mapping from import name to package name (on PyPI) for packages where
36 # these two names are different.
37
38 INSTALL_MAPPING = {
39 "bs4": "beautifulsoup4",
40 "bottleneck": "Bottleneck",
41 "lxml.etree": "lxml",
42 "odf": "odfpy",
43 "pandas_gbq": "pandas-gbq",
44 "sqlalchemy": "SQLAlchemy",
45 "jinja2": "Jinja2",
46 }
47
48
49 def _get_version(module: types.ModuleType) -> str:
50 version = getattr(module, "__version__", None)
51 if version is None:
52 # xlrd uses a capitalized attribute name
53 version = getattr(module, "__VERSION__", None)
54
55 if version is None:
56 raise ImportError(f"Can't determine version for {module.__name__}")
57 return version
58
59
60 def import_optional_dependency(
61 name: str, extra: str = "", raise_on_missing: bool = True, on_version: str = "raise"
62 ):
63 """
64 Import an optional dependency.
65
66 By default, if a dependency is missing an ImportError with a nice
67 message will be raised. If a dependency is present, but too old,
68 we raise.
69
70 Parameters
71 ----------
72 name : str
73 The module name. This should be top-level only, so that the
74 version may be checked.
75 extra : str
76 Additional text to include in the ImportError message.
77 raise_on_missing : bool, default True
78 Whether to raise if the optional dependency is not found.
79 When False and the module is not present, None is returned.
80 on_version : str {'raise', 'warn'}
81 What to do when a dependency's version is too old.
82
83 * raise : Raise an ImportError
84 * warn : Warn that the version is too old. Returns None
85 * ignore: Return the module, even if the version is too old.
86 It's expected that users validate the version locally when
87 using ``on_version="ignore"`` (see. ``io/html.py``)
88
89 Returns
90 -------
91 maybe_module : Optional[ModuleType]
92 The imported module, when found and the version is correct.
93 None is returned when the package is not found and `raise_on_missing`
94 is False, or when the package's version is too old and `on_version`
95 is ``'warn'``.
96 """
97
98 package_name = INSTALL_MAPPING.get(name)
99 install_name = package_name if package_name is not None else name
100
101 msg = (
102 f"Missing optional dependency '{install_name}'. {extra} "
103 f"Use pip or conda to install {install_name}."
104 )
105 try:
106 module = importlib.import_module(name)
107 except ImportError:
108 if raise_on_missing:
109 raise ImportError(msg) from None
110 else:
111 return None
112
113 minimum_version = VERSIONS.get(name)
114 if minimum_version:
115 version = _get_version(module)
116 if distutils.version.LooseVersion(version) < minimum_version:
117 assert on_version in {"warn", "raise", "ignore"}
118 msg = (
119 f"Pandas requires version '{minimum_version}' or newer of '{name}' "
120 f"(version '{version}' currently installed)."
121 )
122 if on_version == "warn":
123 warnings.warn(msg, UserWarning)
124 return None
125 elif on_version == "raise":
126 raise ImportError(msg)
127
128 return module
129
[end of pandas/compat/_optional.py]
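An illustrative usage sketch for `import_optional_dependency`, based only on the signature and docstring shown above; it is not part of the pandas source tree, and the module names are just examples.
```python
from pandas.compat._optional import import_optional_dependency

# Present and new enough -> the module object; missing -> ImportError whose
# message includes the install name resolved via INSTALL_MAPPING.
xlsxwriter = import_optional_dependency("xlsxwriter", extra="Needed for Excel output.")

# raise_on_missing=False degrades to returning None instead of raising.
tabulate = import_optional_dependency("tabulate", raise_on_missing=False)
if tabulate is None:
    print("tabulate not installed; using a plain-text fallback")
```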
[start of pandas/core/config_init.py]
1 """
2 This module is imported from the pandas package __init__.py file
3 in order to ensure that the core.config options registered here will
4 be available as soon as the user loads the package. if register_option
5 is invoked inside specific modules, they will not be registered until that
6 module is imported, which may or may not be a problem.
7
8 If you need to make sure options are available even before a certain
9 module is imported, register them here rather than in the module.
10
11 """
12 import warnings
13
14 import pandas._config.config as cf
15 from pandas._config.config import (
16 is_bool,
17 is_callable,
18 is_instance_factory,
19 is_int,
20 is_nonnegative_int,
21 is_one_of_factory,
22 is_text,
23 )
24
25 # compute
26
27 use_bottleneck_doc = """
28 : bool
29 Use the bottleneck library to accelerate if it is installed,
30 the default is True
31 Valid values: False,True
32 """
33
34
35 def use_bottleneck_cb(key):
36 from pandas.core import nanops
37
38 nanops.set_use_bottleneck(cf.get_option(key))
39
40
41 use_numexpr_doc = """
42 : bool
43 Use the numexpr library to accelerate computation if it is installed,
44 the default is True
45 Valid values: False,True
46 """
47
48
49 def use_numexpr_cb(key):
50 from pandas.core.computation import expressions
51
52 expressions.set_use_numexpr(cf.get_option(key))
53
54
55 use_numba_doc = """
56 : bool
57 Use the numba engine option for select operations if it is installed,
58 the default is False
59 Valid values: False,True
60 """
61
62
63 def use_numba_cb(key):
64 from pandas.core.util import numba_
65
66 numba_.set_use_numba(cf.get_option(key))
67
68
69 with cf.config_prefix("compute"):
70 cf.register_option(
71 "use_bottleneck",
72 True,
73 use_bottleneck_doc,
74 validator=is_bool,
75 cb=use_bottleneck_cb,
76 )
77 cf.register_option(
78 "use_numexpr", True, use_numexpr_doc, validator=is_bool, cb=use_numexpr_cb
79 )
80 cf.register_option(
81 "use_numba", False, use_numba_doc, validator=is_bool, cb=use_numba_cb
82 )
83 #
84 # options from the "display" namespace
85
86 pc_precision_doc = """
87 : int
88 Floating point output precision in terms of number of places after the
89 decimal, for regular formatting as well as scientific notation. Similar
90 to ``precision`` in :meth:`numpy.set_printoptions`.
91 """
92
93 pc_colspace_doc = """
94 : int
95 Default space for DataFrame columns.
96 """
97
98 pc_max_rows_doc = """
99 : int
100 If max_rows is exceeded, switch to truncate view. Depending on
101 `large_repr`, objects are either centrally truncated or printed as
102 a summary view. 'None' value means unlimited.
103
104 In case python/IPython is running in a terminal and `large_repr`
105 equals 'truncate' this can be set to 0 and pandas will auto-detect
106 the height of the terminal and print a truncated object which fits
107 the screen height. The IPython notebook, IPython qtconsole, or
108 IDLE do not run in a terminal and hence it is not possible to do
109 correct auto-detection.
110 """
111
112 pc_min_rows_doc = """
113 : int
114 The numbers of rows to show in a truncated view (when `max_rows` is
115 exceeded). Ignored when `max_rows` is set to None or 0. When set to
116 None, follows the value of `max_rows`.
117 """
118
119 pc_max_cols_doc = """
120 : int
121 If max_cols is exceeded, switch to truncate view. Depending on
122 `large_repr`, objects are either centrally truncated or printed as
123 a summary view. 'None' value means unlimited.
124
125 In case python/IPython is running in a terminal and `large_repr`
126 equals 'truncate' this can be set to 0 and pandas will auto-detect
127 the width of the terminal and print a truncated object which fits
128 the screen width. The IPython notebook, IPython qtconsole, or IDLE
129 do not run in a terminal and hence it is not possible to do
130 correct auto-detection.
131 """
132
133 pc_max_categories_doc = """
134 : int
135 This sets the maximum number of categories pandas should output when
136 printing out a `Categorical` or a Series of dtype "category".
137 """
138
139 pc_max_info_cols_doc = """
140 : int
141 max_info_columns is used in DataFrame.info method to decide if
142 per column information will be printed.
143 """
144
145 pc_nb_repr_h_doc = """
146 : boolean
147 When True, IPython notebook will use html representation for
148 pandas objects (if it is available).
149 """
150
151 pc_pprint_nest_depth = """
152 : int
153 Controls the number of nested levels to process when pretty-printing
154 """
155
156 pc_multi_sparse_doc = """
157 : boolean
158 "sparsify" MultiIndex display (don't display repeated
159 elements in outer levels within groups)
160 """
161
162 float_format_doc = """
163 : callable
164 The callable should accept a floating point number and return
165 a string with the desired format of the number. This is used
166 in some places like SeriesFormatter.
167 See formats.format.EngFormatter for an example.
168 """
169
170 max_colwidth_doc = """
171 : int or None
172 The maximum width in characters of a column in the repr of
173 a pandas data structure. When the column overflows, a "..."
174 placeholder is embedded in the output. A 'None' value means unlimited.
175 """
176
177 colheader_justify_doc = """
178 : 'left'/'right'
179 Controls the justification of column headers. used by DataFrameFormatter.
180 """
181
182 pc_expand_repr_doc = """
183 : boolean
184 Whether to print out the full DataFrame repr for wide DataFrames across
185 multiple lines, `max_columns` is still respected, but the output will
186 wrap-around across multiple "pages" if its width exceeds `display.width`.
187 """
188
189 pc_show_dimensions_doc = """
190 : boolean or 'truncate'
191 Whether to print out dimensions at the end of DataFrame repr.
192 If 'truncate' is specified, only print out the dimensions if the
193 frame is truncated (e.g. not display all rows and/or columns)
194 """
195
196 pc_east_asian_width_doc = """
197 : boolean
198 Whether to use the Unicode East Asian Width to calculate the display text
199 width.
200 Enabling this may affect performance (default: False)
201 """
202
203 pc_ambiguous_as_wide_doc = """
204 : boolean
205 Whether to handle Unicode characters belonging to Ambiguous as Wide (width=2)
206 (default: False)
207 """
208
209 pc_latex_repr_doc = """
210 : boolean
211 Whether to produce a latex DataFrame representation for jupyter
212 environments that support it.
213 (default: False)
214 """
215
216 pc_table_schema_doc = """
217 : boolean
218 Whether to publish a Table Schema representation for frontends
219 that support it.
220 (default: False)
221 """
222
223 pc_html_border_doc = """
224 : int
225 A ``border=value`` attribute is inserted in the ``<table>`` tag
226 for the DataFrame HTML repr.
227 """
228
229 pc_html_use_mathjax_doc = """\
230 : boolean
231 When True, Jupyter notebook will process table contents using MathJax,
232 rendering mathematical expressions enclosed by the dollar symbol.
233 (default: True)
234 """
235
236 pc_width_doc = """
237 : int
238 Width of the display in characters. In case python/IPython is running in
239 a terminal this can be set to None and pandas will correctly auto-detect
240 the width.
241 Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
242 terminal and hence it is not possible to correctly detect the width.
243 """
244
245 pc_chop_threshold_doc = """
246 : float or None
247 if set to a float value, all float values smaller than the given threshold
248 will be displayed as exactly 0 by repr and friends.
249 """
250
251 pc_max_seq_items = """
252 : int or None
253 When pretty-printing a long sequence, no more than `max_seq_items`
254 will be printed. If items are omitted, they will be denoted by the
255 addition of "..." to the resulting string.
256
257 If set to None, the number of items to be printed is unlimited.
258 """
259
260 pc_max_info_rows_doc = """
261 : int or None
262 df.info() will usually show null-counts for each column.
263 For large frames this can be quite slow. max_info_rows and max_info_cols
264 limit this null check only to frames with smaller dimensions than
265 specified.
266 """
267
268 pc_large_repr_doc = """
269 : 'truncate'/'info'
270 For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
271 show a truncated table (the default from 0.13), or switch to the view from
272 df.info() (the behaviour in earlier versions of pandas).
273 """
274
275 pc_memory_usage_doc = """
276 : bool, string or None
277 This specifies if the memory usage of a DataFrame should be displayed when
278 df.info() is called. Valid values True,False,'deep'
279 """
280
281 pc_latex_escape = """
282 : bool
283 This specifies if the to_latex method of a Dataframe escapes special
284 characters.
285 Valid values: False,True
286 """
287
288 pc_latex_longtable = """
289 :bool
290 This specifies if the to_latex method of a Dataframe uses the longtable
291 format.
292 Valid values: False,True
293 """
294
295 pc_latex_multicolumn = """
296 : bool
297 This specifies if the to_latex method of a Dataframe uses multicolumns
298 to pretty-print MultiIndex columns.
299 Valid values: False,True
300 """
301
302 pc_latex_multicolumn_format = """
303 : string
304 This specifies the format for multicolumn headers.
305 Can be surrounded with '|'.
306 Valid values: 'l', 'c', 'r', 'p{<width>}'
307 """
308
309 pc_latex_multirow = """
310 : bool
311 This specifies if the to_latex method of a Dataframe uses multirows
312 to pretty-print MultiIndex rows.
313 Valid values: False,True
314 """
315
316
317 def table_schema_cb(key):
318 from pandas.io.formats.printing import enable_data_resource_formatter
319
320 enable_data_resource_formatter(cf.get_option(key))
321
322
323 def is_terminal() -> bool:
324 """
325 Detect if Python is running in a terminal.
326
327 Returns True if Python is running in a terminal or False if not.
328 """
329 try:
330 # error: Name 'get_ipython' is not defined
331 ip = get_ipython() # type: ignore[name-defined]
332 except NameError: # assume standard Python interpreter in a terminal
333 return True
334 else:
335 if hasattr(ip, "kernel"): # IPython as a Jupyter kernel
336 return False
337 else: # IPython in a terminal
338 return True
339
340
341 with cf.config_prefix("display"):
342 cf.register_option("precision", 6, pc_precision_doc, validator=is_nonnegative_int)
343 cf.register_option(
344 "float_format",
345 None,
346 float_format_doc,
347 validator=is_one_of_factory([None, is_callable]),
348 )
349 cf.register_option("column_space", 12, validator=is_int)
350 cf.register_option(
351 "max_info_rows",
352 1690785,
353 pc_max_info_rows_doc,
354 validator=is_instance_factory((int, type(None))),
355 )
356 cf.register_option("max_rows", 60, pc_max_rows_doc, validator=is_nonnegative_int)
357 cf.register_option(
358 "min_rows",
359 10,
360 pc_min_rows_doc,
361 validator=is_instance_factory([type(None), int]),
362 )
363 cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int)
364
365 def _deprecate_negative_int_max_colwidth(key):
366 value = cf.get_option(key)
367 if value is not None and value < 0:
368 warnings.warn(
369 "Passing a negative integer is deprecated in version 1.0 and "
370 "will not be supported in future version. Instead, use None "
371 "to not limit the column width.",
372 FutureWarning,
373 stacklevel=4,
374 )
375
376 cf.register_option(
377 # TODO(2.0): change `validator=is_nonnegative_int` see GH#31569
378 "max_colwidth",
379 50,
380 max_colwidth_doc,
381 validator=is_instance_factory([type(None), int]),
382 cb=_deprecate_negative_int_max_colwidth,
383 )
384 if is_terminal():
385 max_cols = 0 # automatically determine optimal number of columns
386 else:
387 max_cols = 20 # cannot determine optimal number of columns
388 cf.register_option(
389 "max_columns", max_cols, pc_max_cols_doc, validator=is_nonnegative_int
390 )
391 cf.register_option(
392 "large_repr",
393 "truncate",
394 pc_large_repr_doc,
395 validator=is_one_of_factory(["truncate", "info"]),
396 )
397 cf.register_option("max_info_columns", 100, pc_max_info_cols_doc, validator=is_int)
398 cf.register_option(
399 "colheader_justify", "right", colheader_justify_doc, validator=is_text
400 )
401 cf.register_option("notebook_repr_html", True, pc_nb_repr_h_doc, validator=is_bool)
402 cf.register_option("pprint_nest_depth", 3, pc_pprint_nest_depth, validator=is_int)
403 cf.register_option("multi_sparse", True, pc_multi_sparse_doc, validator=is_bool)
404 cf.register_option("expand_frame_repr", True, pc_expand_repr_doc)
405 cf.register_option(
406 "show_dimensions",
407 "truncate",
408 pc_show_dimensions_doc,
409 validator=is_one_of_factory([True, False, "truncate"]),
410 )
411 cf.register_option("chop_threshold", None, pc_chop_threshold_doc)
412 cf.register_option("max_seq_items", 100, pc_max_seq_items)
413 cf.register_option(
414 "width", 80, pc_width_doc, validator=is_instance_factory([type(None), int])
415 )
416 cf.register_option(
417 "memory_usage",
418 True,
419 pc_memory_usage_doc,
420 validator=is_one_of_factory([None, True, False, "deep"]),
421 )
422 cf.register_option(
423 "unicode.east_asian_width", False, pc_east_asian_width_doc, validator=is_bool
424 )
425 cf.register_option(
426 "unicode.ambiguous_as_wide", False, pc_east_asian_width_doc, validator=is_bool
427 )
428 cf.register_option("latex.repr", False, pc_latex_repr_doc, validator=is_bool)
429 cf.register_option("latex.escape", True, pc_latex_escape, validator=is_bool)
430 cf.register_option("latex.longtable", False, pc_latex_longtable, validator=is_bool)
431 cf.register_option(
432 "latex.multicolumn", True, pc_latex_multicolumn, validator=is_bool
433 )
434 cf.register_option(
435 "latex.multicolumn_format", "l", pc_latex_multicolumn, validator=is_text
436 )
437 cf.register_option("latex.multirow", False, pc_latex_multirow, validator=is_bool)
438 cf.register_option(
439 "html.table_schema",
440 False,
441 pc_table_schema_doc,
442 validator=is_bool,
443 cb=table_schema_cb,
444 )
445 cf.register_option("html.border", 1, pc_html_border_doc, validator=is_int)
446 cf.register_option(
447 "html.use_mathjax", True, pc_html_use_mathjax_doc, validator=is_bool
448 )
449
450 tc_sim_interactive_doc = """
451 : boolean
452 Whether to simulate interactive mode for purposes of testing
453 """
454
455 with cf.config_prefix("mode"):
456 cf.register_option("sim_interactive", False, tc_sim_interactive_doc)
457
458 use_inf_as_null_doc = """
459 : boolean
460 use_inf_as_null had been deprecated and will be removed in a future
461 version. Use `use_inf_as_na` instead.
462 """
463
464 use_inf_as_na_doc = """
465 : boolean
466 True means treat None, NaN, INF, -INF as NA (old way),
467 False means None and NaN are null, but INF, -INF are not NA
468 (new way).
469 """
470
471 # We don't want to start importing everything at the global context level
472 # or we'll hit circular deps.
473
474
475 def use_inf_as_na_cb(key):
476 from pandas.core.dtypes.missing import _use_inf_as_na
477
478 _use_inf_as_na(key)
479
480
481 with cf.config_prefix("mode"):
482 cf.register_option("use_inf_as_na", False, use_inf_as_na_doc, cb=use_inf_as_na_cb)
483 cf.register_option(
484 "use_inf_as_null", False, use_inf_as_null_doc, cb=use_inf_as_na_cb
485 )
486
487 cf.deprecate_option(
488 "mode.use_inf_as_null", msg=use_inf_as_null_doc, rkey="mode.use_inf_as_na"
489 )
490
491
492 # user warnings
493 chained_assignment = """
494 : string
495 Raise an exception, warn, or no action if trying to use chained assignment,
496 The default is warn
497 """
498
499 with cf.config_prefix("mode"):
500 cf.register_option(
501 "chained_assignment",
502 "warn",
503 chained_assignment,
504 validator=is_one_of_factory([None, "warn", "raise"]),
505 )
506
507
508 # Set up the io.excel specific reader configuration.
509 reader_engine_doc = """
510 : string
511 The default Excel reader engine for '{ext}' files. Available options:
512 auto, {others}.
513 """
514
515 _xls_options = ["xlrd"]
516 _xlsm_options = ["xlrd", "openpyxl"]
517 _xlsx_options = ["xlrd", "openpyxl"]
518 _ods_options = ["odf"]
519 _xlsb_options = ["pyxlsb"]
520
521
522 with cf.config_prefix("io.excel.xls"):
523 cf.register_option(
524 "reader",
525 "auto",
526 reader_engine_doc.format(ext="xls", others=", ".join(_xls_options)),
527 validator=str,
528 )
529
530 with cf.config_prefix("io.excel.xlsm"):
531 cf.register_option(
532 "reader",
533 "auto",
534 reader_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),
535 validator=str,
536 )
537
538
539 with cf.config_prefix("io.excel.xlsx"):
540 cf.register_option(
541 "reader",
542 "auto",
543 reader_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),
544 validator=str,
545 )
546
547
548 with cf.config_prefix("io.excel.ods"):
549 cf.register_option(
550 "reader",
551 "auto",
552 reader_engine_doc.format(ext="ods", others=", ".join(_ods_options)),
553 validator=str,
554 )
555
556 with cf.config_prefix("io.excel.xlsb"):
557 cf.register_option(
558 "reader",
559 "auto",
560 reader_engine_doc.format(ext="xlsb", others=", ".join(_xlsb_options)),
561 validator=str,
562 )
563
564 # Set up the io.excel specific writer configuration.
565 writer_engine_doc = """
566 : string
567 The default Excel writer engine for '{ext}' files. Available options:
568 auto, {others}.
569 """
570
571 _xls_options = ["xlwt"]
572 _xlsm_options = ["openpyxl"]
573 _xlsx_options = ["openpyxl", "xlsxwriter"]
574 _ods_options = ["odf"]
575
576
577 with cf.config_prefix("io.excel.xls"):
578 cf.register_option(
579 "writer",
580 "auto",
581 writer_engine_doc.format(ext="xls", others=", ".join(_xls_options)),
582 validator=str,
583 )
584
585 with cf.config_prefix("io.excel.xlsm"):
586 cf.register_option(
587 "writer",
588 "auto",
589 writer_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),
590 validator=str,
591 )
592
593
594 with cf.config_prefix("io.excel.xlsx"):
595 cf.register_option(
596 "writer",
597 "auto",
598 writer_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),
599 validator=str,
600 )
601
602
603 with cf.config_prefix("io.excel.ods"):
604 cf.register_option(
605 "writer",
606 "auto",
607 writer_engine_doc.format(ext="ods", others=", ".join(_ods_options)),
608 validator=str,
609 )
610
611
612 # Set up the io.parquet specific configuration.
613 parquet_engine_doc = """
614 : string
615 The default parquet reader/writer engine. Available options:
616 'auto', 'pyarrow', 'fastparquet', the default is 'auto'
617 """
618
619 with cf.config_prefix("io.parquet"):
620 cf.register_option(
621 "engine",
622 "auto",
623 parquet_engine_doc,
624 validator=is_one_of_factory(["auto", "pyarrow", "fastparquet"]),
625 )
626
627 # --------
628 # Plotting
629 # ---------
630
631 plotting_backend_doc = """
632 : str
633 The plotting backend to use. The default value is "matplotlib", the
634 backend provided with pandas. Other backends can be specified by
635 providing the name of the module that implements the backend.
636 """
637
638
639 def register_plotting_backend_cb(key):
640 if key == "matplotlib":
641 # We defer matplotlib validation, since it's the default
642 return
643 from pandas.plotting._core import _get_plot_backend
644
645 _get_plot_backend(key)
646
647
648 with cf.config_prefix("plotting"):
649 cf.register_option(
650 "backend",
651 defval="matplotlib",
652 doc=plotting_backend_doc,
653 validator=register_plotting_backend_cb,
654 )
655
656
657 register_converter_doc = """
658 : bool or 'auto'.
659 Whether to register converters with matplotlib's units registry for
660 dates, times, datetimes, and Periods. Toggling to False will remove
661 the converters, restoring any converters that pandas overwrote.
662 """
663
664
665 def register_converter_cb(key):
666 from pandas.plotting import (
667 deregister_matplotlib_converters,
668 register_matplotlib_converters,
669 )
670
671 if cf.get_option(key):
672 register_matplotlib_converters()
673 else:
674 deregister_matplotlib_converters()
675
676
677 with cf.config_prefix("plotting.matplotlib"):
678 cf.register_option(
679 "register_converters",
680 "auto",
681 register_converter_doc,
682 validator=is_one_of_factory(["auto", True, False]),
683 cb=register_converter_cb,
684 )
685
[end of pandas/core/config_init.py]
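The options registered above are consumed through pandas' public option API. A short usage sketch follows (standard pandas calls, not part of `config_init.py` itself).
```python
import pandas as pd

pd.set_option("display.max_rows", 20)
print(pd.get_option("display.max_rows"))       # 20

# option_context restores the previous value when the block exits.
with pd.option_context("display.precision", 3):
    print(pd.get_option("display.precision"))  # 3

pd.reset_option("display.max_rows")
pd.describe_option("display.max_rows")         # prints the registered doc string
```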
[start of pandas/util/_print_versions.py]
1 import codecs
2 import json
3 import locale
4 import os
5 import platform
6 import struct
7 import sys
8 from typing import Dict, Optional, Union
9
10 from pandas._typing import JSONSerializable
11 from pandas.compat._optional import VERSIONS, _get_version, import_optional_dependency
12
13
14 def _get_commit_hash() -> Optional[str]:
15 """
16 Use vendored versioneer code to get git hash, which handles
17 git worktree correctly.
18 """
19 from pandas._version import get_versions
20
21 versions = get_versions()
22 return versions["full-revisionid"]
23
24
25 def _get_sys_info() -> Dict[str, JSONSerializable]:
26 """
27 Returns system information as a JSON serializable dictionary.
28 """
29 uname_result = platform.uname()
30 language_code, encoding = locale.getlocale()
31 return {
32 "commit": _get_commit_hash(),
33 "python": ".".join(str(i) for i in sys.version_info),
34 "python-bits": struct.calcsize("P") * 8,
35 "OS": uname_result.system,
36 "OS-release": uname_result.release,
37 "Version": uname_result.version,
38 "machine": uname_result.machine,
39 "processor": uname_result.processor,
40 "byteorder": sys.byteorder,
41 "LC_ALL": os.environ.get("LC_ALL"),
42 "LANG": os.environ.get("LANG"),
43 "LOCALE": {"language-code": language_code, "encoding": encoding},
44 }
45
46
47 def _get_dependency_info() -> Dict[str, JSONSerializable]:
48 """
49 Returns dependency information as a JSON serializable dictionary.
50 """
51 deps = [
52 "pandas",
53 # required
54 "numpy",
55 "pytz",
56 "dateutil",
57 # install / build,
58 "pip",
59 "setuptools",
60 "Cython",
61 # test
62 "pytest",
63 "hypothesis",
64 # docs
65 "sphinx",
66 # Other, need a min version
67 "blosc",
68 "feather",
69 "xlsxwriter",
70 "lxml.etree",
71 "html5lib",
72 "pymysql",
73 "psycopg2",
74 "jinja2",
75 # Other, not imported.
76 "IPython",
77 "pandas_datareader",
78 ]
79 deps.extend(list(VERSIONS))
80
81 result: Dict[str, JSONSerializable] = {}
82 for modname in deps:
83 mod = import_optional_dependency(
84 modname, raise_on_missing=False, on_version="ignore"
85 )
86 result[modname] = _get_version(mod) if mod else None
87 return result
88
89
90 def show_versions(as_json: Union[str, bool] = False) -> None:
91 """
92 Provide useful information, important for bug reports.
93
94 It comprises info about hosting operation system, pandas version,
95 and versions of other installed relative packages.
96
97 Parameters
98 ----------
99 as_json : str or bool, default False
100 * If False, outputs info in a human readable form to the console.
101 * If str, it will be considered as a path to a file.
102 Info will be written to that file in JSON format.
103 * If True, outputs info in JSON format to the console.
104 """
105 sys_info = _get_sys_info()
106 deps = _get_dependency_info()
107
108 if as_json:
109 j = dict(system=sys_info, dependencies=deps)
110
111 if as_json is True:
112 print(j)
113 else:
114 assert isinstance(as_json, str) # needed for mypy
115 with codecs.open(as_json, "wb", encoding="utf8") as f:
116 json.dump(j, f, indent=2)
117
118 else:
119 assert isinstance(sys_info["LOCALE"], dict) # needed for mypy
120 language_code = sys_info["LOCALE"]["language-code"]
121 encoding = sys_info["LOCALE"]["encoding"]
122 sys_info["LOCALE"] = f"{language_code}.{encoding}"
123
124 maxlen = max(len(x) for x in deps)
125 print("\nINSTALLED VERSIONS")
126 print("------------------")
127 for k, v in sys_info.items():
128 print(f"{k:<{maxlen}}: {v}")
129 print("")
130 for k, v in deps.items():
131 print(f"{k:<{maxlen}}: {v}")
132
133
134 def main() -> int:
135 from optparse import OptionParser
136
137 parser = OptionParser()
138 parser.add_option(
139 "-j",
140 "--json",
141 metavar="FILE",
142 nargs=1,
143 help="Save output as JSON into file, pass in '-' to output to stdout",
144 )
145
146 (options, args) = parser.parse_args()
147
148 if options.json == "-":
149 options.json = True
150
151 show_versions(as_json=options.json)
152
153 return 0
154
155
156 if __name__ == "__main__":
157 sys.exit(main())
158
[end of pandas/util/_print_versions.py]
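A hedged illustration of the three `as_json` modes documented in `show_versions` above; `versions.json` is a placeholder path and the snippet is not part of the module.
```python
import pandas as pd

pd.show_versions()                         # human-readable listing on stdout
pd.show_versions(as_json=True)             # system/dependency info printed as a dict
pd.show_versions(as_json="versions.json")  # same info written to the given file as JSON
```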
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
01cf2f2d1d491663c60123f28afaebe0f72d948e
|
BUG: PeriodIndex.dtype comparison with string exhibits contradiction
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
index = pd.period_range(start='01Jan2019', end='01Dec2019', freq='M')
string = "period[M]"
index.dtype == string # True
index.dtype != string # also True
```
#### Problem description
`PeriodIndex.dtype` compares as both equal (`__eq__`) and not equal (`__ne__`) to the string representation of its frequency. I suspect that comparing the dtype to a string is not recommended, but the contradiction seemed worth reporting. I don't know when this was introduced, but it did not occur in 0.25.3.
#### Expected Output
One of the two comparisons to evaluate to `False`.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : a34a408e854b6300dbbdd0b29026d19bc5477d73
python : 3.8.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.14.198-152.320.amzn2.x86_64
Version : #1 SMP Wed Sep 23 23:57:28 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.0.dev0+830.ga34a408e8
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.4
setuptools : 49.6.0.post20201009
Cython : 0.29.21
pytest : 6.1.1
hypothesis : 5.37.3
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : 1.3.7
lxml.etree : 4.6.1
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.3
bottleneck : 1.3.2
fsspec : 0.8.4
fastparquet : 0.4.1
gcsfs : 0.7.1
matplotlib : 3.3.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : 1.0.1
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.2
sqlalchemy : 1.3.20
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.16.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.51.2
</details>
|
2020-10-20T01:59:04Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -21,6 +21,7 @@ Fixed regressions
- Fixed regression in :meth:`Series.astype` converting ``None`` to ``"nan"`` when casting to string (:issue:`36904`)
- Fixed regression in :class:`RollingGroupby` causing a segmentation fault with Index of dtype object (:issue:`36727`)
- Fixed regression in :meth:`DataFrame.resample(...).apply(...)` raised ``AttributeError`` when input was a :class:`DataFrame` and only a :class:`Series` was evaluated (:issue:`36951`)
+- Fixed regression in :class:`PeriodDtype` comparing both equal and unequal to its string representation (:issue:`37265`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -907,6 +907,9 @@ def __eq__(self, other: Any) -> bool:
return isinstance(other, PeriodDtype) and self.freq == other.freq
+ def __ne__(self, other: Any) -> bool:
+ return not self.__eq__(other)
+
def __setstate__(self, state):
# for pickle compat. __getstate__ is defined in the
# PandasExtensionDtype superclass and uses the public properties to
</patch>
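The patch removes the contradiction by deriving `__ne__` directly from `__eq__`. A toy class showing the same pattern (this is not pandas' real dtype hierarchy, only an illustration):
```python
class ToyPeriodDtype:
    """Toy stand-in that, like PeriodDtype, also accepts a string spelling."""

    def __init__(self, freq):
        self.freq = freq

    def __eq__(self, other):
        if isinstance(other, str):
            return other == f"period[{self.freq}]"
        return isinstance(other, ToyPeriodDtype) and self.freq == other.freq

    def __ne__(self, other):
        # Mirrors the patch: __ne__ can never disagree with __eq__.
        return not self.__eq__(other)

    def __hash__(self):
        return hash(self.freq)


d = ToyPeriodDtype("M")
assert (d == "period[M]") is True
assert (d != "period[M]") is False  # the reported contradiction is gone
```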
|
[]
|
[]
| ||||
ipython__ipython-4521
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
unicode error when trying Audio('data/Bach Cello Suite #3.wav')
Hello,
I'm under ipython2.0dev/windows7 / python3
Trying the greatest and latest @ellisonbg notebook at
https://github.com/ellisonbg/talk-pydata-nyc2013/blob/master/Close to Data.ipynb
I get a unicode error at the wave example.
Does anyone else have the same problem? (I suspect a bug, as the file plays music if I click on it via the explorer.)
```
from IPython.display import display, Audio, Latex
a = Audio('data/Bach Cello Suite #3.wav')
```
```
--------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-4-35380f5da86b> in <module>()
----> 1 a = Audio('data/Bach Cello Suite #3.wav')
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\lib\display.py in __init__(self, data, filename, url, embed, rate, autoplay)
80 self.embed = True
81 self.autoplay = autoplay
---> 82 super(Audio, self).__init__(data=data, url=url, filename=filename)
83
84 if self.data is not None and not isinstance(self.data, bytes):
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\core\display.py in __init__(self, data, url, filename)
304 self.filename = None if filename is None else unicode_type(filename)
305
--> 306 self.reload()
307
308 def reload(self):
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\lib\display.py in reload(self)
89 import mimetypes
90 if self.embed:
---> 91 super(Audio, self).reload()
92
93 if self.filename is not None:
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\core\display.py in reload(self)
310 if self.filename is not None:
311 with open(self.filename, self._read_flags) as f:
--> 312 self.data = f.read()
313 elif self.url is not None:
314 try:
C:\Users\famille\Documents\winpython\WinPython-32bit-3.3.2.3ggplotip2\python-3.3.2\lib\encodings\cp1252.py in decode(self, input, final)
21 class IncrementalDecoder(codecs.IncrementalDecoder):
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
25 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 75132: character maps to <undefined>
```
</issue>
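A hedged illustration of the failure mode visible in the traceback: byte `0x9d` has no mapping in cp1252, so a text-mode read of binary audio data fails, while a binary-mode (`'rb'`) read returns raw bytes. `demo.wav` is a throwaway file created by the snippet, not the file from the report.
```python
payload = bytes([0x52, 0x49, 0x46, 0x46, 0x9D])  # includes the offending 0x9d byte
with open("demo.wav", "wb") as f:
    f.write(payload)

with open("demo.wav", "rb") as f:                # binary mode: raw bytes, no decoding
    print(len(f.read()), "bytes read")

try:
    with open("demo.wav", "r", encoding="cp1252") as f:  # text mode: mirrors the error
        f.read()
except UnicodeDecodeError as exc:
    print("UnicodeDecodeError:", exc)
```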
<code>
[start of README.rst]
1 ===========================================
2 IPython: Productive Interactive Computing
3 ===========================================
4
5 Overview
6 ========
7
8 Welcome to IPython. Our full documentation is available on `our website
9 <http://ipython.org/documentation.html>`_; if you downloaded a built source
10 distribution the ``docs/source`` directory contains the plaintext version of
11 these manuals. If you have Sphinx installed, you can build them by typing
12 ``cd docs; make html`` for local browsing.
13
14
15 Dependencies and supported Python versions
16 ==========================================
17
18 For full details, see the installation section of the manual. The basic parts
19 of IPython only need the Python standard library, but much of its more advanced
20 functionality requires extra packages.
21
22 Officially, IPython requires Python version 2.7, or 3.3 and above.
23 IPython 1.x is the last IPython version to support Python 2.6 and 3.2.
24
25
26 Instant running
27 ===============
28
29 You can run IPython from this directory without even installing it system-wide
30 by typing at the terminal::
31
32 $ python -m IPython
33
34
35 Development installation
36 ========================
37
38 If you want to hack on certain parts, e.g. the IPython notebook, in a clean
39 environment (such as a virtualenv) you can use ``pip`` to grab the necessary
40 dependencies quickly::
41
42 $ git clone --recursive https://github.com/ipython/ipython.git
43 $ cd ipython
44 $ pip install -e ".[notebook]"
45
46 This installs the necessary packages and symlinks IPython into your current
47 environment so that you can work on your local repo copy and run it from anywhere::
48
49 $ ipython notebook
50
51 The same process applies for other parts, such as the qtconsole (the
52 ``extras_require`` attribute in the setup.py file lists all the possibilities).
53
54 Git Hooks and Submodules
55 ************************
56
57 IPython now uses git submodules to ship its javascript dependencies.
58 If you run IPython from git master, you may need to update submodules once in a while with::
59
60 $ git submodule update
61
62 or::
63
64 $ python setup.py submodule
65
66 We have some git hooks for helping keep your submodules always in sync,
67 see our ``git-hooks`` directory for more info.
68
[end of README.rst]
[start of IPython/core/display.py]
1 # -*- coding: utf-8 -*-
2 """Top-level display functions for displaying object in different formats.
3
4 Authors:
5
6 * Brian Granger
7 """
8
9 #-----------------------------------------------------------------------------
10 # Copyright (C) 2013 The IPython Development Team
11 #
12 # Distributed under the terms of the BSD License. The full license is in
13 # the file COPYING, distributed as part of this software.
14 #-----------------------------------------------------------------------------
15
16 #-----------------------------------------------------------------------------
17 # Imports
18 #-----------------------------------------------------------------------------
19
20 from __future__ import print_function
21
22 import os
23 import struct
24
25 from IPython.utils.py3compat import (string_types, cast_bytes_py2, cast_unicode,
26 unicode_type)
27
28 from .displaypub import publish_display_data
29
30 #-----------------------------------------------------------------------------
31 # utility functions
32 #-----------------------------------------------------------------------------
33
34 def _safe_exists(path):
35 """Check path, but don't let exceptions raise"""
36 try:
37 return os.path.exists(path)
38 except Exception:
39 return False
40
41 def _merge(d1, d2):
42 """Like update, but merges sub-dicts instead of clobbering at the top level.
43
44 Updates d1 in-place
45 """
46
47 if not isinstance(d2, dict) or not isinstance(d1, dict):
48 return d2
49 for key, value in d2.items():
50 d1[key] = _merge(d1.get(key), value)
51 return d1
52
53 def _display_mimetype(mimetype, objs, raw=False, metadata=None):
54 """internal implementation of all display_foo methods
55
56 Parameters
57 ----------
58 mimetype : str
59 The mimetype to be published (e.g. 'image/png')
60 objs : tuple of objects
61 The Python objects to display, or if raw=True raw text data to
62 display.
63 raw : bool
64 Are the data objects raw data or Python objects that need to be
65 formatted before display? [default: False]
66 metadata : dict (optional)
67 Metadata to be associated with the specific mimetype output.
68 """
69 if metadata:
70 metadata = {mimetype: metadata}
71 if raw:
72 # turn list of pngdata into list of { 'image/png': pngdata }
73 objs = [ {mimetype: obj} for obj in objs ]
74 display(*objs, raw=raw, metadata=metadata, include=[mimetype])
75
76 #-----------------------------------------------------------------------------
77 # Main functions
78 #-----------------------------------------------------------------------------
79
80 def display(*objs, **kwargs):
81 """Display a Python object in all frontends.
82
83 By default all representations will be computed and sent to the frontends.
84 Frontends can decide which representation is used and how.
85
86 Parameters
87 ----------
88 objs : tuple of objects
89 The Python objects to display.
90 raw : bool, optional
91 Are the objects to be displayed already mimetype-keyed dicts of raw display data,
92 or Python objects that need to be formatted before display? [default: False]
93 include : list or tuple, optional
94 A list of format type strings (MIME types) to include in the
95 format data dict. If this is set *only* the format types included
96 in this list will be computed.
97 exclude : list or tuple, optional
98 A list of format type strings (MIME types) to exclude in the format
99 data dict. If this is set all format types will be computed,
100 except for those included in this argument.
101 metadata : dict, optional
102 A dictionary of metadata to associate with the output.
103 mime-type keys in this dictionary will be associated with the individual
104 representation formats, if they exist.
105 """
106 raw = kwargs.get('raw', False)
107 include = kwargs.get('include')
108 exclude = kwargs.get('exclude')
109 metadata = kwargs.get('metadata')
110
111 from IPython.core.interactiveshell import InteractiveShell
112
113 if raw:
114 for obj in objs:
115 publish_display_data('display', obj, metadata)
116 else:
117 format = InteractiveShell.instance().display_formatter.format
118 for obj in objs:
119 format_dict, md_dict = format(obj, include=include, exclude=exclude)
120 if metadata:
121 # kwarg-specified metadata gets precedence
122 _merge(md_dict, metadata)
123 publish_display_data('display', format_dict, md_dict)
124
125
126 def display_pretty(*objs, **kwargs):
127 """Display the pretty (default) representation of an object.
128
129 Parameters
130 ----------
131 objs : tuple of objects
132 The Python objects to display, or if raw=True raw text data to
133 display.
134 raw : bool
135 Are the data objects raw data or Python objects that need to be
136 formatted before display? [default: False]
137 metadata : dict (optional)
138 Metadata to be associated with the specific mimetype output.
139 """
140 _display_mimetype('text/plain', objs, **kwargs)
141
142
143 def display_html(*objs, **kwargs):
144 """Display the HTML representation of an object.
145
146 Parameters
147 ----------
148 objs : tuple of objects
149 The Python objects to display, or if raw=True raw HTML data to
150 display.
151 raw : bool
152 Are the data objects raw data or Python objects that need to be
153 formatted before display? [default: False]
154 metadata : dict (optional)
155 Metadata to be associated with the specific mimetype output.
156 """
157 _display_mimetype('text/html', objs, **kwargs)
158
159
160 def display_svg(*objs, **kwargs):
161 """Display the SVG representation of an object.
162
163 Parameters
164 ----------
165 objs : tuple of objects
166 The Python objects to display, or if raw=True raw svg data to
167 display.
168 raw : bool
169 Are the data objects raw data or Python objects that need to be
170 formatted before display? [default: False]
171 metadata : dict (optional)
172 Metadata to be associated with the specific mimetype output.
173 """
174 _display_mimetype('image/svg+xml', objs, **kwargs)
175
176
177 def display_png(*objs, **kwargs):
178 """Display the PNG representation of an object.
179
180 Parameters
181 ----------
182 objs : tuple of objects
183 The Python objects to display, or if raw=True raw png data to
184 display.
185 raw : bool
186 Are the data objects raw data or Python objects that need to be
187 formatted before display? [default: False]
188 metadata : dict (optional)
189 Metadata to be associated with the specific mimetype output.
190 """
191 _display_mimetype('image/png', objs, **kwargs)
192
193
194 def display_jpeg(*objs, **kwargs):
195 """Display the JPEG representation of an object.
196
197 Parameters
198 ----------
199 objs : tuple of objects
200 The Python objects to display, or if raw=True raw JPEG data to
201 display.
202 raw : bool
203 Are the data objects raw data or Python objects that need to be
204 formatted before display? [default: False]
205 metadata : dict (optional)
206 Metadata to be associated with the specific mimetype output.
207 """
208 _display_mimetype('image/jpeg', objs, **kwargs)
209
210
211 def display_latex(*objs, **kwargs):
212 """Display the LaTeX representation of an object.
213
214 Parameters
215 ----------
216 objs : tuple of objects
217 The Python objects to display, or if raw=True raw latex data to
218 display.
219 raw : bool
220 Are the data objects raw data or Python objects that need to be
221 formatted before display? [default: False]
222 metadata : dict (optional)
223 Metadata to be associated with the specific mimetype output.
224 """
225 _display_mimetype('text/latex', objs, **kwargs)
226
227
228 def display_json(*objs, **kwargs):
229 """Display the JSON representation of an object.
230
231 Note that not many frontends support displaying JSON.
232
233 Parameters
234 ----------
235 objs : tuple of objects
236 The Python objects to display, or if raw=True raw json data to
237 display.
238 raw : bool
239 Are the data objects raw data or Python objects that need to be
240 formatted before display? [default: False]
241 metadata : dict (optional)
242 Metadata to be associated with the specific mimetype output.
243 """
244 _display_mimetype('application/json', objs, **kwargs)
245
246
247 def display_javascript(*objs, **kwargs):
248 """Display the Javascript representation of an object.
249
250 Parameters
251 ----------
252 objs : tuple of objects
253 The Python objects to display, or if raw=True raw javascript data to
254 display.
255 raw : bool
256 Are the data objects raw data or Python objects that need to be
257 formatted before display? [default: False]
258 metadata : dict (optional)
259 Metadata to be associated with the specific mimetype output.
260 """
261 _display_mimetype('application/javascript', objs, **kwargs)
262
263 #-----------------------------------------------------------------------------
264 # Smart classes
265 #-----------------------------------------------------------------------------
266
267
268 class DisplayObject(object):
269 """An object that wraps data to be displayed."""
270
271 _read_flags = 'r'
272
273 def __init__(self, data=None, url=None, filename=None):
274 """Create a display object given raw data.
275
276 When this object is returned by an expression or passed to the
277 display function, it will result in the data being displayed
278 in the frontend. The MIME type of the data should match the
279 subclasses used, so the Png subclass should be used for 'image/png'
280 data. If the data is a URL, the data will first be downloaded
281         and then displayed.
282
283 Parameters
284 ----------
285 data : unicode, str or bytes
286 The raw data or a URL or file to load the data from
287 url : unicode
288 A URL to download the data from.
289 filename : unicode
290 Path to a local file to load the data from.
291 """
292 if data is not None and isinstance(data, string_types):
293 if data.startswith('http') and url is None:
294 url = data
295 filename = None
296 data = None
297 elif _safe_exists(data) and filename is None:
298 url = None
299 filename = data
300 data = None
301
302 self.data = data
303 self.url = url
304 self.filename = None if filename is None else unicode_type(filename)
305
306 self.reload()
307
308 def reload(self):
309 """Reload the raw data from file or URL."""
310 if self.filename is not None:
311 with open(self.filename, self._read_flags) as f:
312 self.data = f.read()
313 elif self.url is not None:
314 try:
315 import urllib2
316 response = urllib2.urlopen(self.url)
317 self.data = response.read()
318 # extract encoding from header, if there is one:
319 encoding = None
320 for sub in response.headers['content-type'].split(';'):
321 sub = sub.strip()
322 if sub.startswith('charset'):
323 encoding = sub.split('=')[-1].strip()
324 break
325 # decode data, if an encoding was specified
326 if encoding:
327 self.data = self.data.decode(encoding, 'replace')
328 except:
329 self.data = None
330
331 class Pretty(DisplayObject):
332
333 def _repr_pretty_(self):
334 return self.data
335
336
337 class HTML(DisplayObject):
338
339 def _repr_html_(self):
340 return self.data
341
342 def __html__(self):
343 """
344 This method exists to inform other HTML-using modules (e.g. Markupsafe,
345 htmltag, etc) that this object is HTML and does not need things like
346 special characters (<>&) escaped.
347 """
348 return self._repr_html_()
349
350
351 class Math(DisplayObject):
352
353 def _repr_latex_(self):
354 s = self.data.strip('$')
355 return "$$%s$$" % s
356
357
358 class Latex(DisplayObject):
359
360 def _repr_latex_(self):
361 return self.data
362
363
364 class SVG(DisplayObject):
365
366 # wrap data in a property, which extracts the <svg> tag, discarding
367 # document headers
368 _data = None
369
370 @property
371 def data(self):
372 return self._data
373
374 @data.setter
375 def data(self, svg):
376 if svg is None:
377 self._data = None
378 return
379 # parse into dom object
380 from xml.dom import minidom
381 svg = cast_bytes_py2(svg)
382 x = minidom.parseString(svg)
383 # get svg tag (should be 1)
384 found_svg = x.getElementsByTagName('svg')
385 if found_svg:
386 svg = found_svg[0].toxml()
387 else:
388 # fallback on the input, trust the user
389 # but this is probably an error.
390 pass
391 svg = cast_unicode(svg)
392 self._data = svg
393
394 def _repr_svg_(self):
395 return self.data
396
397
398 class JSON(DisplayObject):
399
400 def _repr_json_(self):
401 return self.data
402
403 css_t = """$("head").append($("<link/>").attr({
404 rel: "stylesheet",
405 type: "text/css",
406 href: "%s"
407 }));
408 """
409
410 lib_t1 = """$.getScript("%s", function () {
411 """
412 lib_t2 = """});
413 """
414
415 class Javascript(DisplayObject):
416
417 def __init__(self, data=None, url=None, filename=None, lib=None, css=None):
418 """Create a Javascript display object given raw data.
419
420 When this object is returned by an expression or passed to the
421 display function, it will result in the data being displayed
422 in the frontend. If the data is a URL, the data will first be
423 downloaded and then displayed.
424
425 In the Notebook, the containing element will be available as `element`,
426 and jQuery will be available. The output area starts hidden, so if
427 the js appends content to `element` that should be visible, then
428 it must call `container.show()` to unhide the area.
429
430 Parameters
431 ----------
432 data : unicode, str or bytes
433 The Javascript source code or a URL to download it from.
434 url : unicode
435 A URL to download the data from.
436 filename : unicode
437 Path to a local file to load the data from.
438 lib : list or str
439 A sequence of Javascript library URLs to load asynchronously before
440 running the source code. The full URLs of the libraries should
441 be given. A single Javascript library URL can also be given as a
442 string.
443 css: : list or str
444 A sequence of css files to load before running the source code.
445 The full URLs of the css files should be given. A single css URL
446 can also be given as a string.
447 """
448 if isinstance(lib, string_types):
449 lib = [lib]
450 elif lib is None:
451 lib = []
452 if isinstance(css, string_types):
453 css = [css]
454 elif css is None:
455 css = []
456 if not isinstance(lib, (list,tuple)):
457 raise TypeError('expected sequence, got: %r' % lib)
458 if not isinstance(css, (list,tuple)):
459 raise TypeError('expected sequence, got: %r' % css)
460 self.lib = lib
461 self.css = css
462 super(Javascript, self).__init__(data=data, url=url, filename=filename)
463
464 def _repr_javascript_(self):
465 r = ''
466 for c in self.css:
467 r += css_t % c
468 for l in self.lib:
469 r += lib_t1 % l
470 r += self.data
471 r += lib_t2*len(self.lib)
472 return r
473
474 # constants for identifying png/jpeg data
475 _PNG = b'\x89PNG\r\n\x1a\n'
476 _JPEG = b'\xff\xd8'
477
478 def _pngxy(data):
479 """read the (width, height) from a PNG header"""
480 ihdr = data.index(b'IHDR')
481 # next 8 bytes are width/height
482 w4h4 = data[ihdr+4:ihdr+12]
483 return struct.unpack('>ii', w4h4)
484
485 def _jpegxy(data):
486 """read the (width, height) from a JPEG header"""
487 # adapted from http://www.64lines.com/jpeg-width-height
488
489 idx = 4
490 while True:
491 block_size = struct.unpack('>H', data[idx:idx+2])[0]
492 idx = idx + block_size
493 if data[idx:idx+2] == b'\xFF\xC0':
494 # found Start of Frame
495 iSOF = idx
496 break
497 else:
498 # read another block
499 idx += 2
500
501 h, w = struct.unpack('>HH', data[iSOF+5:iSOF+9])
502 return w, h
503
504 class Image(DisplayObject):
505
506 _read_flags = 'rb'
507 _FMT_JPEG = u'jpeg'
508 _FMT_PNG = u'png'
509 _ACCEPTABLE_EMBEDDINGS = [_FMT_JPEG, _FMT_PNG]
510
511 def __init__(self, data=None, url=None, filename=None, format=u'png', embed=None, width=None, height=None, retina=False):
512 """Create a PNG/JPEG image object given raw data.
513
514 When this object is returned by an input cell or passed to the
515 display function, it will result in the image being displayed
516 in the frontend.
517
518 Parameters
519 ----------
520 data : unicode, str or bytes
521 The raw image data or a URL or filename to load the data from.
522 This always results in embedded image data.
523 url : unicode
524 A URL to download the data from. If you specify `url=`,
525 the image data will not be embedded unless you also specify `embed=True`.
526 filename : unicode
527 Path to a local file to load the data from.
528 Images from a file are always embedded.
529 format : unicode
530             The format of the image data (png/jpeg/jpg). If a filename or URL is given,
531             the format will be inferred from the filename extension.
532 embed : bool
533 Should the image data be embedded using a data URI (True) or be
534 loaded using an <img> tag. Set this to True if you want the image
535 to be viewable later with no internet connection in the notebook.
536
537 Default is `True`, unless the keyword argument `url` is set, then
538 default value is `False`.
539
540 Note that QtConsole is not able to display images if `embed` is set to `False`
541 width : int
542 Width to which to constrain the image in html
543 height : int
544 Height to which to constrain the image in html
545 retina : bool
546 Automatically set the width and height to half of the measured
547 width and height.
548 This only works for embedded images because it reads the width/height
549 from image data.
550 For non-embedded images, you can just set the desired display width
551 and height directly.
552
553 Examples
554 --------
555 # embedded image data, works in qtconsole and notebook
556 # when passed positionally, the first arg can be any of raw image data,
557 # a URL, or a filename from which to load image data.
558 # The result is always embedding image data for inline images.
559 Image('http://www.google.fr/images/srpr/logo3w.png')
560 Image('/path/to/image.jpg')
561 Image(b'RAW_PNG_DATA...')
562
563 # Specifying Image(url=...) does not embed the image data,
564 # it only generates `<img>` tag with a link to the source.
565 # This will not work in the qtconsole or offline.
566 Image(url='http://www.google.fr/images/srpr/logo3w.png')
567
568 """
569 if filename is not None:
570 ext = self._find_ext(filename)
571 elif url is not None:
572 ext = self._find_ext(url)
573 elif data is None:
574 raise ValueError("No image data found. Expecting filename, url, or data.")
575 elif isinstance(data, string_types) and (
576 data.startswith('http') or _safe_exists(data)
577 ):
578 ext = self._find_ext(data)
579 else:
580 ext = None
581
582 if ext is not None:
583 format = ext.lower()
584 if ext == u'jpg' or ext == u'jpeg':
585 format = self._FMT_JPEG
586 if ext == u'png':
587 format = self._FMT_PNG
588 elif isinstance(data, bytes) and format == 'png':
589 # infer image type from image data header,
590 # only if format might not have been specified.
591 if data[:2] == _JPEG:
592 format = 'jpeg'
593
594 self.format = unicode_type(format).lower()
595 self.embed = embed if embed is not None else (url is None)
596
597 if self.embed and self.format not in self._ACCEPTABLE_EMBEDDINGS:
598 raise ValueError("Cannot embed the '%s' image format" % (self.format))
599 self.width = width
600 self.height = height
601 self.retina = retina
602 super(Image, self).__init__(data=data, url=url, filename=filename)
603
604 if retina:
605 self._retina_shape()
606
607 def _retina_shape(self):
608 """load pixel-doubled width and height from image data"""
609 if not self.embed:
610 return
611 if self.format == 'png':
612 w, h = _pngxy(self.data)
613 elif self.format == 'jpeg':
614 w, h = _jpegxy(self.data)
615 else:
616 # retina only supports png
617 return
618 self.width = w // 2
619 self.height = h // 2
620
621 def reload(self):
622 """Reload the raw data from file or URL."""
623 if self.embed:
624 super(Image,self).reload()
625 if self.retina:
626 self._retina_shape()
627
628 def _repr_html_(self):
629 if not self.embed:
630 width = height = ''
631 if self.width:
632 width = ' width="%d"' % self.width
633 if self.height:
634 height = ' height="%d"' % self.height
635 return u'<img src="%s"%s%s/>' % (self.url, width, height)
636
637 def _data_and_metadata(self):
638 """shortcut for returning metadata with shape information, if defined"""
639 md = {}
640 if self.width:
641 md['width'] = self.width
642 if self.height:
643 md['height'] = self.height
644 if md:
645 return self.data, md
646 else:
647 return self.data
648
649 def _repr_png_(self):
650 if self.embed and self.format == u'png':
651 return self._data_and_metadata()
652
653 def _repr_jpeg_(self):
654 if self.embed and (self.format == u'jpeg' or self.format == u'jpg'):
655 return self._data_and_metadata()
656
657 def _find_ext(self, s):
658 return unicode_type(s.split('.')[-1].lower())
659
660
661 def clear_output(wait=False):
662 """Clear the output of the current cell receiving output.
663
664 Parameters
665 ----------
666 wait : bool [default: false]
667 Wait to clear the output until new output is available to replace it."""
668 from IPython.core.interactiveshell import InteractiveShell
669 if InteractiveShell.initialized():
670 InteractiveShell.instance().display_pub.clear_output(wait)
671 else:
672 from IPython.utils import io
673 print('\033[2K\r', file=io.stdout, end='')
674 io.stdout.flush()
675 print('\033[2K\r', file=io.stderr, end='')
676 io.stderr.flush()
677
[end of IPython/core/display.py]
[start of IPython/lib/display.py]
1 """Various display related classes.
2
3 Authors : MinRK, gregcaporaso, dannystaple
4 """
5 from os.path import exists, isfile, splitext, abspath, join, isdir
6 from os import walk, sep
7
8 from IPython.core.display import DisplayObject
9
10
11 class Audio(DisplayObject):
12 """Create an audio object.
13
14 When this object is returned by an input cell or passed to the
15 display function, it will result in Audio controls being displayed
16 in the frontend (only works in the notebook).
17
18 Parameters
19 ----------
20 data : numpy array, list, unicode, str or bytes
21 Can be a
22 * Numpy 1d array containing the desired waveform (mono)
23 * List of float or integer representing the waveform (mono)
24 * String containing the filename
25 * Bytestring containing raw PCM data or
26 * URL pointing to a file on the web.
27
28 If the array option is used the waveform will be normalized.
29
30 If a filename or url is used the format support will be browser
31 dependent.
32 url : unicode
33 A URL to download the data from.
34 filename : unicode
35 Path to a local file to load the data from.
36 embed : boolean
37 Should the image data be embedded using a data URI (True) or should
38         the original source be referenced. Set this to True if you want the
39         audio to be playable later with no internet connection in the notebook.
40
41 Default is `True`, unless the keyword argument `url` is set, then
42 default value is `False`.
43 rate : integer
44 The sampling rate of the raw data.
45 Only required when data parameter is being used as an array
46 autoplay : bool
47 Set to True if the audio should immediately start playing.
48 Default is `False`.
49
50 Examples
51 --------
52
53 # Generate a sound
54 import numpy as np
55 framerate = 44100
56 t = np.linspace(0,5,framerate*5)
57     data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t)
58 Audio(data,rate=framerate)
59
60 Audio("http://www.nch.com.au/acm/8k16bitpcm.wav")
61 Audio(url="http://www.w3schools.com/html/horse.ogg")
62
63 Audio('/path/to/sound.wav')
64 Audio(filename='/path/to/sound.ogg')
65
66 Audio(b'RAW_WAV_DATA..)
67 Audio(data=b'RAW_WAV_DATA..)
68
69 """
70
71 def __init__(self, data=None, filename=None, url=None, embed=None, rate=None, autoplay=False):
72 if filename is None and url is None and data is None:
73 raise ValueError("No image data found. Expecting filename, url, or data.")
74 if embed is False and url is None:
75 raise ValueError("No url found. Expecting url when embed=False")
76
77 if url is not None and embed is not True:
78 self.embed = False
79 else:
80 self.embed = True
81 self.autoplay = autoplay
82 super(Audio, self).__init__(data=data, url=url, filename=filename)
83
84 if self.data is not None and not isinstance(self.data, bytes):
85 self.data = self._make_wav(data,rate)
86
87 def reload(self):
88 """Reload the raw data from file or URL."""
89 import mimetypes
90 if self.embed:
91 super(Audio, self).reload()
92
93 if self.filename is not None:
94 self.mimetype = mimetypes.guess_type(self.filename)[0]
95 elif self.url is not None:
96 self.mimetype = mimetypes.guess_type(self.url)[0]
97 else:
98 self.mimetype = "audio/wav"
99
100 def _make_wav(self, data, rate):
101 """ Transform a numpy array to a PCM bytestring """
102 import struct
103 from io import BytesIO
104 import wave
105 try:
106 import numpy as np
107 data = np.array(data,dtype=float)
108 if len(data.shape) > 1:
109                 raise ValueError("encoding of stereo PCM signals is unsupported")
110 scaled = np.int16(data/np.max(np.abs(data))*32767).tolist()
111 except ImportError:
112 maxabsvalue = float(max([abs(x) for x in data]))
113 scaled = [int(x/maxabsvalue*32767) for x in data]
114 fp = BytesIO()
115 waveobj = wave.open(fp,mode='wb')
116 waveobj.setnchannels(1)
117 waveobj.setframerate(rate)
118 waveobj.setsampwidth(2)
119 waveobj.setcomptype('NONE','NONE')
120 waveobj.writeframes(b''.join([struct.pack('<h',x) for x in scaled]))
121 val = fp.getvalue()
122 waveobj.close()
123 return val
124
125 def _data_and_metadata(self):
126 """shortcut for returning metadata with url information, if defined"""
127 md = {}
128 if self.url:
129 md['url'] = self.url
130 if md:
131 return self.data, md
132 else:
133 return self.data
134
135 def _repr_html_(self):
136 src = """
137 <audio controls="controls" {autoplay}>
138 <source src="{src}" type="{type}" />
139 Your browser does not support the audio element.
140 </audio>
141 """
142 return src.format(src=self.src_attr(),type=self.mimetype, autoplay=self.autoplay_attr())
143
144 def src_attr(self):
145 import base64
146 if self.embed and (self.data is not None):
147             data = base64.b64encode(self.data).decode('ascii')
148 return """data:{type};base64,{base64}""".format(type=self.mimetype,
149 base64=data)
150 elif self.url is not None:
151 return self.url
152 else:
153 return ""
154
155 def autoplay_attr(self):
156 if(self.autoplay):
157 return 'autoplay="autoplay"'
158 else:
159 return ''
160
161 class IFrame(object):
162 """
163 Generic class to embed an iframe in an IPython notebook
164 """
165
166 iframe = """
167 <iframe
168 width="{width}"
169             height="{height}"
170 src="{src}{params}"
171 frameborder="0"
172 allowfullscreen
173 ></iframe>
174 """
175
176 def __init__(self, src, width, height, **kwargs):
177 self.src = src
178 self.width = width
179 self.height = height
180 self.params = kwargs
181
182 def _repr_html_(self):
183 """return the embed iframe"""
184 if self.params:
185 from urllib import urlencode
186 params = "?" + urlencode(self.params)
187 else:
188 params = ""
189 return self.iframe.format(src=self.src,
190 width=self.width,
191 height=self.height,
192 params=params)
193
194 class YouTubeVideo(IFrame):
195 """Class for embedding a YouTube Video in an IPython session, based on its video id.
196
197 e.g. to embed the video on this page:
198
199 http://www.youtube.com/watch?v=foo
200
201 you would do:
202
203 vid = YouTubeVideo("foo")
204 display(vid)
205
206 To start from 30 seconds:
207
208 vid = YouTubeVideo("abc", start=30)
209 display(vid)
210
211 To calculate seconds from time as hours, minutes, seconds use:
212 start=int(timedelta(hours=1, minutes=46, seconds=40).total_seconds())
213
214 Other parameters can be provided as documented at
215 https://developers.google.com/youtube/player_parameters#parameter-subheader
216 """
217
218 def __init__(self, id, width=400, height=300, **kwargs):
219 src = "http://www.youtube.com/embed/{0}".format(id)
220 super(YouTubeVideo, self).__init__(src, width, height, **kwargs)
221
222 class VimeoVideo(IFrame):
223 """
224 Class for embedding a Vimeo video in an IPython session, based on its video id.
225 """
226
227 def __init__(self, id, width=400, height=300, **kwargs):
228 src="http://player.vimeo.com/video/{0}".format(id)
229 super(VimeoVideo, self).__init__(src, width, height, **kwargs)
230
231 class ScribdDocument(IFrame):
232 """
233 Class for embedding a Scribd document in an IPython session
234
235     Use the start_page param to specify a starting point in the document.
236     Use the view_mode param to specify the display type: one of scroll | slideshow | book.
237
238     e.g. to display Wes' foundational paper about pandas in book mode, starting from page 3
239
240 ScribdDocument(71048089, width=800, height=400, start_page=3, view_mode="book")
241 """
242
243 def __init__(self, id, width=400, height=300, **kwargs):
244 src="http://www.scribd.com/embeds/{0}/content".format(id)
245 super(ScribdDocument, self).__init__(src, width, height, **kwargs)
246
247 class FileLink(object):
248 """Class for embedding a local file link in an IPython session, based on path
249
250 e.g. to embed a link that was generated in the IPython notebook as my/data.txt
251
252 you would do::
253
254 local_file = FileLink("my/data.txt")
255 display(local_file)
256
257 or in the HTML notebook, just::
258
259 FileLink("my/data.txt")
260 """
261
262 html_link_str = "<a href='%s' target='_blank'>%s</a>"
263
264 def __init__(self,
265 path,
266 url_prefix='files/',
267 result_html_prefix='',
268 result_html_suffix='<br>'):
269 """
270 Parameters
271 ----------
272 path : str
273 path to the file or directory that should be formatted
274         url_prefix : str
275             prefix to be prepended to all files to form a working link [default:
276             'files/']
277 result_html_prefix : str
278             text to prepend to the link [default: none]
279         result_html_suffix : str
280             text to append at the end of the link [default: '<br>']
281 """
282 if isdir(path):
283 raise ValueError("Cannot display a directory using FileLink. "
284 "Use FileLinks to display '%s'." % path)
285 self.path = path
286 self.url_prefix = url_prefix
287 self.result_html_prefix = result_html_prefix
288 self.result_html_suffix = result_html_suffix
289
290 def _format_path(self):
291 fp = ''.join([self.url_prefix,self.path])
292 return ''.join([self.result_html_prefix,
293 self.html_link_str % (fp, self.path),
294 self.result_html_suffix])
295
296 def _repr_html_(self):
297 """return html link to file
298 """
299 if not exists(self.path):
300 return ("Path (<tt>%s</tt>) doesn't exist. "
301 "It may still be in the process of "
302 "being generated, or you may have the "
303 "incorrect path." % self.path)
304
305 return self._format_path()
306
307 def __repr__(self):
308 """return absolute path to file
309 """
310 return abspath(self.path)
311
312 class FileLinks(FileLink):
313 """Class for embedding local file links in an IPython session, based on path
314
315 e.g. to embed links to files that were generated in the IPython notebook under my/data
316
317 you would do:
318
319 local_files = FileLinks("my/data")
320 display(local_files)
321
322 or in the HTML notebook, just
323
324 FileLinks("my/data")
325
326 """
327 def __init__(self,
328 path,
329 url_prefix='files/',
330 included_suffixes=None,
331 result_html_prefix='',
332 result_html_suffix='<br>',
333 notebook_display_formatter=None,
334 terminal_display_formatter=None):
335 """
336 included_suffixes : list of filename suffixes to include when
337 formatting output [default: include all files]
338
339         See the FileLink (baseclass of FileLinks) docstring for
340 information on additional parameters.
341
342 notebook_display_formatter : func used to format links for display
343 in the notebook. See discussion of formatter function below.
344
345 terminal_display_formatter : func used to format links for display
346 in the terminal. See discussion of formatter function below.
347
348
349 Passing custom formatter functions
350 ----------------------------------
351 Formatter functions must be of the form:
352 f(dirname, fnames, included_suffixes)
353 dirname : the name of a directory (a string),
354 fnames : a list of the files in that directory
355 included_suffixes : a list of the file suffixes that should be
356 included in the output (passing None means
357 to include all suffixes in the output in
358 the built-in formatters)
359
360             returns a list of lines that will be printed in the
361 notebook (if passing notebook_display_formatter) or the terminal
362 (if passing terminal_display_formatter). This function is iterated
363 over for each directory in self.path. Default formatters are in
364             place; custom formatters can be passed here to support alternative formatting.
365
366 """
367 if isfile(path):
368 raise ValueError("Cannot display a file using FileLinks. "
369 "Use FileLink to display '%s'." % path)
370 self.included_suffixes = included_suffixes
371         # remove trailing slashes for more consistent output formatting
372 path = path.rstrip('/')
373
374 self.path = path
375 self.url_prefix = url_prefix
376 self.result_html_prefix = result_html_prefix
377 self.result_html_suffix = result_html_suffix
378
379 self.notebook_display_formatter = \
380 notebook_display_formatter or self._get_notebook_display_formatter()
381 self.terminal_display_formatter = \
382 terminal_display_formatter or self._get_terminal_display_formatter()
383
384 def _get_display_formatter(self,
385 dirname_output_format,
386 fname_output_format,
387 fp_format,
388 fp_cleaner=None):
389 """ generate built-in formatter function
390
391 this is used to define both the notebook and terminal built-in
392 formatters as they only differ by some wrapper text for each entry
393
394 dirname_output_format: string to use for formatting directory
395 names, dirname will be substituted for a single "%s" which
396 must appear in this string
397 fname_output_format: string to use for formatting file names,
398 if a single "%s" appears in the string, fname will be substituted
399 if two "%s" appear in the string, the path to fname will be
400 substituted for the first and fname will be substituted for the
401 second
402 fp_format: string to use for formatting filepaths, must contain
403             exactly two "%s" and the dirname will be substituted for the first
404 and fname will be substituted for the second
405 """
406 def f(dirname, fnames, included_suffixes=None):
407 result = []
408 # begin by figuring out which filenames, if any,
409 # are going to be displayed
410 display_fnames = []
411 for fname in fnames:
412 if (isfile(join(dirname,fname)) and
413 (included_suffixes == None or
414 splitext(fname)[1] in included_suffixes)):
415 display_fnames.append(fname)
416
417 if len(display_fnames) == 0:
418 # if there are no filenames to display, don't print anything
419 # (not even the directory name)
420 pass
421 else:
422 # otherwise print the formatted directory name followed by
423 # the formatted filenames
424 dirname_output_line = dirname_output_format % dirname
425 result.append(dirname_output_line)
426 for fname in display_fnames:
427 fp = fp_format % (dirname,fname)
428 if fp_cleaner is not None:
429 fp = fp_cleaner(fp)
430 try:
431 # output can include both a filepath and a filename...
432 fname_output_line = fname_output_format % (fp, fname)
433 except TypeError:
434 # ... or just a single filepath
435 fname_output_line = fname_output_format % fname
436 result.append(fname_output_line)
437 return result
438 return f
439
440 def _get_notebook_display_formatter(self,
441 spacer=" "):
442 """ generate function to use for notebook formatting
443 """
444 dirname_output_format = \
445 self.result_html_prefix + "%s/" + self.result_html_suffix
446 fname_output_format = \
447 self.result_html_prefix + spacer + self.html_link_str + self.result_html_suffix
448 fp_format = self.url_prefix + '%s/%s'
449 if sep == "\\":
450 # Working on a platform where the path separator is "\", so
451 # must convert these to "/" for generating a URI
452 def fp_cleaner(fp):
453                 # Replace all occurrences of backslash ("\") with a forward
454 # slash ("/") - this is necessary on windows when a path is
455 # provided as input, but we must link to a URI
456 return fp.replace('\\','/')
457 else:
458 fp_cleaner = None
459
460 return self._get_display_formatter(dirname_output_format,
461 fname_output_format,
462 fp_format,
463 fp_cleaner)
464
465 def _get_terminal_display_formatter(self,
466 spacer=" "):
467 """ generate function to use for terminal formatting
468 """
469 dirname_output_format = "%s/"
470 fname_output_format = spacer + "%s"
471 fp_format = '%s/%s'
472
473 return self._get_display_formatter(dirname_output_format,
474 fname_output_format,
475 fp_format)
476
477 def _format_path(self):
478 result_lines = []
479 walked_dir = list(walk(self.path))
480 walked_dir.sort()
481 for dirname, subdirs, fnames in walked_dir:
482 result_lines += self.notebook_display_formatter(dirname, fnames, self.included_suffixes)
483 return '\n'.join(result_lines)
484
485 def __repr__(self):
486 """return newline-separated absolute paths
487 """
488 result_lines = []
489 walked_dir = list(walk(self.path))
490 walked_dir.sort()
491 for dirname, subdirs, fnames in walked_dir:
492 result_lines += self.terminal_display_formatter(dirname, fnames, self.included_suffixes)
493 return '\n'.join(result_lines)
494
[end of IPython/lib/display.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
210694b9dec80c8bd4585c58e145603c8144f176
|
unicode error when trying Audio('data/Bach Cello Suite #3.wav')
Hello,
I'm running IPython 2.0-dev on Windows 7 with Python 3.
Trying the greatest and latest @ellisonbg notebook at
https://github.com/ellisonbg/talk-pydata-nyc2013/blob/master/Close to Data.ipynb
I get a unicode error at the wave example.
Does anyone else have the same problem? (I suspect a bug, as the file plays fine when I open it via Explorer.)
```
from IPython.display import display, Audio, Latex
a = Audio('data/Bach Cello Suite #3.wav')
```
```
--------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-4-35380f5da86b> in <module>()
----> 1 a = Audio('data/Bach Cello Suite #3.wav')
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\lib\display.py in __init__(self, data, filename, url, embed, rate, autoplay)
80 self.embed = True
81 self.autoplay = autoplay
---> 82 super(Audio, self).__init__(data=data, url=url, filename=filename)
83
84 if self.data is not None and not isinstance(self.data, bytes):
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\core\display.py in __init__(self, data, url, filename)
304 self.filename = None if filename is None else unicode_type(filename)
305
--> 306 self.reload()
307
308 def reload(self):
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\lib\display.py in reload(self)
89 import mimetypes
90 if self.embed:
---> 91 super(Audio, self).reload()
92
93 if self.filename is not None:
c:\users\famille\documents\winpython\winpython-32bit-3.3.2.3ggplotip2\python-3.3.2\src\master\IPython\core\display.py in reload(self)
310 if self.filename is not None:
311 with open(self.filename, self._read_flags) as f:
--> 312 self.data = f.read()
313 elif self.url is not None:
314 try:
C:\Users\famille\Documents\winpython\WinPython-32bit-3.3.2.3ggplotip2\python-3.3.2\lib\encodings\cp1252.py in decode(self, input, final)
21 class IncrementalDecoder(codecs.IncrementalDecoder):
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
25 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 75132: character maps to <undefined>
```
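For reference, a minimal sketch of what seems to be going on (the WAV path below is just a placeholder): the file is opened in text mode, so Python tries to decode the raw PCM bytes with the locale codec (cp1252 here) and fails, whereas a binary-mode read returns the bytes untouched.

```python
# Minimal sketch; 'example.wav' is a placeholder for any real WAV file.
# Text mode ('r') decodes the bytes with the locale codec (cp1252 on this
# Windows setup) and raises UnicodeDecodeError on arbitrary PCM data.
try:
    with open('example.wav', 'r') as f:
        f.read()
except UnicodeDecodeError as err:
    print('text-mode read fails:', err)

# Binary mode ('rb') hands back the raw bytes, which is what is needed
# to embed the audio data.
with open('example.wav', 'rb') as f:
    data = f.read()
print(type(data), len(data))
```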
|
2013-11-11T17:38:26Z
|
<patch>
diff --git a/IPython/lib/display.py b/IPython/lib/display.py
--- a/IPython/lib/display.py
+++ b/IPython/lib/display.py
@@ -67,7 +67,8 @@ class Audio(DisplayObject):
Audio(data=b'RAW_WAV_DATA..)
"""
-
+ _read_flags = 'rb'
+
def __init__(self, data=None, filename=None, url=None, embed=None, rate=None, autoplay=False):
if filename is None and url is None and data is None:
raise ValueError("No image data found. Expecting filename, url, or data.")
diff --git a/setupbase.py b/setupbase.py
--- a/setupbase.py
+++ b/setupbase.py
@@ -151,6 +151,7 @@ def find_package_data():
package_data = {
'IPython.config.profile' : ['README*', '*/*.py'],
'IPython.core.tests' : ['*.png', '*.jpg'],
+ 'IPython.lib.tests' : ['*.wav'],
'IPython.testing' : ['*.txt'],
'IPython.testing.plugin' : ['*.txt'],
'IPython.html' : ['templates/*'] + static_data,
</patch>
|
[]
|
[]
| ||||
ytdl-org__youtube-dl-7514
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
neteasemusic no longer works
</issue>
<code>
[start of README.md]
1 youtube-dl - download videos from youtube.com or other video platforms
2
3 - [INSTALLATION](#installation)
4 - [DESCRIPTION](#description)
5 - [OPTIONS](#options)
6 - [CONFIGURATION](#configuration)
7 - [OUTPUT TEMPLATE](#output-template)
8 - [FORMAT SELECTION](#format-selection)
9 - [VIDEO SELECTION](#video-selection)
10 - [FAQ](#faq)
11 - [DEVELOPER INSTRUCTIONS](#developer-instructions)
12 - [EMBEDDING YOUTUBE-DL](#embedding-youtube-dl)
13 - [BUGS](#bugs)
14 - [COPYRIGHT](#copyright)
15
16 # INSTALLATION
17
18 To install it right away for all UNIX users (Linux, OS X, etc.), type:
19
20 sudo curl https://yt-dl.org/latest/youtube-dl -o /usr/local/bin/youtube-dl
21 sudo chmod a+rx /usr/local/bin/youtube-dl
22
23 If you do not have curl, you can alternatively use a recent wget:
24
25 sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
26 sudo chmod a+rx /usr/local/bin/youtube-dl
27
28 Windows users can [download a .exe file](https://yt-dl.org/latest/youtube-dl.exe) and place it in their home directory or any other location on their [PATH](http://en.wikipedia.org/wiki/PATH_%28variable%29).
29
30 OS X users can install **youtube-dl** with [Homebrew](http://brew.sh/).
31
32 brew install youtube-dl
33
34 You can also use pip:
35
36 sudo pip install youtube-dl
37
38 Alternatively, refer to the [developer instructions](#developer-instructions) for how to check out and work with the git repository. For further options, including PGP signatures, see https://rg3.github.io/youtube-dl/download.html .
39
40 # DESCRIPTION
41 **youtube-dl** is a small command-line program to download videos from
42 YouTube.com and a few more sites. It requires the Python interpreter, version
43 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on
44 your Unix box, on Windows or on Mac OS X. It is released to the public domain,
45 which means you can modify it, redistribute it or use it however you like.
46
47 youtube-dl [OPTIONS] URL [URL...]
48
49 # OPTIONS
50 -h, --help Print this help text and exit
51 --version Print program version and exit
52 -U, --update Update this program to latest version. Make
53 sure that you have sufficient permissions
54 (run with sudo if needed)
55 -i, --ignore-errors Continue on download errors, for example to
56 skip unavailable videos in a playlist
57 --abort-on-error Abort downloading of further videos (in the
58 playlist or the command line) if an error
59 occurs
60 --dump-user-agent Display the current browser identification
61 --list-extractors List all supported extractors
62 --extractor-descriptions Output descriptions of all supported
63 extractors
64 --force-generic-extractor Force extraction to use the generic
65 extractor
66 --default-search PREFIX Use this prefix for unqualified URLs. For
67 example "gvsearch2:" downloads two videos
68 from google videos for youtube-dl "large
69 apple". Use the value "auto" to let
70 youtube-dl guess ("auto_warning" to emit a
71 warning when guessing). "error" just throws
72 an error. The default value "fixup_error"
73 repairs broken URLs, but emits an error if
74 this is not possible instead of searching.
75 --ignore-config Do not read configuration files. When given
76 in the global configuration file /etc
77 /youtube-dl.conf: Do not read the user
78 configuration in ~/.config/youtube-
79 dl/config (%APPDATA%/youtube-dl/config.txt
80 on Windows)
81 --flat-playlist Do not extract the videos of a playlist,
82 only list them.
83 --no-color Do not emit color codes in output
84
85 ## Network Options:
86 --proxy URL Use the specified HTTP/HTTPS proxy. Pass in
87 an empty string (--proxy "") for direct
88 connection
89 --socket-timeout SECONDS Time to wait before giving up, in seconds
90 --source-address IP Client-side IP address to bind to
91 (experimental)
92 -4, --force-ipv4 Make all connections via IPv4
93 (experimental)
94 -6, --force-ipv6 Make all connections via IPv6
95 (experimental)
96 --cn-verification-proxy URL Use this proxy to verify the IP address for
97 some Chinese sites. The default proxy
98 specified by --proxy (or none, if the
99 options is not present) is used for the
100 actual downloading. (experimental)
101
102 ## Video Selection:
103 --playlist-start NUMBER Playlist video to start at (default is 1)
104 --playlist-end NUMBER Playlist video to end at (default is last)
105 --playlist-items ITEM_SPEC Playlist video items to download. Specify
106 indices of the videos in the playlist
107 separated by commas like: "--playlist-items
108 1,2,5,8" if you want to download videos
109 indexed 1, 2, 5, 8 in the playlist. You can
110 specify range: "--playlist-items
111 1-3,7,10-13", it will download the videos
112 at index 1, 2, 3, 7, 10, 11, 12 and 13.
113 --match-title REGEX Download only matching titles (regex or
114 caseless sub-string)
115 --reject-title REGEX Skip download for matching titles (regex or
116 caseless sub-string)
117 --max-downloads NUMBER Abort after downloading NUMBER files
118 --min-filesize SIZE Do not download any videos smaller than
119 SIZE (e.g. 50k or 44.6m)
120 --max-filesize SIZE Do not download any videos larger than SIZE
121 (e.g. 50k or 44.6m)
122 --date DATE Download only videos uploaded in this date
123 --datebefore DATE Download only videos uploaded on or before
124 this date (i.e. inclusive)
125 --dateafter DATE Download only videos uploaded on or after
126 this date (i.e. inclusive)
127 --min-views COUNT Do not download any videos with less than
128 COUNT views
129 --max-views COUNT Do not download any videos with more than
130 COUNT views
131 --match-filter FILTER Generic video filter (experimental).
132 Specify any key (see help for -o for a list
133 of available keys) to match if the key is
134 present, !key to check if the key is not
135 present, key > NUMBER (like "comment_count >
136 12", also works with >=, <, <=, !=, =) to
137 compare against a number, and & to require
138 multiple matches. Values which are not
139 known are excluded unless you put a
140 question mark (?) after the operator. For
141 example, to only match videos that have
142 been liked more than 100 times and disliked
143 less than 50 times (or the dislike
144 functionality is not available at the given
145 service), but who also have a description,
146 use --match-filter "like_count > 100 &
147 dislike_count <? 50 & description" .
148 --no-playlist Download only the video, if the URL refers
149 to a video and a playlist.
150 --yes-playlist Download the playlist, if the URL refers to
151 a video and a playlist.
152 --age-limit YEARS Download only videos suitable for the given
153 age
154 --download-archive FILE Download only videos not listed in the
155 archive file. Record the IDs of all
156 downloaded videos in it.
157 --include-ads Download advertisements as well
158 (experimental)
159
160 ## Download Options:
161 -r, --rate-limit LIMIT Maximum download rate in bytes per second
162 (e.g. 50K or 4.2M)
163 -R, --retries RETRIES Number of retries (default is 10), or
164 "infinite".
165 --buffer-size SIZE Size of download buffer (e.g. 1024 or 16K)
166 (default is 1024)
167 --no-resize-buffer Do not automatically adjust the buffer
168 size. By default, the buffer size is
169 automatically resized from an initial value
170 of SIZE.
171 --playlist-reverse Download playlist videos in reverse order
172 --xattr-set-filesize Set file xattribute ytdl.filesize with
173 expected filesize (experimental)
174 --hls-prefer-native Use the native HLS downloader instead of
175 ffmpeg (experimental)
176 --external-downloader COMMAND Use the specified external downloader.
177 Currently supports
178 aria2c,axel,curl,httpie,wget
179 --external-downloader-args ARGS Give these arguments to the external
180 downloader
181
182 ## Filesystem Options:
183 -a, --batch-file FILE File containing URLs to download ('-' for
184 stdin)
185 --id Use only video ID in file name
186 -o, --output TEMPLATE Output filename template. Use %(title)s to
187 get the title, %(uploader)s for the
188 uploader name, %(uploader_id)s for the
189 uploader nickname if different,
190 %(autonumber)s to get an automatically
191 incremented number, %(ext)s for the
192 filename extension, %(format)s for the
193 format description (like "22 - 1280x720" or
194 "HD"), %(format_id)s for the unique id of
195 the format (like YouTube's itags: "137"),
196 %(upload_date)s for the upload date
197 (YYYYMMDD), %(extractor)s for the provider
198 (youtube, metacafe, etc), %(id)s for the
199 video id, %(playlist_title)s,
200 %(playlist_id)s, or %(playlist)s (=title if
201 present, ID otherwise) for the playlist the
202 video is in, %(playlist_index)s for the
203 position in the playlist. %(height)s and
204 %(width)s for the width and height of the
205 video format. %(resolution)s for a textual
206 description of the resolution of the video
207 format. %% for a literal percent. Use - to
208 output to stdout. Can also be used to
209 download to a different directory, for
210 example with -o '/my/downloads/%(uploader)s
211 /%(title)s-%(id)s.%(ext)s' .
212 --autonumber-size NUMBER Specify the number of digits in
213 %(autonumber)s when it is present in output
214 filename template or --auto-number option
215 is given
216 --restrict-filenames Restrict filenames to only ASCII
217 characters, and avoid "&" and spaces in
218 filenames
219 -A, --auto-number [deprecated; use -o
220 "%(autonumber)s-%(title)s.%(ext)s" ] Number
221 downloaded files starting from 00000
222 -t, --title [deprecated] Use title in file name
223 (default)
224 -l, --literal [deprecated] Alias of --title
225 -w, --no-overwrites Do not overwrite files
226 -c, --continue Force resume of partially downloaded files.
227 By default, youtube-dl will resume
228 downloads if possible.
229 --no-continue Do not resume partially downloaded files
230 (restart from beginning)
231 --no-part Do not use .part files - write directly
232 into output file
233 --no-mtime Do not use the Last-modified header to set
234 the file modification time
235 --write-description Write video description to a .description
236 file
237 --write-info-json Write video metadata to a .info.json file
238 --write-annotations Write video annotations to a
239 .annotations.xml file
240 --load-info FILE JSON file containing the video information
241 (created with the "--write-info-json"
242 option)
243 --cookies FILE File to read cookies from and dump cookie
244 jar in
245 --cache-dir DIR Location in the filesystem where youtube-dl
246 can store some downloaded information
247 permanently. By default $XDG_CACHE_HOME
248 /youtube-dl or ~/.cache/youtube-dl . At the
249 moment, only YouTube player files (for
250 videos with obfuscated signatures) are
251 cached, but that may change.
252 --no-cache-dir Disable filesystem caching
253 --rm-cache-dir Delete all filesystem cache files
254
255 ## Thumbnail images:
256 --write-thumbnail Write thumbnail image to disk
257 --write-all-thumbnails Write all thumbnail image formats to disk
258 --list-thumbnails Simulate and list all available thumbnail
259 formats
260
261 ## Verbosity / Simulation Options:
262 -q, --quiet Activate quiet mode
263 --no-warnings Ignore warnings
264 -s, --simulate Do not download the video and do not write
265 anything to disk
266 --skip-download Do not download the video
267 -g, --get-url Simulate, quiet but print URL
268 -e, --get-title Simulate, quiet but print title
269 --get-id Simulate, quiet but print id
270 --get-thumbnail Simulate, quiet but print thumbnail URL
271 --get-description Simulate, quiet but print video description
272 --get-duration Simulate, quiet but print video length
273 --get-filename Simulate, quiet but print output filename
274 --get-format Simulate, quiet but print output format
275 -j, --dump-json Simulate, quiet but print JSON information.
276 See --output for a description of available
277 keys.
278 -J, --dump-single-json Simulate, quiet but print JSON information
279 for each command-line argument. If the URL
280 refers to a playlist, dump the whole
281 playlist information in a single line.
282 --print-json Be quiet and print the video information as
283 JSON (video is still being downloaded).
284 --newline Output progress bar as new lines
285 --no-progress Do not print progress bar
286 --console-title Display progress in console titlebar
287 -v, --verbose Print various debugging information
288 --dump-pages Print downloaded pages encoded using base64
289 to debug problems (very verbose)
290 --write-pages Write downloaded intermediary pages to
291 files in the current directory to debug
292 problems
293 --print-traffic Display sent and read HTTP traffic
294 -C, --call-home Contact the youtube-dl server for debugging
295 --no-call-home Do NOT contact the youtube-dl server for
296 debugging
297
298 ## Workarounds:
299 --encoding ENCODING Force the specified encoding (experimental)
300 --no-check-certificate Suppress HTTPS certificate validation
301 --prefer-insecure Use an unencrypted connection to retrieve
302 information about the video. (Currently
303 supported only for YouTube)
304 --user-agent UA Specify a custom user agent
305 --referer URL Specify a custom referer, use if the video
306 access is restricted to one domain
307 --add-header FIELD:VALUE Specify a custom HTTP header and its value,
308 separated by a colon ':'. You can use this
309 option multiple times
310 --bidi-workaround Work around terminals that lack
311 bidirectional text support. Requires bidiv
312 or fribidi executable in PATH
313 --sleep-interval SECONDS Number of seconds to sleep before each
314 download.
315
316 ## Video Format Options:
317 -f, --format FORMAT Video format code, see the "FORMAT
318 SELECTION" for all the info
319 --all-formats Download all available video formats
320 --prefer-free-formats Prefer free video formats unless a specific
321 one is requested
322 -F, --list-formats List all available formats
323 --youtube-skip-dash-manifest Do not download the DASH manifests and
324 related data on YouTube videos
325 --merge-output-format FORMAT If a merge is required (e.g.
326 bestvideo+bestaudio), output to given
327 container format. One of mkv, mp4, ogg,
328 webm, flv. Ignored if no merge is required
329
330 ## Subtitle Options:
331 --write-sub Write subtitle file
332 --write-auto-sub Write automatic subtitle file (YouTube
333 only)
334 --all-subs Download all the available subtitles of the
335 video
336 --list-subs List all available subtitles for the video
337 --sub-format FORMAT Subtitle format, accepts formats
338 preference, for example: "srt" or
339 "ass/srt/best"
340 --sub-lang LANGS Languages of the subtitles to download
341 (optional) separated by commas, use IETF
342 language tags like 'en,pt'
343
344 ## Authentication Options:
345 -u, --username USERNAME Login with this account ID
346 -p, --password PASSWORD Account password. If this option is left
347 out, youtube-dl will ask interactively.
348 -2, --twofactor TWOFACTOR Two-factor auth code
349 -n, --netrc Use .netrc authentication data
350 --video-password PASSWORD Video password (vimeo, smotri, youku)
351
352 ## Post-processing Options:
353 -x, --extract-audio Convert video files to audio-only files
354 (requires ffmpeg or avconv and ffprobe or
355 avprobe)
356 --audio-format FORMAT Specify audio format: "best", "aac",
357 "vorbis", "mp3", "m4a", "opus", or "wav";
358 "best" by default
359 --audio-quality QUALITY Specify ffmpeg/avconv audio quality, insert
360 a value between 0 (better) and 9 (worse)
361 for VBR or a specific bitrate like 128K
362 (default 5)
363 --recode-video FORMAT Encode the video to another format if
364 necessary (currently supported:
365 mp4|flv|ogg|webm|mkv|avi)
366 --postprocessor-args ARGS Give these arguments to the postprocessor
367 -k, --keep-video Keep the video file on disk after the post-
368 processing; the video is erased by default
369 --no-post-overwrites Do not overwrite post-processed files; the
370 post-processed files are overwritten by
371 default
372 --embed-subs Embed subtitles in the video (only for mkv
373 and mp4 videos)
374 --embed-thumbnail Embed thumbnail in the audio as cover art
375 --add-metadata Write metadata to the video file
376 --metadata-from-title FORMAT Parse additional metadata like song title /
377 artist from the video title. The format
378 syntax is the same as --output, the parsed
379 parameters replace existing values.
380 Additional templates: %(album)s,
381 %(artist)s. Example: --metadata-from-title
382 "%(artist)s - %(title)s" matches a title
383 like "Coldplay - Paradise"
384 --xattrs Write metadata to the video file's xattrs
385 (using dublin core and xdg standards)
386 --fixup POLICY Automatically correct known faults of the
387 file. One of never (do nothing), warn (only
388 emit a warning), detect_or_warn (the
389 default; fix file if we can, warn
390 otherwise)
391 --prefer-avconv Prefer avconv over ffmpeg for running the
392 postprocessors (default)
393 --prefer-ffmpeg Prefer ffmpeg over avconv for running the
394 postprocessors
395 --ffmpeg-location PATH Location of the ffmpeg/avconv binary;
396 either the path to the binary or its
397 containing directory.
398 --exec CMD Execute a command on the file after
399 downloading, similar to find's -exec
400 syntax. Example: --exec 'adb push {}
401 /sdcard/Music/ && rm {}'
402 --convert-subtitles FORMAT Convert the subtitles to other format
403 (currently supported: srt|ass|vtt)
404
405 # CONFIGURATION
406
407 You can configure youtube-dl by placing any supported command line option to a configuration file. On Linux, the system wide configuration file is located at `/etc/youtube-dl.conf` and the user wide configuration file at `~/.config/youtube-dl/config`. On Windows, the user wide configuration file locations are `%APPDATA%\youtube-dl\config.txt` or `C:\Users\<user name>\youtube-dl.conf`. For example, with the following configuration file youtube-dl will always extract the audio, not copy the mtime and use a proxy:
408 ```
409 --extract-audio
410 --no-mtime
411 --proxy 127.0.0.1:3128
412 ```
413
414 You can use `--ignore-config` if you want to disable the configuration file for a particular youtube-dl run.
415
416 ### Authentication with `.netrc` file ###
417
418 You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with `--username` and `--password`) in order not to pass credentials as command line arguments on every youtube-dl execution and prevent tracking plain text passwords in the shell command history. You can achieve this using a [`.netrc` file](http://stackoverflow.com/tags/.netrc/info) on a per-extractor basis. For that you will need to create a `.netrc` file in your `$HOME` and restrict permissions to read/write by you only:
419 ```
420 touch $HOME/.netrc
421 chmod a-rwx,u+rw $HOME/.netrc
422 ```
423 After that you can add credentials for extractor in the following format, where *extractor* is the name of extractor in lowercase:
424 ```
425 machine <extractor> login <login> password <password>
426 ```
427 For example:
428 ```
429 machine youtube login [email protected] password my_youtube_password
430 machine twitch login my_twitch_account_name password my_twitch_password
431 ```
432 To activate authentication with the `.netrc` file you should pass `--netrc` to youtube-dl or place it in the [configuration file](#configuration).
433
434 On Windows you may also need to setup the `%HOME%` environment variable manually.
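A hedged example of doing that from a Windows command prompt (pointing `%HOME%` at the user profile directory, which is the usual choice):

```
setx HOME "%USERPROFILE%"
```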
435
436 # OUTPUT TEMPLATE
437
438 The `-o` option allows users to indicate a template for the output file names. The basic usage is not to set any template arguments when downloading a single file, like in `youtube-dl -o funny_video.flv "http://some/video"`. However, it may contain special sequences that will be replaced when downloading each video. The special sequences have the format `%(NAME)s`. To clarify, that is a percent symbol followed by a name in parentheses, followed by a lowercase S. Allowed names are:
439
440 - `id`: The sequence will be replaced by the video identifier.
441 - `url`: The sequence will be replaced by the video URL.
442 - `uploader`: The sequence will be replaced by the nickname of the person who uploaded the video.
443 - `upload_date`: The sequence will be replaced by the upload date in YYYYMMDD format.
444 - `title`: The sequence will be replaced by the video title.
445 - `ext`: The sequence will be replaced by the appropriate extension (like flv or mp4).
446 - `epoch`: The sequence will be replaced by the Unix epoch when creating the file.
447 - `autonumber`: The sequence will be replaced by a five-digit number that will be increased with each download, starting at zero.
448 - `playlist`: The sequence will be replaced by the name or the id of the playlist that contains the video.
449 - `playlist_index`: The sequence will be replaced by the index of the video in the playlist padded with leading zeros according to the total length of the playlist.
450 - `format_id`: The sequence will be replaced by the format code specified by `--format`.
451 - `duration`: The sequence will be replaced by the length of the video in seconds.
452
453 The current default template is `%(title)s-%(id)s.%(ext)s`.
454
455 In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title:
456
457 ```bash
458 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc
459 youtube-dl test video ''_ä↭𝕐.mp4 # All kinds of weird characters
460 $ youtube-dl --get-filename -o "%(title)s.%(ext)s" BaW_jenozKc --restrict-filenames
461 youtube-dl_test_video_.mp4 # A simple file name
462 ```
463
464 # FORMAT SELECTION
465
466 By default youtube-dl tries to download the best quality, but sometimes you may want to download in a different format.
467 The simplest case is requesting a specific format, for example `-f 22`. You can get the list of available formats using `--list-formats`, you can also use a file extension (currently it supports aac, m4a, mp3, mp4, ogg, wav, webm) or the special names `best`, `bestvideo`, `bestaudio` and `worst`.
468
469 If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes, as in `-f 22/17/18`. You can also filter the video results by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"`). This works for filesize, height, width, tbr, abr, vbr, asr, and fps and the comparisons <, <=, >, >=, =, != and for ext, acodec, vcodec, container, and protocol and the comparisons =, != . Formats for which the value is not known are excluded unless you put a question mark (?) after the operator. You can combine format filters, so `-f "[height <=? 720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s. Use commas to download multiple formats, such as `-f 136/137/mp4/bestvideo,140/m4a/bestaudio`. You can merge the video and audio of two formats into a single file using `-f <video-format>+<audio-format>` (requires ffmpeg or avconv), for example `-f bestvideo+bestaudio`. Format selectors can also be grouped using parentheses, for example if you want to download the best mp4 and webm formats with a height lower than 480 you can use `-f '(mp4,webm)[height<480]'`.
470
471 Since the end of April 2015 and version 2015.04.26 youtube-dl uses `-f bestvideo+bestaudio/best` as the default format selection (see #5447, #5456). If ffmpeg or avconv are installed this results in downloading `bestvideo` and `bestaudio` separately and muxing them together into a single file giving the best overall quality available. Otherwise it falls back to `best` and results in downloading the best available quality served as a single file. `best` is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you only want to download some DASH formats (for example if you are not interested in getting videos with a resolution higher than 1080p), you can add `-f bestvideo[height<=?1080]+bestaudio/best` to your configuration file. Note that if you use youtube-dl to stream to `stdout` (most likely piping it into your media player), i.e. you explicitly specify the output template as `-o -`, youtube-dl still uses `-f best` format selection in order to start content delivery to your player immediately rather than waiting until `bestvideo` and `bestaudio` are downloaded and muxed.
472
473 If you want to preserve the old format selection behavior (prior to youtube-dl 2015.04.26), i.e. you want to download the best available quality media served as a single file, you should explicitly specify your choice with `-f best`. You may want to add it to the [configuration file](#configuration) in order not to type it every time you run youtube-dl.
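
As a sketch of that last point, a configuration file simply holds the options you would otherwise type on the command line, one per line; the path below assumes a typical Linux location and is my assumption rather than something stated here:

```
# ~/.config/youtube-dl/config  (location assumed; adjust for your system)
-f bestvideo[height<=?1080]+bestaudio/best
```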
474
475 # VIDEO SELECTION
476
477 Videos can be filtered by their upload date using the options `--date`, `--datebefore` or `--dateafter`. They accept dates in two formats:
478
479 - Absolute dates: Dates in the format `YYYYMMDD`.
480 - Relative dates: Dates in the format `(now|today)[+-][0-9](day|week|month|year)(s)?`
481
482 Examples:
483
484 ```bash
485 # Download only the videos uploaded in the last 6 months
486 $ youtube-dl --dateafter now-6months
487
488 # Download only the videos uploaded on January 1, 1970
489 $ youtube-dl --date 19700101
490
491 # Download only the videos uploaded in the 200x decade
492 $ youtube-dl --dateafter 20000101 --datebefore 20091231
493 ```
494
495 # FAQ
496
497 ### How do I update youtube-dl?
498
499 If you've followed [our manual installation instructions](http://rg3.github.io/youtube-dl/download.html), you can simply run `youtube-dl -U` (or, on Linux, `sudo youtube-dl -U`).
500
501 If you have used pip, a simple `sudo pip install -U youtube-dl` is sufficient to update.
502
503 If you have installed youtube-dl using a package manager like *apt-get* or *yum*, use the standard system update mechanism to update. Note that distribution packages are often outdated. As a rule of thumb, youtube-dl releases at least once a month, and often weekly or even daily. Simply go to http://yt-dl.org/ to find out the current version. Unfortunately, there is nothing we youtube-dl developers can do if your distribution serves a really outdated version. You can (and should) complain to your distribution in their bugtracker or support forum.
504
505 As a last resort, you can also uninstall the version installed by your package manager and follow our manual installation instructions. For that, remove the distribution's package, with a line like
506
507 sudo apt-get remove -y youtube-dl
508
509 Afterwards, simply follow [our manual installation instructions](http://rg3.github.io/youtube-dl/download.html):
510
511 ```
512 sudo wget https://yt-dl.org/latest/youtube-dl -O /usr/local/bin/youtube-dl
513 sudo chmod a+x /usr/local/bin/youtube-dl
514 hash -r
515 ```
516
517 Again, from then on you'll be able to update with `sudo youtube-dl -U`.
518
519 ### I'm getting an error `Unable to extract OpenGraph title` on YouTube playlists
520
521 YouTube changed their playlist format in March 2014 and later on, so you'll need at least youtube-dl 2014.07.25 to download all YouTube videos.
522
523 If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to [report bugs](https://bugs.launchpad.net/ubuntu/+source/youtube-dl/+filebug) to the [Ubuntu packaging guys](mailto:[email protected]?subject=outdated%20version%20of%20youtube-dl) - all they have to do is update the package to a somewhat recent version. See above for a way to update.
524
525 ### Do I always have to pass `-citw`?
526
527 By default, youtube-dl intends to have the best options (incidentally, if you have a convincing case that these should be different, [please file an issue where you explain that](https://yt-dl.org/bug)). Therefore, it is unnecessary and sometimes harmful to copy long option strings from webpages. In particular, the only option out of `-citw` that is regularly useful is `-i`.
528
529 ### Can you please put the `-b` option back?
530
531 Most people asking this question are not aware that youtube-dl now defaults to downloading the highest available quality as reported by YouTube, which will be 1080p or 720p in some cases, so you no longer need the `-b` option. For some specific videos, maybe YouTube does not report them to be available in a specific high quality format you're interested in. In that case, simply request it with the `-f` option and youtube-dl will try to download it.
532
533 ### I get HTTP error 402 when trying to download a video. What's this?
534
535 Apparently YouTube requires you to pass a CAPTCHA test if you download too much. We're [considering providing a way to let you solve the CAPTCHA](https://github.com/rg3/youtube-dl/issues/154), but at the moment, your best course of action is pointing a web browser to the YouTube URL, solving the CAPTCHA, and restarting youtube-dl.
536
537 ### I have downloaded a video but how can I play it?
538
539 Once the video is fully downloaded, use any video player, such as [vlc](http://www.videolan.org) or [mplayer](http://www.mplayerhq.hu/).
540
541 ### I extracted a video URL with `-g`, but it does not play on another machine / in my webbrowser.
542
543 It depends a lot on the service. In many cases, requests for the video (to download/play it) must come from the same IP address and with the same cookies. Use the `--cookies` option to write the required cookies into a file, and advise your downloader to read cookies from that file. Some sites also require a common user agent to be used, use `--dump-user-agent` to see the one in use by youtube-dl.
544
545 It may be beneficial to use IPv6; in some cases, the restrictions are only applied to IPv4. Some services (sometimes only for a subset of videos) do not restrict the video URL by IP address, cookie, or user-agent, but these are the exception rather than the rule.
546
547 Please bear in mind that some URL protocols are **not** supported by browsers out of the box, including RTMP. If you are using `-g`, your own downloader must support these as well.
548
549 If you want to play the video on a machine that is not running youtube-dl, you can relay the video content from the machine that runs youtube-dl. You can use `-o -` to let youtube-dl stream a video to stdout, or simply allow the player to download the files written by youtube-dl in turn.
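
A minimal sketch of that relay approach, assuming a player such as vlc that can read from stdin (both the URL and the player are placeholders):

```bash
$ youtube-dl -o - "http://some/video" | vlc -
```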
550
551 ### ERROR: no fmt_url_map or conn information found in video info
552
553 YouTube has switched to a new video info format in July 2011 which is not supported by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
554
555 ### ERROR: unable to download video ###
556
557 YouTube requires an additional signature since September 2012 which is not supported by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
558
559 ### Video URL contains an ampersand and I'm getting some strange output `[1] 2839` or `'v' is not recognized as an internal or external command` ###
560
561 That's actually the output from your shell. Since ampersand is one of the special shell characters, it's interpreted by the shell, preventing you from passing the whole URL to youtube-dl. To stop your shell from interpreting the ampersands (or any other special characters), you have to either put the whole URL in quotes or escape them with a backslash (which approach works depends on your shell).
562
563 For example if your URL is https://www.youtube.com/watch?t=4&v=BaW_jenozKc you should end up with following command:
564
565 ```youtube-dl 'https://www.youtube.com/watch?t=4&v=BaW_jenozKc'```
566
567 or
568
569 ```youtube-dl https://www.youtube.com/watch?t=4\&v=BaW_jenozKc```
570
571 For Windows you have to use the double quotes:
572
573 ```youtube-dl "https://www.youtube.com/watch?t=4&v=BaW_jenozKc"```
574
575 ### ExtractorError: Could not find JS function u'OF'
576
577 In February 2015, the new YouTube player contained a character sequence in a string that was misinterpreted by old versions of youtube-dl. See [above](#how-do-i-update-youtube-dl) for how to update youtube-dl.
578
579 ### HTTP Error 429: Too Many Requests or 402: Payment Required
580
581 These two error codes indicate that the service is blocking your IP address because of overuse. Contact the service and ask them to unblock your IP address, or - if you have acquired a whitelisted IP address already - use the [`--proxy` or `--source-address` options](#network-options) to select another IP address.
582
583 ### SyntaxError: Non-ASCII character ###
584
585 The error
586
587 File "youtube-dl", line 2
588 SyntaxError: Non-ASCII character '\x93' ...
589
590 means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.
591
592 ### What is this binary file? Where has the code gone?
593
594 Since June 2012 (#342) youtube-dl is packed as an executable zipfile; simply unzip it (you might need to rename it to `youtube-dl.zip` first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the `__main__.py` file. To recompile the executable, run `make youtube-dl`.
595
596 ### The exe throws a *Runtime error from Visual C++*
597
598 To run the exe you need to install first the [Microsoft Visual C++ 2008 Redistributable Package](http://www.microsoft.com/en-us/download/details.aspx?id=29).
599
600 ### On Windows, how should I set up ffmpeg and youtube-dl? Where should I put the exe files?
601
602 If you put youtube-dl and ffmpeg in the same directory that you're running the command from, it will work, but that's rather cumbersome.
603
604 To make a different directory work - either for ffmpeg, or for youtube-dl, or for both - simply create the directory (say, `C:\bin`, or `C:\Users\<User name>\bin`), put all the executables directly in there, and then [set your PATH environment variable](https://www.java.com/en/download/help/path.xml) to include that directory.
605
606 From then on, after restarting your shell, you will be able to access both youtube-dl and ffmpeg (and youtube-dl will be able to find ffmpeg) by simply typing `youtube-dl` or `ffmpeg`, no matter what directory you're in.
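
Once the directory is on your PATH, a quick sanity check from any location (purely illustrative) is:

```
C:\> youtube-dl --version
C:\> ffmpeg -version
```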
607
608 ### How do I put downloads into a specific folder?
609
610 Use the `-o` option to specify an [output template](#output-template), for example `-o "/home/user/videos/%(title)s-%(id)s.%(ext)s"`. If you want this for all of your downloads, put the option into your [configuration file](#configuration).
611
612 ### How do I download a video starting with a `-` ?
613
614 Either prepend `http://www.youtube.com/watch?v=` or separate the ID from the options with `--`:
615
616 youtube-dl -- -wNyEUrxzFU
617 youtube-dl "http://www.youtube.com/watch?v=-wNyEUrxzFU"
618
619 ### How do I pass cookies to youtube-dl?
620
621 Use the `--cookies` option, for example `--cookies /path/to/cookies/file.txt`. Note that the cookies file must be in Mozilla/Netscape format and the first line of the cookies file must be either `# HTTP Cookie File` or `# Netscape HTTP Cookie File`. Make sure you have correct [newline format](https://en.wikipedia.org/wiki/Newline) in the cookies file and convert newlines if necessary to correspond with your OS, namely `CRLF` (`\r\n`) for Windows, `LF` (`\n`) for Linux and `CR` (`\r`) for Mac OS. `HTTP Error 400: Bad Request` when using `--cookies` is a good sign of invalid newline format.
622
623 Passing cookies to youtube-dl is a good way to work around login when a particular extractor does not implement it explicitly.
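
Putting the above together, an illustrative invocation looks like this (the cookies path and URL are placeholders; the first line of the file must match one of the headers mentioned above):

```bash
$ head -1 /path/to/cookies/file.txt
# Netscape HTTP Cookie File
$ youtube-dl --cookies /path/to/cookies/file.txt "http://some/video"
```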
624
625 ### Can you add support for this anime video site, or site which shows current movies for free?
626
627 As a matter of policy (as well as legality), youtube-dl does not include support for services that specialize in infringing copyright. As a rule of thumb, if you cannot easily find a video that the service is quite obviously allowed to distribute (i.e. that has been uploaded by the creator, the creator's distributor, or is published under a free license), the service is probably unfit for inclusion in youtube-dl.
628
629 A note on the service's site saying that they don't host the infringing content but merely link to those who do is evidence that the service should **not** be included in youtube-dl. The same goes for any DMCA note when the whole front page of the service is filled with videos they are not allowed to distribute. A "fair use" note is equally unconvincing if the service shows copyright-protected videos in full without authorization.
630
631 Support requests for services that **do** purchase the rights to distribute their content are perfectly fine though. If in doubt, you can simply include a source that mentions the legitimate purchase of content.
632
633 ### How can I speed up work on my issue?
634
635 (Also known as: Help, my important issue not being solved!) The youtube-dl core developer team is quite small. While we do our best to solve as many issues as possible, sometimes that can take quite a while. To speed up your issue, here's what you can do:
636
637 First of all, please do report the issue [at our issue tracker](https://yt-dl.org/bugs). That allows us to coordinate all efforts by users and developers, and serves as a unified point. Unfortunately, the youtube-dl project has grown too large to use personal email as an effective communication channel.
638
639 Please read the [bug reporting instructions](#bugs) below. A lot of bugs lack all the necessary information. If you can, offer proxy, VPN, or shell access to the youtube-dl developers. If you are able to, test the issue from multiple computers in multiple countries to exclude local censorship or misconfiguration issues.
640
641 If nobody is interested in solving your issue, you are welcome to take matters into your own hands and submit a pull request (or coerce/pay somebody else to do so).
642
643 Feel free to bump the issue from time to time by writing a small comment ("Issue is still present in youtube-dl version ...from France, but fixed from Belgium"), but please not more than once a month. Please do not declare your issue as `important` or `urgent`.
644
645 ### How can I detect whether a given URL is supported by youtube-dl?
646
647 For one, have a look at the [list of supported sites](docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from http://example.com/video/1234567 to http://example.com/v/1234567 ) and youtube-dl then reports a URL of a service in that list as unsupported. In that case, simply report a bug.
648
649 It is *not* possible to detect whether a URL is supported or not. That's because youtube-dl contains a generic extractor which matches **all** URLs. You may be tempted to disable, exclude, or remove the generic extractor, but the generic extractor not only allows users to extract videos from lots of websites that embed a video from another service, but may also be used to extract video from a service that hosts it directly. Therefore, we neither recommend nor support disabling, excluding, or removing the generic extractor.
650
651 If you want to find out whether a given URL is supported, simply call youtube-dl with it. If you get no videos back, chances are the URL is either not referring to a video or unsupported. You can find out which by examining the output (if you run youtube-dl on the console) or catching an `UnsupportedError` exception if you run it from a Python program.
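
A rough sketch of the programmatic route, leaning on the embedding API shown in the EMBEDDING YOUTUBE-DL section below; treating any extraction failure as "unsupported" is a simplification on my part:

```python
from __future__ import unicode_literals
import youtube_dl
from youtube_dl.utils import DownloadError, UnsupportedError


def is_probably_supported(url):
    # Ask youtube-dl to extract metadata without downloading anything.
    # If extraction fails, the URL is either unsupported or not a video.
    with youtube_dl.YoutubeDL({'quiet': True}) as ydl:
        try:
            ydl.extract_info(url, download=False)
        except (DownloadError, UnsupportedError):
            return False
    return True
```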
652
653 # DEVELOPER INSTRUCTIONS
654
655 Most users do not need to build youtube-dl and can [download the builds](http://rg3.github.io/youtube-dl/download.html) or get them from their distribution.
656
657 To run youtube-dl as a developer, you don't need to build anything either. Simply execute
658
659 python -m youtube_dl
660
661 To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
662
663 python -m unittest discover
664 python test/test_download.py
665 nosetests
666
667 If you want to create a build of youtube-dl yourself, you'll need
668
669 * python
670 * make
671 * pandoc
672 * zip
673 * nosetests
674
675 ### Adding support for a new site
676
677 If you want to add support for a new site, you can follow this quick list (assuming your service is called `yourextractor`):
678
679 1. [Fork this repository](https://github.com/rg3/youtube-dl/fork)
680 2. Check out the source code with `git clone [email protected]:YOUR_GITHUB_USERNAME/youtube-dl.git`
681 3. Start a new git branch with `cd youtube-dl; git checkout -b yourextractor`
682 4. Start with this simple template and save it to `youtube_dl/extractor/yourextractor.py`:
683 ```python
684 # coding: utf-8
685 from __future__ import unicode_literals
686
687 from .common import InfoExtractor
688
689
690 class YourExtractorIE(InfoExtractor):
691 _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
692 _TEST = {
693 'url': 'http://yourextractor.com/watch/42',
694 'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
695 'info_dict': {
696 'id': '42',
697 'ext': 'mp4',
698 'title': 'Video title goes here',
699 'thumbnail': 're:^https?://.*\.jpg$',
700 # TODO more properties, either as:
701 # * A value
702 # * MD5 checksum; start the string with md5:
703 # * A regular expression; start the string with re:
704 # * Any Python type (for example int or float)
705 }
706 }
707
708 def _real_extract(self, url):
709 video_id = self._match_id(url)
710 webpage = self._download_webpage(url, video_id)
711
712 # TODO more code goes here, for example ...
713 title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')
714
715 return {
716 'id': video_id,
717 'title': title,
718 'description': self._og_search_description(webpage),
719 'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
720 # TODO more properties (see youtube_dl/extractor/common.py)
721 }
722 ```
723 5. Add an import in [`youtube_dl/extractor/__init__.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/__init__.py).
724 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and make it into a list of dictionaries. The tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc.
725 7. Have a look at [`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L62-L200). Add tests and code for as many as you want.
726 8. If you can, check the code with [flake8](https://pypi.python.org/pypi/flake8).
727 9. When the tests pass, [add](http://git-scm.com/docs/git-add) the new files and [commit](http://git-scm.com/docs/git-commit) them and [push](http://git-scm.com/docs/git-push) the result, like this:
728
729 $ git add youtube_dl/extractor/__init__.py
730 $ git add youtube_dl/extractor/yourextractor.py
731 $ git commit -m '[yourextractor] Add new extractor'
732 $ git push origin yourextractor
733
734 10. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
735
736 In any case, thank you very much for your contributions!
737
738 # EMBEDDING YOUTUBE-DL
739
740 youtube-dl makes the best effort to be a good command-line program, and thus should be callable from any programming language. If you encounter any problems parsing its output, feel free to [create a report](https://github.com/rg3/youtube-dl/issues/new).
741
742 From a Python program, you can embed youtube-dl in a more powerful fashion, like this:
743
744 ```python
745 from __future__ import unicode_literals
746 import youtube_dl
747
748 ydl_opts = {}
749 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
750 ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
751 ```
752
753 Most likely, you'll want to use various options. For a list of what can be done, have a look at [youtube_dl/YoutubeDL.py](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L117-L265). For a start, if you want to intercept youtube-dl's output, set a `logger` object.
754
755 Here's a more complete example of a program that outputs only errors (and a short message after the download is finished), and downloads/converts the video to an mp3 file:
756
757 ```python
758 from __future__ import unicode_literals
759 import youtube_dl
760
761
762 class MyLogger(object):
763 def debug(self, msg):
764 pass
765
766 def warning(self, msg):
767 pass
768
769 def error(self, msg):
770 print(msg)
771
772
773 def my_hook(d):
774 if d['status'] == 'finished':
775 print('Done downloading, now converting ...')
776
777
778 ydl_opts = {
779 'format': 'bestaudio/best',
780 'postprocessors': [{
781 'key': 'FFmpegExtractAudio',
782 'preferredcodec': 'mp3',
783 'preferredquality': '192',
784 }],
785 'logger': MyLogger(),
786 'progress_hooks': [my_hook],
787 }
788 with youtube_dl.YoutubeDL(ydl_opts) as ydl:
789 ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
790 ```
791
792 # BUGS
793
794 Bugs and suggestions should be reported at <https://github.com/rg3/youtube-dl/issues>. Unless you were prompted to do so or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the IRC channel #youtube-dl on freenode.
795
796 **Please include the full output of youtube-dl when run with `-v`**.
797
798 The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.
799
800 Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):
801
802 ### Is the description of the issue itself sufficient?
803
804 We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.
805
806 So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious
807
808 - What the problem is
809 - How it could be fixed
810 - What your proposed solution would look like
811
812 If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.
813
814 For bug reports, this means that your report should contain the *complete* output of youtube-dl when called with the `-v` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.
815
816 If your server has multiple IPs or you suspect censorship, adding `--call-home` may be a good idea to get more diagnostics. If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--dump-pages` (warning: this will yield a rather large output, redirect it to the file `log.txt` by adding `>log.txt 2>&1` to your command-line) or upload the `.dump` files you get when you add `--write-pages` [somewhere](https://gist.github.com/).
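
For example, one way to capture everything the above asks for into `log.txt` (the URL is a placeholder):

```bash
$ youtube-dl -v --dump-pages "http://some/video" >log.txt 2>&1
```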
817
818 **Site support requests must contain an example URL**. An example URL is a URL you might want to download, like http://www.youtube.com/watch?v=BaW_jenozKc . There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. http://www.youtube.com/ ) is *not* an example URL.
819
820 ### Are you using the latest version?
821
822 Before reporting any issue, type `youtube-dl -U`. This should report that you're up-to-date. About 20% of the reports we receive concern issues that have already been fixed, but people are using outdated versions. This goes for feature requests as well.
823
824 ### Is the issue already documented?
825
826 Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or at https://github.com/rg3/youtube-dl/search?type=Issues . If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.
827
828 ### Why are existing options not enough?
829
830 Before requesting a new feature, please have a quick peek at [the list of supported options](https://github.com/rg3/youtube-dl/blob/master/README.md#synopsis). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.
831
832 ### Is there enough context in your bug report?
833
834 People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g. wanting to skip already downloaded files) into a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated).
835
836 We are then presented with a very complicated request when the original problem could be solved far easier, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.
837
838 ### Does the issue involve one problem, and one problem only?
839
840 Some of our users seem to think there is a limit on the number of issues they can or should open. There is no such limit. While it may seem appealing to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark that issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering, since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.
841
842 In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, Whitehouse podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.
843
844 ### Is anyone going to need the feature?
845
846 Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.
847
848 ### Is your question about youtube-dl?
849
850 It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different or even the reporter's own application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.
851
852 # COPYRIGHT
853
854 youtube-dl is released into the public domain by the copyright holders.
855
856 This README file was originally written by Daniel Bolton (<https://github.com/dbbolton>) and is likewise released into the public domain.
857
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ytdl-org/youtube-dl
|
bd1512d19649c280197729814766d590ea6c023b
|
neteasemusic no longer works
|
[You haven't posted all the required info](https://github.com/rg3/youtube-dl/blob/edeb3e7cb1ab2d82ff7c712a7cc1e338a9dcd8f8/README.md#bugs), once you post it we'll reopen the issue and try to help you.
I do believe the description is self-sufficient; current master will fail all of the tests in youtube_dl/extractor/neteasemusic.py
and by the way, the issue is just with the hardcoded m1.music.126.net server you use; m2.music.126.net works just fine
Confirmed.
|
2015-11-16T03:42:50Z
|
<patch>
diff --git a/youtube_dl/extractor/neteasemusic.py b/youtube_dl/extractor/neteasemusic.py
--- a/youtube_dl/extractor/neteasemusic.py
+++ b/youtube_dl/extractor/neteasemusic.py
@@ -40,7 +40,7 @@ def extract_formats(cls, info):
if not details:
continue
formats.append({
- 'url': 'http://m1.music.126.net/%s/%s.%s' %
+ 'url': 'http://m5.music.126.net/%s/%s.%s' %
(cls._encrypt(details['dfsId']), details['dfsId'],
details['extension']),
'ext': details.get('extension'),
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-22054
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
np.ndarray[object] - Timedelta raises
```
arr = np.array([pd.Timestamp.now(), pd.Timedelta('2D')])
>>> arr - pd.Timedelta('1D')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Timedelta'
```
It should attempt to operate element-wise.
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
30 </a>
31 </tr>
32 <tr>
33 <td>License</td>
34 <td>
35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
37 </a>
38 </td>
39 </tr>
40 <tr>
41 <td>Build Status</td>
42 <td>
43 <a href="https://travis-ci.org/pandas-dev/pandas">
44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
45 </a>
46 </td>
47 </tr>
48 <tr>
49 <td></td>
50 <td>
51 <a href="https://circleci.com/gh/pandas-dev/pandas">
52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
53 </a>
54 </td>
55 </tr>
56 <tr>
57 <td></td>
58 <td>
59 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
60 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
61 </a>
62 </td>
63 </tr>
64 <tr>
65 <td>Coverage</td>
66 <td>
67 <a href="https://codecov.io/gh/pandas-dev/pandas">
68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
69 </a>
70 </td>
71 </tr>
72 <tr>
73 <td>Downloads</td>
74 <td>
75 <a href="https://pandas.pydata.org">
76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
77 </a>
78 </td>
79 </tr>
80 <tr>
81 <td>Gitter</td>
82 <td>
83 <a href="https://gitter.im/pydata/pandas">
84         <img src="https://badges.gitter.im/Join%20Chat.svg" />
85 </a>
86 </td>
87 </tr>
88 </table>
89
90
91
92 ## What is it?
93
94 **pandas** is a Python package providing fast, flexible, and expressive data
95 structures designed to make working with "relational" or "labeled" data both
96 easy and intuitive. It aims to be the fundamental high-level building block for
97 doing practical, **real world** data analysis in Python. Additionally, it has
98 the broader goal of becoming **the most powerful and flexible open source data
99 analysis / manipulation tool available in any language**. It is already well on
100 its way toward this goal.
101
102 ## Main Features
103 Here are just a few of the things that pandas does well:
104
105 - Easy handling of [**missing data**][missing-data] (represented as
106 `NaN`) in floating point as well as non-floating point data
107 - Size mutability: columns can be [**inserted and
108 deleted**][insertion-deletion] from DataFrame and higher dimensional
109 objects
110 - Automatic and explicit [**data alignment**][alignment]: objects can
111 be explicitly aligned to a set of labels, or the user can simply
112 ignore the labels and let `Series`, `DataFrame`, etc. automatically
113 align the data for you in computations
114 - Powerful, flexible [**group by**][groupby] functionality to perform
115 split-apply-combine operations on data sets, for both aggregating
116 and transforming data
117 - Make it [**easy to convert**][conversion] ragged,
118 differently-indexed data in other Python and NumPy data structures
119 into DataFrame objects
120 - Intelligent label-based [**slicing**][slicing], [**fancy
121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
122 large data sets
123 - Intuitive [**merging**][merging] and [**joining**][joining] data
124 sets
125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
126 data sets
127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
128 labels per tick)
129 - Robust IO tools for loading data from [**flat files**][flat-files]
130 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
132 - [**Time series**][timeseries]-specific functionality: date range
133 generation and frequency conversion, moving window statistics,
134 moving window linear regressions, date shifting and lagging, etc.
135
136
137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
155
156 ## Where to get it
157 The source code is currently hosted on GitHub at:
158 https://github.com/pandas-dev/pandas
159
160 Binary installers for the latest released version are available at the [Python
161 package index](https://pypi.org/project/pandas) and on conda.
162
163 ```sh
164 # conda
165 conda install pandas
166 ```
167
168 ```sh
169 # or PyPI
170 pip install pandas
171 ```
172
173 ## Dependencies
174 - [NumPy](https://www.numpy.org): 1.9.0 or higher
175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
177
178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
179 for recommended and optional dependencies.
180
181 ## Installation from sources
182 To install pandas from source you need Cython in addition to the normal
183 dependencies above. Cython can be installed from pypi:
184
185 ```sh
186 pip install cython
187 ```
188
189 In the `pandas` directory (same one where you found this file after
190 cloning the git repo), execute:
191
192 ```sh
193 python setup.py install
194 ```
195
196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
197
198 ```sh
199 python setup.py develop
200 ```
201
202 Alternatively, you can use `pip` if you want all the dependencies pulled
203 in automatically (the `-e` option is for installing it in [development
204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
205
206 ```sh
207 pip install -e .
208 ```
209
210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
211
212 ## License
213 [BSD 3](LICENSE)
214
215 ## Documentation
216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
217
218 ## Background
219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
220 has been under active development since then.
221
222 ## Getting Help
223
224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
226
227 ## Discussion and Development
228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
229
230 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
231
232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
233
234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
235
236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
237
238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
239
240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
241
242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
243
[end of README.md]
[start of pandas/core/arrays/base.py]
1 """An interface for extending pandas with custom arrays.
2
3 .. warning::
4
5 This is an experimental API and subject to breaking changes
6 without warning.
7 """
8 import numpy as np
9
10 import operator
11
12 from pandas.errors import AbstractMethodError
13 from pandas.compat.numpy import function as nv
14 from pandas.compat import set_function_name, PY3
15 from pandas.core import ops
16 from pandas.core.dtypes.common import is_list_like
17
18 _not_implemented_message = "{} does not implement {}."
19
20
21 class ExtensionArray(object):
22 """Abstract base class for custom 1-D array types.
23
24 pandas will recognize instances of this class as proper arrays
25 with a custom type and will not attempt to coerce them to objects. They
26 may be stored directly inside a :class:`DataFrame` or :class:`Series`.
27
28 .. versionadded:: 0.23.0
29
30 Notes
31 -----
32 The interface includes the following abstract methods that must be
33 implemented by subclasses:
34
35 * _from_sequence
36 * _from_factorized
37 * __getitem__
38 * __len__
39 * dtype
40 * nbytes
41 * isna
42 * take
43 * copy
44 * _concat_same_type
45
46 An additional method is available to satisfy pandas' internal,
47 private block API.
48
49 * _formatting_values
50
51 Some methods require casting the ExtensionArray to an ndarray of Python
52 objects with ``self.astype(object)``, which may be expensive. When
53 performance is a concern, we highly recommend overriding the following
54 methods:
55
56 * fillna
57 * dropna
58 * unique
59 * factorize / _values_for_factorize
60 * argsort / _values_for_argsort
61
62 The remaining methods implemented on this class should be performant,
63 as they only compose abstract methods. Still, a more efficient
64 implementation may be available, and these methods can be overridden.
65
66 This class does not inherit from 'abc.ABCMeta' for performance reasons.
67 Methods and properties required by the interface raise
68 ``pandas.errors.AbstractMethodError`` and no ``register`` method is
69 provided for registering virtual subclasses.
70
71 ExtensionArrays are limited to 1 dimension.
72
73 They may be backed by none, one, or many NumPy arrays. For example,
74 ``pandas.Categorical`` is an extension array backed by two arrays,
75     one for codes and one for categories. An array of IPv6 addresses may
76 be backed by a NumPy structured array with two fields, one for the
77 lower 64 bits and one for the upper 64 bits. Or they may be backed
78 by some other storage type, like Python lists. Pandas makes no
79 assumptions on how the data are stored, just that it can be converted
80 to a NumPy array.
81 The ExtensionArray interface does not impose any rules on how this data
82 is stored. However, currently, the backing data cannot be stored in
83 attributes called ``.values`` or ``._values`` to ensure full compatibility
84     with pandas internals. But other names such as ``.data``, ``._data``,
85 ``._items``, ... can be freely used.
86 """
87 # '_typ' is for pandas.core.dtypes.generic.ABCExtensionArray.
88 # Don't override this.
89 _typ = 'extension'
90
91 # ------------------------------------------------------------------------
92 # Constructors
93 # ------------------------------------------------------------------------
94 @classmethod
95 def _from_sequence(cls, scalars, dtype=None, copy=False):
96 """Construct a new ExtensionArray from a sequence of scalars.
97
98 Parameters
99 ----------
100 scalars : Sequence
101 Each element will be an instance of the scalar type for this
102 array, ``cls.dtype.type``.
103 dtype : dtype, optional
104 Construct for this particular dtype. This should be a Dtype
105 compatible with the ExtensionArray.
106 copy : boolean, default False
107 If True, copy the underlying data.
108 Returns
109 -------
110 ExtensionArray
111 """
112 raise AbstractMethodError(cls)
113
114 @classmethod
115 def _from_factorized(cls, values, original):
116 """Reconstruct an ExtensionArray after factorization.
117
118 Parameters
119 ----------
120 values : ndarray
121 An integer ndarray with the factorized values.
122 original : ExtensionArray
123 The original ExtensionArray that factorize was called on.
124
125 See Also
126 --------
127 pandas.factorize
128 ExtensionArray.factorize
129 """
130 raise AbstractMethodError(cls)
131
132 # ------------------------------------------------------------------------
133 # Must be a Sequence
134 # ------------------------------------------------------------------------
135
136 def __getitem__(self, item):
137         # type: (Any) -> Any
138 """Select a subset of self.
139
140 Parameters
141 ----------
142 item : int, slice, or ndarray
143 * int: The position in 'self' to get.
144
145 * slice: A slice object, where 'start', 'stop', and 'step' are
146 integers or None
147
148 * ndarray: A 1-d boolean NumPy ndarray the same length as 'self'
149
150 Returns
151 -------
152 item : scalar or ExtensionArray
153
154 Notes
155 -----
156 For scalar ``item``, return a scalar value suitable for the array's
157 type. This should be an instance of ``self.dtype.type``.
158
159         For slice ``item``, return an instance of ``ExtensionArray``, even
160 if the slice is length 0 or 1.
161
162 For a boolean mask, return an instance of ``ExtensionArray``, filtered
163 to the values where ``item`` is True.
164 """
165 raise AbstractMethodError(self)
166
167 def __setitem__(self, key, value):
168 # type: (Union[int, np.ndarray], Any) -> None
169 """Set one or more values inplace.
170
171 This method is not required to satisfy the pandas extension array
172 interface.
173
174 Parameters
175 ----------
176 key : int, ndarray, or slice
177 When called from, e.g. ``Series.__setitem__``, ``key`` will be
178 one of
179
180 * scalar int
181 * ndarray of integers.
182 * boolean ndarray
183 * slice object
184
185 value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object
186             value or values to be set at ``key``.
187
188 Returns
189 -------
190 None
191 """
192 # Some notes to the ExtensionArray implementor who may have ended up
193 # here. While this method is not required for the interface, if you
194 # *do* choose to implement __setitem__, then some semantics should be
195 # observed:
196 #
197 # * Setting multiple values : ExtensionArrays should support setting
198 # multiple values at once, 'key' will be a sequence of integers and
199 # 'value' will be a same-length sequence.
200 #
201 # * Broadcasting : For a sequence 'key' and a scalar 'value',
202 # each position in 'key' should be set to 'value'.
203 #
204 # * Coercion : Most users will expect basic coercion to work. For
205 # example, a string like '2018-01-01' is coerced to a datetime
206 # when setting on a datetime64ns array. In general, if the
207 # __init__ method coerces that value, then so should __setitem__
208 raise NotImplementedError(_not_implemented_message.format(
209 type(self), '__setitem__')
210 )
211
212 def __len__(self):
213 # type: () -> int
214 """Length of this array
215
216 Returns
217 -------
218 length : int
219 """
220 raise AbstractMethodError(self)
221
222 def __iter__(self):
223 """Iterate over elements of the array.
224
225 """
226 # This needs to be implemented so that pandas recognizes extension
227 # arrays as list-like. The default implementation makes successive
228 # calls to ``__getitem__``, which may be slower than necessary.
229 for i in range(len(self)):
230 yield self[i]
231
232 # ------------------------------------------------------------------------
233 # Required attributes
234 # ------------------------------------------------------------------------
235 @property
236 def dtype(self):
237 # type: () -> ExtensionDtype
238 """An instance of 'ExtensionDtype'."""
239 raise AbstractMethodError(self)
240
241 @property
242 def shape(self):
243 # type: () -> Tuple[int, ...]
244 """Return a tuple of the array dimensions."""
245 return (len(self),)
246
247 @property
248 def ndim(self):
249 # type: () -> int
250 """Extension Arrays are only allowed to be 1-dimensional."""
251 return 1
252
253 @property
254 def nbytes(self):
255 # type: () -> int
256 """The number of bytes needed to store this object in memory.
257
258 """
259 # If this is expensive to compute, return an approximate lower bound
260 # on the number of bytes needed.
261 raise AbstractMethodError(self)
262
263 # ------------------------------------------------------------------------
264 # Additional Methods
265 # ------------------------------------------------------------------------
266 def astype(self, dtype, copy=True):
267 """Cast to a NumPy array with 'dtype'.
268
269 Parameters
270 ----------
271 dtype : str or dtype
272 Typecode or data-type to which the array is cast.
273 copy : bool, default True
274 Whether to copy the data, even if not necessary. If False,
275 a copy is made only if the old dtype does not match the
276 new dtype.
277
278 Returns
279 -------
280 array : ndarray
281 NumPy ndarray with 'dtype' for its dtype.
282 """
283 return np.array(self, dtype=dtype, copy=copy)
284
285 def isna(self):
286 # type: () -> np.ndarray
287 """Boolean NumPy array indicating if each value is missing.
288
289 This should return a 1-D array the same length as 'self'.
290 """
291 raise AbstractMethodError(self)
292
293 def _values_for_argsort(self):
294 # type: () -> ndarray
295 """Return values for sorting.
296
297 Returns
298 -------
299 ndarray
300 The transformed values should maintain the ordering between values
301 within the array.
302
303 See Also
304 --------
305 ExtensionArray.argsort
306 """
307 # Note: this is used in `ExtensionArray.argsort`.
308 return np.array(self)
309
310 def argsort(self, ascending=True, kind='quicksort', *args, **kwargs):
311 """
312 Return the indices that would sort this array.
313
314 Parameters
315 ----------
316 ascending : bool, default True
317 Whether the indices should result in an ascending
318 or descending sort.
319 kind : {'quicksort', 'mergesort', 'heapsort'}, optional
320 Sorting algorithm.
321 *args, **kwargs:
322 passed through to :func:`numpy.argsort`.
323
324 Returns
325 -------
326 index_array : ndarray
327 Array of indices that sort ``self``.
328
329 See Also
330 --------
331 numpy.argsort : Sorting implementation used internally.
332 """
333 # Implementor note: You have two places to override the behavior of
334 # argsort.
335 # 1. _values_for_argsort : construct the values passed to np.argsort
336 # 2. argsort : total control over sorting.
337 ascending = nv.validate_argsort_with_ascending(ascending, args, kwargs)
338 values = self._values_for_argsort()
339 result = np.argsort(values, kind=kind, **kwargs)
340 if not ascending:
341 result = result[::-1]
342 return result
343
344 def fillna(self, value=None, method=None, limit=None):
345 """ Fill NA/NaN values using the specified method.
346
347 Parameters
348 ----------
349 value : scalar, array-like
350 If a scalar value is passed it is used to fill all missing values.
351 Alternatively, an array-like 'value' can be given. It's expected
352 that the array-like have the same length as 'self'.
353 method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
354 Method to use for filling holes in reindexed Series
355 pad / ffill: propagate last valid observation forward to next valid
356 backfill / bfill: use NEXT valid observation to fill gap
357 limit : int, default None
358 If method is specified, this is the maximum number of consecutive
359 NaN values to forward/backward fill. In other words, if there is
360 a gap with more than this number of consecutive NaNs, it will only
361 be partially filled. If method is not specified, this is the
362 maximum number of entries along the entire axis where NaNs will be
363 filled.
364
365 Returns
366 -------
367 filled : ExtensionArray with NA/NaN filled
368 """
369 from pandas.api.types import is_array_like
370 from pandas.util._validators import validate_fillna_kwargs
371 from pandas.core.missing import pad_1d, backfill_1d
372
373 value, method = validate_fillna_kwargs(value, method)
374
375 mask = self.isna()
376
377 if is_array_like(value):
378 if len(value) != len(self):
379 raise ValueError("Length of 'value' does not match. Got ({}) "
380 " expected {}".format(len(value), len(self)))
381 value = value[mask]
382
383 if mask.any():
384 if method is not None:
385 func = pad_1d if method == 'pad' else backfill_1d
386 new_values = func(self.astype(object), limit=limit,
387 mask=mask)
388 new_values = self._from_sequence(new_values, dtype=self.dtype)
389 else:
390 # fill with value
391 new_values = self.copy()
392 new_values[mask] = value
393 else:
394 new_values = self.copy()
395 return new_values
396
397 def dropna(self):
398 """ Return ExtensionArray without NA values
399
400 Returns
401 -------
402 valid : ExtensionArray
403 """
404
405 return self[~self.isna()]
406
407 def shift(self, periods=1):
408 # type: (int) -> ExtensionArray
409 """
410 Shift values by desired number.
411
412 Newly introduced missing values are filled with
413 ``self.dtype.na_value``.
414
415 .. versionadded:: 0.24.0
416
417 Parameters
418 ----------
419 periods : int, default 1
420 The number of periods to shift. Negative values are allowed
421 for shifting backwards.
422
423 Returns
424 -------
425 shifted : ExtensionArray
426 """
427 # Note: this implementation assumes that `self.dtype.na_value` can be
428 # stored in an instance of your ExtensionArray with `self.dtype`.
429 if periods == 0:
430 return self.copy()
431 empty = self._from_sequence([self.dtype.na_value] * abs(periods),
432 dtype=self.dtype)
433 if periods > 0:
434 a = empty
435 b = self[:-periods]
436 else:
437 a = self[abs(periods):]
438 b = empty
439 return self._concat_same_type([a, b])
440
441 def unique(self):
442 """Compute the ExtensionArray of unique values.
443
444 Returns
445 -------
446 uniques : ExtensionArray
447 """
448 from pandas import unique
449
450 uniques = unique(self.astype(object))
451 return self._from_sequence(uniques, dtype=self.dtype)
452
453 def _values_for_factorize(self):
454 # type: () -> Tuple[ndarray, Any]
455 """Return an array and missing value suitable for factorization.
456
457 Returns
458 -------
459 values : ndarray
460
461 An array suitable for factorization. This should maintain order
462 and be a supported dtype (Float64, Int64, UInt64, String, Object).
463 By default, the extension array is cast to object dtype.
464 na_value : object
465 The value in `values` to consider missing. This will be treated
466 as NA in the factorization routines, so it will be coded as
467 `na_sentinel` and not included in `uniques`. By default,
468 ``np.nan`` is used.
469 """
470 return self.astype(object), np.nan
471
472 def factorize(self, na_sentinel=-1):
473 # type: (int) -> Tuple[ndarray, ExtensionArray]
474 """Encode the extension array as an enumerated type.
475
476 Parameters
477 ----------
478 na_sentinel : int, default -1
479 Value to use in the `labels` array to indicate missing values.
480
481 Returns
482 -------
483 labels : ndarray
484 An integer NumPy array that's an indexer into the original
485 ExtensionArray.
486 uniques : ExtensionArray
487 An ExtensionArray containing the unique values of `self`.
488
489 .. note::
490
491 uniques will *not* contain an entry for the NA value of
492 the ExtensionArray if there are any missing values present
493 in `self`.
494
495 See Also
496 --------
497 pandas.factorize : Top-level factorize method that dispatches here.
498
499 Notes
500 -----
501 :meth:`pandas.factorize` offers a `sort` keyword as well.
502 """
503 # Implementer note: There are two ways to override the behavior of
504 # pandas.factorize
505 # 1. _values_for_factorize and _from_factorize.
506 # Specify the values passed to pandas' internal factorization
507 # routines, and how to convert from those values back to the
508 # original ExtensionArray.
509 # 2. ExtensionArray.factorize.
510 # Complete control over factorization.
511 from pandas.core.algorithms import _factorize_array
512
513 arr, na_value = self._values_for_factorize()
514
515 labels, uniques = _factorize_array(arr, na_sentinel=na_sentinel,
516 na_value=na_value)
517
518 uniques = self._from_factorized(uniques, self)
519 return labels, uniques
520
521 # ------------------------------------------------------------------------
522 # Indexing methods
523 # ------------------------------------------------------------------------
524
525 def take(self, indices, allow_fill=False, fill_value=None):
526 # type: (Sequence[int], bool, Optional[Any]) -> ExtensionArray
527 """Take elements from an array.
528
529 Parameters
530 ----------
531 indices : sequence of integers
532 Indices to be taken.
533 allow_fill : bool, default False
534 How to handle negative values in `indices`.
535
536 * False: negative values in `indices` indicate positional indices
537 from the right (the default). This is similar to
538 :func:`numpy.take`.
539
540 * True: negative values in `indices` indicate
541 missing values. These values are set to `fill_value`. Any other
542 negative values raise a ``ValueError``.
543
544 fill_value : any, optional
545 Fill value to use for NA-indices when `allow_fill` is True.
546 This may be ``None``, in which case the default NA value for
547 the type, ``self.dtype.na_value``, is used.
548
549 For many ExtensionArrays, there will be two representations of
550 `fill_value`: a user-facing "boxed" scalar, and a low-level
551 physical NA value. `fill_value` should be the user-facing version,
552 and the implementation should handle translating that to the
553 physical version for processing the take if necessary.
554
555 Returns
556 -------
557 ExtensionArray
558
559 Raises
560 ------
561 IndexError
562 When the indices are out of bounds for the array.
563 ValueError
564 When `indices` contains negative values other than ``-1``
565 and `allow_fill` is True.
566
567 Notes
568 -----
569 ExtensionArray.take is called by ``Series.__getitem__``, ``.loc``,
570 ``iloc``, when `indices` is a sequence of values. Additionally,
571 it's called by :meth:`Series.reindex`, or any other method
572 that causes realignment, with a `fill_value`.
573
574 See Also
575 --------
576 numpy.take
577 pandas.api.extensions.take
578
579 Examples
580 --------
581 Here's an example implementation, which relies on casting the
582 extension array to object dtype. This uses the helper method
583 :func:`pandas.api.extensions.take`.
584
585 .. code-block:: python
586
587 def take(self, indices, allow_fill=False, fill_value=None):
588 from pandas.core.algorithms import take
589
590 # If the ExtensionArray is backed by an ndarray, then
591 # just pass that here instead of coercing to object.
592 data = self.astype(object)
593
594 if allow_fill and fill_value is None:
595 fill_value = self.dtype.na_value
596
597 # fill value should always be translated from the scalar
598 # type for the array, to the physical storage type for
599 # the data, before passing to take.
600
601 result = take(data, indices, fill_value=fill_value,
602 allow_fill=allow_fill)
603 return self._from_sequence(result, dtype=self.dtype)
604 """
605 # Implementer note: The `fill_value` parameter should be a user-facing
606 # value, an instance of self.dtype.type. When passed `fill_value=None`,
607 # the default of `self.dtype.na_value` should be used.
608 # This may differ from the physical storage type your ExtensionArray
609 # uses. In this case, your implementation is responsible for casting
610 # the user-facing type to the storage type, before using
611 # pandas.api.extensions.take
612 raise AbstractMethodError(self)
613
614 def copy(self, deep=False):
615 # type: (bool) -> ExtensionArray
616 """Return a copy of the array.
617
618 Parameters
619 ----------
620 deep : bool, default False
621 Also copy the underlying data backing this array.
622
623 Returns
624 -------
625 ExtensionArray
626 """
627 raise AbstractMethodError(self)
628
629 # ------------------------------------------------------------------------
630 # Block-related methods
631 # ------------------------------------------------------------------------
632
633 def _formatting_values(self):
634 # type: () -> np.ndarray
635 # At the moment, this has to be an array since we use result.dtype
636 """An array of values to be printed in, e.g. the Series repr"""
637 return np.array(self)
638
639 @classmethod
640 def _concat_same_type(cls, to_concat):
641 # type: (Sequence[ExtensionArray]) -> ExtensionArray
642 """Concatenate multiple array
643
644 Parameters
645 ----------
646 to_concat : sequence of this type
647
648 Returns
649 -------
650 ExtensionArray
651 """
652 raise AbstractMethodError(cls)
653
654 # The _can_hold_na attribute is set to True so that pandas internals
655 # will use the ExtensionDtype.na_value as the NA value in operations
656 # such as take(), reindex(), shift(), etc. In addition, those results
657 # will then be of the ExtensionArray subclass rather than an array
658 # of objects
659 _can_hold_na = True
660
661 @property
662 def _ndarray_values(self):
663 # type: () -> np.ndarray
664 """Internal pandas method for lossy conversion to a NumPy ndarray.
665
666 This method is not part of the pandas interface.
667
668 The expectation is that this is cheap to compute, and is primarily
669 used for interacting with our indexers.
670 """
671 return np.array(self)
672
673
674 class ExtensionOpsMixin(object):
675 """
676 A base class for linking the operators to their dunder names
677 """
678
679 @classmethod
680 def _add_arithmetic_ops(cls):
681 cls.__add__ = cls._create_arithmetic_method(operator.add)
682 cls.__radd__ = cls._create_arithmetic_method(ops.radd)
683 cls.__sub__ = cls._create_arithmetic_method(operator.sub)
684 cls.__rsub__ = cls._create_arithmetic_method(ops.rsub)
685 cls.__mul__ = cls._create_arithmetic_method(operator.mul)
686 cls.__rmul__ = cls._create_arithmetic_method(ops.rmul)
687 cls.__pow__ = cls._create_arithmetic_method(operator.pow)
688 cls.__rpow__ = cls._create_arithmetic_method(ops.rpow)
689 cls.__mod__ = cls._create_arithmetic_method(operator.mod)
690 cls.__rmod__ = cls._create_arithmetic_method(ops.rmod)
691 cls.__floordiv__ = cls._create_arithmetic_method(operator.floordiv)
692 cls.__rfloordiv__ = cls._create_arithmetic_method(ops.rfloordiv)
693 cls.__truediv__ = cls._create_arithmetic_method(operator.truediv)
694 cls.__rtruediv__ = cls._create_arithmetic_method(ops.rtruediv)
695 if not PY3:
696 cls.__div__ = cls._create_arithmetic_method(operator.div)
697 cls.__rdiv__ = cls._create_arithmetic_method(ops.rdiv)
698
699 cls.__divmod__ = cls._create_arithmetic_method(divmod)
700 cls.__rdivmod__ = cls._create_arithmetic_method(ops.rdivmod)
701
702 @classmethod
703 def _add_comparison_ops(cls):
704 cls.__eq__ = cls._create_comparison_method(operator.eq)
705 cls.__ne__ = cls._create_comparison_method(operator.ne)
706 cls.__lt__ = cls._create_comparison_method(operator.lt)
707 cls.__gt__ = cls._create_comparison_method(operator.gt)
708 cls.__le__ = cls._create_comparison_method(operator.le)
709 cls.__ge__ = cls._create_comparison_method(operator.ge)
710
711
712 class ExtensionScalarOpsMixin(ExtensionOpsMixin):
713 """A mixin for defining the arithmetic and logical operations on
714 an ExtensionArray class, where it is assumed that the underlying objects
715 have the operators already defined.
716
717 Usage
718 ------
719 If you have defined a subclass MyExtensionArray(ExtensionArray), then
720 use MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin) to
721 get the arithmetic operators. After the definition of MyExtensionArray,
722 insert the lines
723
724 MyExtensionArray._add_arithmetic_ops()
725 MyExtensionArray._add_comparison_ops()
726
727 to link the operators to your class.
728 """
729
730 @classmethod
731 def _create_method(cls, op, coerce_to_dtype=True):
732 """
733 A class method that returns a method that will correspond to an
734 operator for an ExtensionArray subclass, by dispatching to the
735 relevant operator defined on the individual elements of the
736 ExtensionArray.
737
738 Parameters
739 ----------
740 op : function
741 An operator that takes arguments op(a, b)
742 coerce_to_dtype : bool
743 boolean indicating whether to attempt to convert
744 the result to the underlying ExtensionArray dtype
745 (default True)
746
747 Returns
748 -------
749 A method that can be bound to a method of a class
750
751 Example
752 -------
753 Given an ExtensionArray subclass called MyExtensionArray, use
754
755 >>> __add__ = cls._create_method(operator.add)
756
757 in the class definition of MyExtensionArray to create the operator
758 for addition, that will be based on the operator implementation
759 of the underlying elements of the ExtensionArray
760
761 """
762
763 def _binop(self, other):
764 def convert_values(param):
765 if isinstance(param, ExtensionArray) or is_list_like(param):
766 ovalues = param
767 else: # Assume its an object
768 ovalues = [param] * len(self)
769 return ovalues
770 lvalues = self
771 rvalues = convert_values(other)
772
773 # If the operator is not defined for the underlying objects,
774 # a TypeError should be raised
775 res = [op(a, b) for (a, b) in zip(lvalues, rvalues)]
776
777 if coerce_to_dtype:
778 try:
779 res = self._from_sequence(res)
780 except TypeError:
781 pass
782
783 return res
784
785 op_name = ops._get_op_name(op, True)
786 return set_function_name(_binop, op_name, cls)
787
788 @classmethod
789 def _create_arithmetic_method(cls, op):
790 return cls._create_method(op)
791
792 @classmethod
793 def _create_comparison_method(cls, op):
794 return cls._create_method(op, coerce_to_dtype=False)
795
[end of pandas/core/arrays/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
71852da03994c7c79a4ba3a0f91c6d723be6a299
|
np.ndarray[object] - Timedelta raises
```
arr = np.array([pd.Timestamp.now(), pd.Timedelta('2D')])
>>> arr - pd.Timedelta('1D')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Timedelta'
```
It should attempt to operate element-wise.
|
I suppose this is because of
https://github.com/pandas-dev/pandas/blob/27ebb3e1e40513ad5f8919a5bbc7298e2e070a39/pandas/_libs/tslibs/timedeltas.pyx#L539-L544
Any idea what the "wrong" answer would be? (with timedelta.timedelta instead of Timedelta that seems to work just fine, so I assume with Timedelta it will be the same)
No idea what the wrong answer would be. This should be easy to fix; if no one else picks it up I'll take care of it once the current PR queue settles down.
Yes, PR with a fix is certainly welcome I think
Is this still an issue? I wasn't able to repro from master.
> Is this still an issue? I wasn't able to repro from master.
What platform etc? I still get it on OSX in both py27 and py37.
OSX 10.11.6 with Python 3.6. I just pulled up a REPL and imported pandas from a compile I did yesterday from master and didn't get an exception from the example code posted. Specifically ```Python 3.6.6 (default, Jul 23 2018, 11:08:18)
[GCC 4.2.1 Compatible Clang 6.0.0 (tags/RELEASE_600/final)] on darwin```
I also didn't see the issue from the latest install from pip either. Both times I just got
```python
>>> arr = np.array([pd.Timestamp.now(), pd.Timedelta('2D')])
>>> arr
array([Timestamp('2018-07-24 10:49:41.898067'),
Timedelta('2 days 00:00:00')], dtype=object)
```
Did you try subtracting a `Timedelta` from `arr`?
Ah! 🤦♂️ yea I missed that part in the example. I repro'd the bug with that on master and latest pip. So with this then how should I go about the fix? It's not operating element wise on the array because the timedeltas.pyx isn't returning that it is a timedelta correctly? or...?
> how should I go about the fix?
Take a look at the code block Joris quoted above. At the moment that lets 'm' and 'M' dtypes through but stops everything else. The fix will involve letting 'o' dtypes through (and making sure they are handled correctly)
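For illustration (this is not the maintainers' patch, which appears below), a minimal sketch of the element-wise behavior the comments above are asking for, applied when the other operand is an object-dtype ndarray:
```python
import numpy as np
import pandas as pd

arr = np.array([pd.Timestamp("2018-07-24"), pd.Timedelta("2D")], dtype=object)

# Element-wise fallback for object dtype: each element handles the
# subtraction itself (Timestamp - Timedelta and Timedelta - Timedelta both
# work), instead of the ndarray/Timedelta combination raising TypeError.
result = np.array([x - pd.Timedelta("1D") for x in arr])
print(result)
# [Timestamp('2018-07-23 00:00:00') Timedelta('1 days 00:00:00')]
```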
|
2018-07-25T19:49:53Z
|
<patch>
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -642,6 +642,7 @@ Timedelta
- Bug in :class:`Series` with numeric dtype when adding or subtracting an an array or ``Series`` with ``timedelta64`` dtype (:issue:`22390`)
- Bug in :class:`Index` with numeric dtype when multiplying or dividing an array with dtype ``timedelta64`` (:issue:`22390`)
- Bug in :class:`TimedeltaIndex` incorrectly allowing indexing with ``Timestamp`` object (:issue:`20464`)
+- Fixed bug where subtracting :class:`Timedelta` from an object-dtyped array would raise ``TypeError`` (:issue:`21980`)
-
-
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -541,10 +541,12 @@ def _binary_op_method_timedeltalike(op, name):
elif hasattr(other, 'dtype'):
# nd-array like
- if other.dtype.kind not in ['m', 'M']:
- # raise rathering than letting numpy return wrong answer
+ if other.dtype.kind in ['m', 'M']:
+ return op(self.to_timedelta64(), other)
+ elif other.dtype.kind == 'O':
+ return np.array([op(self, x) for x in other])
+ else:
return NotImplemented
- return op(self.to_timedelta64(), other)
elif not _validate_ops_compat(other):
return NotImplemented
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-7581
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SQL: read_sql functions should not inspect all tables of database
Related: comment of @balancap: https://github.com/pydata/pandas/issues/6416#issuecomment-45343694 and issue #7380 where you get a warning about a not understood type (probably in another table).
**Situation now**: in the creation of a `PandasSQLAlchemy` object (https://github.com/pydata/pandas/blob/v0.14.0/pandas/io/sql.py#L777), a full `MetaData` object is created and reflected (this means: all tables in the database are inspected and the schemas are stored as sqlalchemy `Table` objects). This is done each time `read_sql/read_sql_query/read_sql_table` is called.
**Consequence**:
- this can be costly when having a very large database or having a distant server (https://github.com/pydata/pandas/issues/6416#issuecomment-45343694).
- this can trigger warnings that do not have anything to do with your current query, or with the current table you want to read, when e.g. one of the types in other tables is not known to sqlalchemy (e.g. a postgis geometry column).
**Possible solution**:
- I think the `read_sql` functions never should inspect all tables, but only the specified table (and `read_sql_query` even not that table, as with a query this information is not used, only for `read_sql_table`)
- This can maybe be achieved with using the `only` keyword in `meta.reflect(engine, only=...)`
- For the OO API interface, we can discuss what should be the default (inspect all tables or not)
@mangecoeur @danielballan @hayd
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.6.1 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](http://code.google.com/p/numexpr/)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
118 writing .xlsx files
119 - xlrd >= 0.9.0
120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
121 - Alternative Excel writer.
122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
123 - Needed for `pandas.io.gbq`
124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
125 - One of the following combinations of libraries is needed to use the
126 top-level [`pandas.read_html`][read-html-docs] function:
127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
128 recent version of [html5lib][html5lib] is okay.)
129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
132 for reasons as to why you should probably **not** take this approach.
133
134 #### Notes about HTML parsing libraries
135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
136 either [lxml][lxml] or [html5lib][html5lib] or both.
137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
138 installed.
139 - You are strongly encouraged to read [HTML reading
140 gotchas][html-gotchas]. It explains issues surrounding the
141 installation and usage of the above three libraries.
142 - You may need to install an older version of
143 [BeautifulSoup4][BeautifulSoup4]:
144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
145 32-bit Ubuntu/Debian
146 - Additionally, if you're using [Anaconda][Anaconda] you should
147 definitely read [the gotchas about HTML parsing][html-gotchas]
148 libraries
149 - If you're on a system with `apt-get` you can do
150
151 ```sh
152 sudo apt-get build-dep python-lxml
153 ```
154
155 to get the necessary dependencies for installation of [lxml][lxml].
156 This will prevent further headaches down the line.
157
158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
160 [lxml]: http://lxml.de
161 [Anaconda]: https://store.continuum.io/cshop/anaconda
162 [NumPy]: http://numpy.scipy.org/
163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
165
166 ## Installation from sources
167 To install pandas from source you need Cython in addition to the normal
168 dependencies above. Cython can be installed from pypi:
169
170 ```sh
171 pip install cython
172 ```
173
174 In the `pandas` directory (same one where you found this file after
175 cloning the git repo), execute:
176
177 ```sh
178 python setup.py install
179 ```
180
181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
182
183 ```sh
184 python setup.py develop
185 ```
186
187 Alternatively, you can use `pip` if you want all the dependencies pulled
188 in automatically (the `-e` option is for installing it in [development
189 mode](http://www.pip-installer.org/en/latest/usage.html)):
190
191 ```sh
192 pip install -e .
193 ```
194
195 On Windows, you will need to install MinGW and execute:
196
197 ```sh
198 python setup.py build --compiler=mingw32
199 python setup.py install
200 ```
201
202 See http://pandas.pydata.org/ for more information.
203
204 ## License
205 BSD
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
209
210 The Sphinx documentation should provide a good starting point for learning how
211 to use the library. Expect the docs to continue to expand as time goes on.
212
213 ## Background
214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
215 has been under active development since then.
216
217 ## Discussion and Development
218 Since pandas development is related to a number of other scientific
219 Python projects, questions are welcome on the scipy-user mailing
220 list. Specialized discussions or design issues should take place on
221 the pystatsmodels mailing list / Google group, where
222 ``scikits.statsmodels`` and other libraries will also be discussed:
223
224 http://groups.google.com/group/pystatsmodels
225
[end of README.md]
[start of pandas/io/html.py]
1 """:mod:`pandas.io.html` is a module containing functionality for dealing with
2 HTML IO.
3
4 """
5
6 import os
7 import re
8 import numbers
9 import collections
10 import warnings
11
12 from distutils.version import LooseVersion
13
14 import numpy as np
15
16 from pandas.io.common import _is_url, urlopen, parse_url
17 from pandas.io.parsers import TextParser
18 from pandas.compat import (lrange, lmap, u, string_types, iteritems, text_type,
19 raise_with_traceback)
20 from pandas.core import common as com
21 from pandas import Series
22
23
24 try:
25 import bs4
26 except ImportError:
27 _HAS_BS4 = False
28 else:
29 _HAS_BS4 = True
30
31
32 try:
33 import lxml
34 except ImportError:
35 _HAS_LXML = False
36 else:
37 _HAS_LXML = True
38
39
40 try:
41 import html5lib
42 except ImportError:
43 _HAS_HTML5LIB = False
44 else:
45 _HAS_HTML5LIB = True
46
47
48 #############
49 # READ HTML #
50 #############
51 _RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
52
53
54 def _remove_whitespace(s, regex=_RE_WHITESPACE):
55 """Replace extra whitespace inside of a string with a single space.
56
57 Parameters
58 ----------
59 s : str or unicode
60 The string from which to remove extra whitespace.
61
62 regex : regex
63 The regular expression to use to remove extra whitespace.
64
65 Returns
66 -------
67 subd : str or unicode
68 `s` with all extra whitespace replaced with a single space.
69 """
70 return regex.sub(' ', s.strip())
71
72
73 def _get_skiprows(skiprows):
74 """Get an iterator given an integer, slice or container.
75
76 Parameters
77 ----------
78 skiprows : int, slice, container
79 The iterator to use to skip rows; can also be a slice.
80
81 Raises
82 ------
83 TypeError
84 * If `skiprows` is not a slice, integer, or Container
85
86 Returns
87 -------
88 it : iterable
89 A proper iterator to use to skip rows of a DataFrame.
90 """
91 if isinstance(skiprows, slice):
92 return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
93 elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
94 return skiprows
95 elif skiprows is None:
96 return 0
97 raise TypeError('%r is not a valid type for skipping rows' %
98 type(skiprows).__name__)
99
100
101 def _read(obj):
102 """Try to read from a url, file or string.
103
104 Parameters
105 ----------
106 obj : str, unicode, or file-like
107
108 Returns
109 -------
110 raw_text : str
111 """
112 if _is_url(obj):
113 with urlopen(obj) as url:
114 text = url.read()
115 elif hasattr(obj, 'read'):
116 text = obj.read()
117 elif isinstance(obj, string_types):
118 text = obj
119 try:
120 if os.path.isfile(text):
121 with open(text, 'rb') as f:
122 return f.read()
123 except TypeError:
124 pass
125 else:
126 raise TypeError("Cannot read object of type %r" % type(obj).__name__)
127 return text
128
129
130 class _HtmlFrameParser(object):
131 """Base class for parsers that parse HTML into DataFrames.
132
133 Parameters
134 ----------
135 io : str or file-like
136 This can be either a string of raw HTML, a valid URL using the HTTP,
137 FTP, or FILE protocols or a file-like object.
138
139 match : str or regex
140 The text to match in the document.
141
142 attrs : dict
143 List of HTML <table> element attributes to match.
144
145 Attributes
146 ----------
147 io : str or file-like
148 raw HTML, URL, or file-like object
149
150 match : regex
151 The text to match in the raw HTML
152
153 attrs : dict-like
154 A dictionary of valid table attributes to use to search for table
155 elements.
156
157 Notes
158 -----
159 To subclass this class effectively you must override the following methods:
160 * :func:`_build_doc`
161 * :func:`_text_getter`
162 * :func:`_parse_td`
163 * :func:`_parse_tables`
164 * :func:`_parse_tr`
165 * :func:`_parse_thead`
166 * :func:`_parse_tbody`
167 * :func:`_parse_tfoot`
168 See each method's respective documentation for details on their
169 functionality.
170 """
171 def __init__(self, io, match, attrs, encoding):
172 self.io = io
173 self.match = match
174 self.attrs = attrs
175 self.encoding = encoding
176
177 def parse_tables(self):
178 tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
179 return (self._build_table(table) for table in tables)
180
181 def _parse_raw_data(self, rows):
182 """Parse the raw data into a list of lists.
183
184 Parameters
185 ----------
186 rows : iterable of node-like
187 A list of row elements.
188
189 text_getter : callable
190 A callable that gets the text from an individual node. This must be
191 defined by subclasses.
192
193 column_finder : callable
194 A callable that takes a row node as input and returns a list of the
195 column node in that row. This must be defined by subclasses.
196
197 Returns
198 -------
199 data : list of list of strings
200 """
201 data = [[_remove_whitespace(self._text_getter(col)) for col in
202 self._parse_td(row)] for row in rows]
203 return data
204
205 def _text_getter(self, obj):
206 """Return the text of an individual DOM node.
207
208 Parameters
209 ----------
210 obj : node-like
211 A DOM node.
212
213 Returns
214 -------
215 text : str or unicode
216 The text from an individual DOM node.
217 """
218 raise NotImplementedError
219
220 def _parse_td(self, obj):
221 """Return the td elements from a row element.
222
223 Parameters
224 ----------
225 obj : node-like
226
227 Returns
228 -------
229 columns : list of node-like
230 These are the elements of each row, i.e., the columns.
231 """
232 raise NotImplementedError
233
234 def _parse_tables(self, doc, match, attrs):
235 """Return all tables from the parsed DOM.
236
237 Parameters
238 ----------
239 doc : tree-like
240 The DOM from which to parse the table element.
241
242 match : str or regular expression
243 The text to search for in the DOM tree.
244
245 attrs : dict
246 A dictionary of table attributes that can be used to disambiguate
247 multiple tables on a page.
248
249 Raises
250 ------
251 ValueError
252 * If `match` does not match any text in the document.
253
254 Returns
255 -------
256 tables : list of node-like
257 A list of <table> elements to be parsed into raw data.
258 """
259 raise NotImplementedError
260
261 def _parse_tr(self, table):
262 """Return the list of row elements from the parsed table element.
263
264 Parameters
265 ----------
266 table : node-like
267 A table element that contains row elements.
268
269 Returns
270 -------
271 rows : list of node-like
272 A list row elements of a table, usually <tr> or <th> elements.
273 """
274 raise NotImplementedError
275
276 def _parse_thead(self, table):
277 """Return the header of a table.
278
279 Parameters
280 ----------
281 table : node-like
282 A table element that contains row elements.
283
284 Returns
285 -------
286 thead : node-like
287 A <thead>...</thead> element.
288 """
289 raise NotImplementedError
290
291 def _parse_tbody(self, table):
292 """Return the body of the table.
293
294 Parameters
295 ----------
296 table : node-like
297 A table element that contains row elements.
298
299 Returns
300 -------
301 tbody : node-like
302 A <tbody>...</tbody> element.
303 """
304 raise NotImplementedError
305
306 def _parse_tfoot(self, table):
307 """Return the footer of the table if any.
308
309 Parameters
310 ----------
311 table : node-like
312 A table element that contains row elements.
313
314 Returns
315 -------
316 tfoot : node-like
317 A <tfoot>...</tfoot> element.
318 """
319 raise NotImplementedError
320
321 def _build_doc(self):
322 """Return a tree-like object that can be used to iterate over the DOM.
323
324 Returns
325 -------
326 obj : tree-like
327 """
328 raise NotImplementedError
329
330 def _build_table(self, table):
331 header = self._parse_raw_thead(table)
332 body = self._parse_raw_tbody(table)
333 footer = self._parse_raw_tfoot(table)
334 return header, body, footer
335
336 def _parse_raw_thead(self, table):
337 thead = self._parse_thead(table)
338 res = []
339 if thead:
340 res = lmap(self._text_getter, self._parse_th(thead[0]))
341 return np.array(res).squeeze() if res and len(res) == 1 else res
342
343 def _parse_raw_tfoot(self, table):
344 tfoot = self._parse_tfoot(table)
345 res = []
346 if tfoot:
347 res = lmap(self._text_getter, self._parse_td(tfoot[0]))
348 return np.array(res).squeeze() if res and len(res) == 1 else res
349
350 def _parse_raw_tbody(self, table):
351 tbody = self._parse_tbody(table)
352
353 try:
354 res = self._parse_tr(tbody[0])
355 except IndexError:
356 res = self._parse_tr(table)
357 return self._parse_raw_data(res)
358
359
360 class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
361 """HTML to DataFrame parser that uses BeautifulSoup under the hood.
362
363 See Also
364 --------
365 pandas.io.html._HtmlFrameParser
366 pandas.io.html._LxmlFrameParser
367
368 Notes
369 -----
370 Documentation strings for this class are in the base class
371 :class:`pandas.io.html._HtmlFrameParser`.
372 """
373 def __init__(self, *args, **kwargs):
374 super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args,
375 **kwargs)
376 from bs4 import SoupStrainer
377 self._strainer = SoupStrainer('table')
378
379 def _text_getter(self, obj):
380 return obj.text
381
382 def _parse_td(self, row):
383 return row.find_all(('td', 'th'))
384
385 def _parse_tr(self, element):
386 return element.find_all('tr')
387
388 def _parse_th(self, element):
389 return element.find_all('th')
390
391 def _parse_thead(self, table):
392 return table.find_all('thead')
393
394 def _parse_tbody(self, table):
395 return table.find_all('tbody')
396
397 def _parse_tfoot(self, table):
398 return table.find_all('tfoot')
399
400 def _parse_tables(self, doc, match, attrs):
401 element_name = self._strainer.name
402 tables = doc.find_all(element_name, attrs=attrs)
403
404 if not tables:
405 raise ValueError('No tables found')
406
407 result = []
408 unique_tables = set()
409
410 for table in tables:
411 if (table not in unique_tables and
412 table.find(text=match) is not None):
413 result.append(table)
414 unique_tables.add(table)
415
416 if not result:
417 raise ValueError("No tables found matching pattern %r" %
418 match.pattern)
419 return result
420
421 def _setup_build_doc(self):
422 raw_text = _read(self.io)
423 if not raw_text:
424 raise ValueError('No text parsed from document: %s' % self.io)
425 return raw_text
426
427 def _build_doc(self):
428 from bs4 import BeautifulSoup
429 return BeautifulSoup(self._setup_build_doc(), features='html5lib',
430 from_encoding=self.encoding)
431
432
433 def _build_xpath_expr(attrs):
434 """Build an xpath expression to simulate bs4's ability to pass in kwargs to
435 search for attributes when using the lxml parser.
436
437 Parameters
438 ----------
439 attrs : dict
440 A dict of HTML attributes. These are NOT checked for validity.
441
442 Returns
443 -------
444 expr : unicode
445 An XPath expression that checks for the given HTML attributes.
446 """
447 # give class attribute as class_ because class is a python keyword
448 if 'class_' in attrs:
449 attrs['class'] = attrs.pop('class_')
450
451 s = [u("@%s=%r") % (k, v) for k, v in iteritems(attrs)]
452 return u('[%s]') % ' and '.join(s)
453
454
455 _re_namespace = {'re': 'http://exslt.org/regular-expressions'}
456 _valid_schemes = 'http', 'file', 'ftp'
457
458
459 class _LxmlFrameParser(_HtmlFrameParser):
460 """HTML to DataFrame parser that uses lxml under the hood.
461
462 Warning
463 -------
464 This parser can only handle HTTP, FTP, and FILE urls.
465
466 See Also
467 --------
468 _HtmlFrameParser
469 _BeautifulSoupLxmlFrameParser
470
471 Notes
472 -----
473 Documentation strings for this class are in the base class
474 :class:`_HtmlFrameParser`.
475 """
476 def __init__(self, *args, **kwargs):
477 super(_LxmlFrameParser, self).__init__(*args, **kwargs)
478
479 def _text_getter(self, obj):
480 return obj.text_content()
481
482 def _parse_td(self, row):
483 return row.xpath('.//td|.//th')
484
485 def _parse_tr(self, table):
486 expr = './/tr[normalize-space()]'
487 return table.xpath(expr)
488
489 def _parse_tables(self, doc, match, kwargs):
490 pattern = match.pattern
491
492 # 1. check all descendants for the given pattern and only search tables
493 # 2. go up the tree until we find a table
494 query = '//table//*[re:test(text(), %r)]/ancestor::table'
495 xpath_expr = u(query) % pattern
496
497 # if any table attributes were given build an xpath expression to
498 # search for them
499 if kwargs:
500 xpath_expr += _build_xpath_expr(kwargs)
501
502 tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
503
504 if not tables:
505 raise ValueError("No tables found matching regex %r" % pattern)
506 return tables
507
508 def _build_doc(self):
509 """
510 Raises
511 ------
512 ValueError
513 * If a URL that lxml cannot parse is passed.
514
515 Exception
516 * Any other ``Exception`` thrown. For example, trying to parse a
517 URL that is syntactically correct on a machine with no internet
518 connection will fail.
519
520 See Also
521 --------
522 pandas.io.html._HtmlFrameParser._build_doc
523 """
524 from lxml.html import parse, fromstring, HTMLParser
525 from lxml.etree import XMLSyntaxError
526
527 parser = HTMLParser(recover=False, encoding=self.encoding)
528
529 try:
530 # try to parse the input in the simplest way
531 r = parse(self.io, parser=parser)
532
533 try:
534 r = r.getroot()
535 except AttributeError:
536 pass
537 except (UnicodeDecodeError, IOError):
538 # if the input is a blob of html goop
539 if not _is_url(self.io):
540 r = fromstring(self.io, parser=parser)
541
542 try:
543 r = r.getroot()
544 except AttributeError:
545 pass
546 else:
547 # not a url
548 scheme = parse_url(self.io).scheme
549 if scheme not in _valid_schemes:
550 # lxml can't parse it
551 msg = ('%r is not a valid url scheme, valid schemes are '
552 '%s') % (scheme, _valid_schemes)
553 raise ValueError(msg)
554 else:
555 # something else happened: maybe a faulty connection
556 raise
557 else:
558 if not hasattr(r, 'text_content'):
559 raise XMLSyntaxError("no text parsed from document", 0, 0, 0)
560 return r
561
562 def _parse_tbody(self, table):
563 return table.xpath('.//tbody')
564
565 def _parse_thead(self, table):
566 return table.xpath('.//thead')
567
568 def _parse_tfoot(self, table):
569 return table.xpath('.//tfoot')
570
571 def _parse_raw_thead(self, table):
572 expr = './/thead//th'
573 return [_remove_whitespace(x.text_content()) for x in
574 table.xpath(expr)]
575
576 def _parse_raw_tfoot(self, table):
577 expr = './/tfoot//th'
578 return [_remove_whitespace(x.text_content()) for x in
579 table.xpath(expr)]
580
581
582 def _expand_elements(body):
583 lens = Series(lmap(len, body))
584 lens_max = lens.max()
585 not_max = lens[lens != lens_max]
586
587 empty = ['']
588 for ind, length in iteritems(not_max):
589 body[ind] += empty * (lens_max - length)
590
591
592 def _data_to_frame(data, header, index_col, skiprows, infer_types,
593 parse_dates, tupleize_cols, thousands):
594 head, body, _ = data # _ is footer which is rarely used: ignore for now
595
596 if head:
597 body = [head] + body
598
599 if header is None: # special case when a table has <th> elements
600 header = 0
601
602 # fill out elements of body that are "ragged"
603 _expand_elements(body)
604
605 tp = TextParser(body, header=header, index_col=index_col,
606 skiprows=_get_skiprows(skiprows),
607 parse_dates=parse_dates, tupleize_cols=tupleize_cols,
608 thousands=thousands)
609 df = tp.read()
610
611 if infer_types: # TODO: rm this code so infer_types has no effect in 0.14
612 df = df.convert_objects(convert_dates='coerce')
613 else:
614 df = df.applymap(text_type)
615 return df
616
617
618 _valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser,
619 'html5lib': _BeautifulSoupHtml5LibFrameParser,
620 'bs4': _BeautifulSoupHtml5LibFrameParser}
621
622
623 def _parser_dispatch(flavor):
624 """Choose the parser based on the input flavor.
625
626 Parameters
627 ----------
628 flavor : str
629 The type of parser to use. This must be a valid backend.
630
631 Returns
632 -------
633 cls : _HtmlFrameParser subclass
634 The parser class based on the requested input flavor.
635
636 Raises
637 ------
638 ValueError
639 * If `flavor` is not a valid backend.
640 ImportError
641 * If you do not have the requested `flavor`
642 """
643 valid_parsers = list(_valid_parsers.keys())
644 if flavor not in valid_parsers:
645 raise ValueError('%r is not a valid flavor, valid flavors are %s' %
646 (flavor, valid_parsers))
647
648 if flavor in ('bs4', 'html5lib'):
649 if not _HAS_HTML5LIB:
650 raise ImportError("html5lib not found please install it")
651 if not _HAS_BS4:
652 raise ImportError("bs4 not found please install it")
653 if bs4.__version__ == LooseVersion('4.2.0'):
654 raise ValueError("You're using a version"
655 " of BeautifulSoup4 (4.2.0) that has been"
656 " known to cause problems on certain"
657 " operating systems such as Debian. "
658 "Please install a version of"
659 " BeautifulSoup4 != 4.2.0, both earlier"
660 " and later releases will work.")
661 else:
662 if not _HAS_LXML:
663 raise ImportError("lxml not found please install it")
664 return _valid_parsers[flavor]
665
666
667 def _print_as_set(s):
668 return '{%s}' % ', '.join([com.pprint_thing(el) for el in s])
669
670
671 def _validate_flavor(flavor):
672 if flavor is None:
673 flavor = 'lxml', 'bs4'
674 elif isinstance(flavor, string_types):
675 flavor = flavor,
676 elif isinstance(flavor, collections.Iterable):
677 if not all(isinstance(flav, string_types) for flav in flavor):
678 raise TypeError('Object of type %r is not an iterable of strings' %
679 type(flavor).__name__)
680 else:
681 fmt = '{0!r}' if isinstance(flavor, string_types) else '{0}'
682 fmt += ' is not a valid flavor'
683 raise ValueError(fmt.format(flavor))
684
685 flavor = tuple(flavor)
686 valid_flavors = set(_valid_parsers)
687 flavor_set = set(flavor)
688
689 if not flavor_set & valid_flavors:
690 raise ValueError('%s is not a valid set of flavors, valid flavors are '
691 '%s' % (_print_as_set(flavor_set),
692 _print_as_set(valid_flavors)))
693 return flavor
694
695
696 def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
697 parse_dates, tupleize_cols, thousands, attrs, encoding):
698 flavor = _validate_flavor(flavor)
699 compiled_match = re.compile(match) # you can pass a compiled regex here
700
701 # hack around python 3 deleting the exception variable
702 retained = None
703 for flav in flavor:
704 parser = _parser_dispatch(flav)
705 p = parser(io, compiled_match, attrs, encoding)
706
707 try:
708 tables = p.parse_tables()
709 except Exception as caught:
710 retained = caught
711 else:
712 break
713 else:
714 raise_with_traceback(retained)
715
716 return [_data_to_frame(table, header, index_col, skiprows, infer_types,
717 parse_dates, tupleize_cols, thousands)
718 for table in tables]
719
720
721 def read_html(io, match='.+', flavor=None, header=None, index_col=None,
722 skiprows=None, infer_types=None, attrs=None, parse_dates=False,
723 tupleize_cols=False, thousands=',', encoding=None):
724 r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
725
726 Parameters
727 ----------
728 io : str or file-like
729 A URL, a file-like object, or a raw string containing HTML. Note that
730 lxml only accepts the http, ftp and file url protocols. If you have a
731 URL that starts with ``'https'`` you might try removing the ``'s'``.
732
733 match : str or compiled regular expression, optional
734 The set of tables containing text matching this regex or string will be
735 returned. Unless the HTML is extremely simple you will probably need to
736 pass a non-empty string here. Defaults to '.+' (match any non-empty
737 string). The default value will return all tables contained on a page.
738 This value is converted to a regular expression so that there is
739 consistent behavior between Beautiful Soup and lxml.
740
741 flavor : str or None, container of strings
742 The parsing engine to use. 'bs4' and 'html5lib' are synonymous with
743 each other, they are both there for backwards compatibility. The
744 default of ``None`` tries to use ``lxml`` to parse and if that fails it
745 falls back on ``bs4`` + ``html5lib``.
746
747 header : int or list-like or None, optional
748 The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
749 make the columns headers.
750
751 index_col : int or list-like or None, optional
752 The column (or list of columns) to use to create the index.
753
754 skiprows : int or list-like or slice or None, optional
755 0-based. Number of rows to skip after parsing the column integer. If a
756 sequence of integers or a slice is given, will skip the rows indexed by
757 that sequence. Note that a single element sequence means 'skip the nth
758 row' whereas an integer means 'skip n rows'.
759
760 infer_types : bool, optional
761 This option is deprecated in 0.13, and will have no effect in 0.14. It
762 defaults to ``True``.
763
764 attrs : dict or None, optional
765 This is a dictionary of attributes that you can pass to use to identify
766 the table in the HTML. These are not checked for validity before being
767 passed to lxml or Beautiful Soup. However, these attributes must be
768 valid HTML table attributes to work correctly. For example, ::
769
770 attrs = {'id': 'table'}
771
772 is a valid attribute dictionary because the 'id' HTML tag attribute is
773 a valid HTML attribute for *any* HTML tag as per `this document
774 <http://www.w3.org/TR/html-markup/global-attributes.html>`__. ::
775
776 attrs = {'asdf': 'table'}
777
778 is *not* a valid attribute dictionary because 'asdf' is not a valid
779 HTML attribute even if it is a valid XML attribute. Valid HTML 4.01
780 table attributes can be found `here
781 <http://www.w3.org/TR/REC-html40/struct/tables.html#h-11.2>`__. A
782 working draft of the HTML 5 spec can be found `here
783 <http://www.w3.org/TR/html-markup/table.html>`__. It contains the
784 latest information on table attributes for the modern web.
785
786 parse_dates : bool, optional
787 See :func:`~pandas.io.parsers.read_csv` for more details. In 0.13, this
788 parameter can sometimes interact strangely with ``infer_types``. If you
789 get a large number of ``NaT`` values in your results, consider passing
790 ``infer_types=False`` and manually converting types afterwards.
791
792 tupleize_cols : bool, optional
793 If ``False`` try to parse multiple header rows into a
794 :class:`~pandas.MultiIndex`, otherwise return raw tuples. Defaults to
795 ``False``.
796
797 thousands : str, optional
798 Separator to use to parse thousands. Defaults to ``','``.
799
800 encoding : str or None, optional
801 The encoding used to decode the web page. Defaults to ``None``.``None``
802 preserves the previous encoding behavior, which depends on the
803 underlying parser library (e.g., the parser library will try to use
804 the encoding provided by the document).
805
806 Returns
807 -------
808 dfs : list of DataFrames
809
810 Notes
811 -----
812 Before using this function you should read the :ref:`gotchas about the
813 HTML parsing libraries <html-gotchas>`.
814
815 Expect to do some cleanup after you call this function. For example, you
816 might need to manually assign column names if the column names are
817 converted to NaN when you pass the `header=0` argument. We try to assume as
818 little as possible about the structure of the table and push the
819 idiosyncrasies of the HTML contained in the table to the user.
820
821 This function searches for ``<table>`` elements and only for ``<tr>``
822 and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>``
823 element in the table. ``<td>`` stands for "table data".
824
825 Similar to :func:`~pandas.read_csv` the `header` argument is applied
826 **after** `skiprows` is applied.
827
828 This function will *always* return a list of :class:`DataFrame` *or*
829 it will fail, e.g., it will *not* return an empty list.
830
831 Examples
832 --------
833 See the :ref:`read_html documentation in the IO section of the docs
834 <io.read_html>` for some examples of reading in HTML tables.
835
836 See Also
837 --------
838 pandas.io.parsers.read_csv
839 """
840 if infer_types is not None:
841 warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
842 else:
843 infer_types = True # TODO: remove effect of this in 0.14
844
845 # Type check here. We don't want to parse only to fail because of an
846 # invalid value of an integer skiprows.
847 if isinstance(skiprows, numbers.Integral) and skiprows < 0:
848 raise ValueError('cannot skip rows starting from the end of the '
849 'data (you passed a negative value)')
850 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
851 parse_dates, tupleize_cols, thousands, attrs, encoding)
852
[end of pandas/io/html.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
f8b101c5323a69c1c8a4e71c102dd51ef6e50bc4
|
SQL: read_sql functions should not inspect all tables of database
Related: comment of @balancap: https://github.com/pydata/pandas/issues/6416#issuecomment-45343694 and issue #7380 where you get a warning about a not understood type (probably in another table).
**Situation now**: in the creation of a `PandasSQLAlchemy` object (https://github.com/pydata/pandas/blob/v0.14.0/pandas/io/sql.py#L777), a full `MetaData` object is created and reflected (this means: all tables in the database are inspected and the schemas are stored as sqlalchemy `Table` objects). This is done each time `read_sql/read_sql_query/read_sql_table` is called.
**Consequence**:
- this can be costly when having a very large database or having a distant server (https://github.com/pydata/pandas/issues/6416#issuecomment-45343694).
- this can trigger warnings that do not have anything to do with your current query, or with the current table you want to read, when e.g. one of the types in other tables is not known to sqlalchemy (e.g. a postgis geometry column).
**Possible solution**:
- I think the `read_sql` functions should never inspect all tables, but only the specified table (and `read_sql_query` not even that table, as with a query this information is not used, only for `read_sql_table`)
- This can maybe be achieved with using the `only` keyword in `meta.reflect(engine, only=...)`
- For the OO API interface, we can discuss what should be the default (inspect all tables or not)
@mangecoeur @danielballan @hayd
|
Sounds good. As for the OO API, if we take a cue from `HDFStore` it should _not_ inspect all tables. `HDFStore.keys()` can run slow for > 10 keys, so it is not run on instantiation.
In the case of HDFStore, the keys can be inspected once and cached, because only one user at a time can open the file for writing. Pandas doesn't cache them, but it's possible. (I'll mention that @nkeim has implemented some code for this; maybe others have too.) But for SQL tables, with multi-user access as a full feature and a common use case, any relevant inspection should be done at query-execution time, not at connection time. Maybe the `only` keyword can make this fast.
OK, for the query functions it is clear I think they should not inspect the full table. I have a PR coming to fix that.
For the OO API, we should discuss that further together with the rest of the interface how this should look like.
Sounds good, I think the OO api can be stabilised after the functional one is sorted.
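For a rough illustration of the `only` keyword mentioned in the issue and in the comments above, here is a sketch with a placeholder engine URL and table name (not the change that was eventually merged):
```python
from sqlalchemy import create_engine, MetaData

engine = create_engine("sqlite:///example.db")  # placeholder connection string
meta = MetaData()

# Reflect only the table that is actually needed; no other table in the
# database is inspected. The table must already exist, otherwise
# reflect() raises sqlalchemy.exc.InvalidRequestError.
meta.reflect(bind=engine, only=["my_table"])
table = meta.tables["my_table"]
```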
|
2014-06-26T20:01:57Z
|
<patch>
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -76,6 +76,17 @@ def _parse_date_columns(data_frame, parse_dates):
return data_frame
+def _is_sqlalchemy_engine(con):
+ try:
+ import sqlalchemy
+ if isinstance(con, sqlalchemy.engine.Engine):
+ return True
+ else:
+ return False
+ except ImportError:
+ return False
+
+
def execute(sql, con, cur=None, params=None):
"""
Execute the given SQL query using the provided connection object.
@@ -262,7 +273,15 @@ def read_sql_table(table_name, con, index_col=None, coerce_float=True,
"""
- pandas_sql = PandasSQLAlchemy(con)
+ import sqlalchemy
+ from sqlalchemy.schema import MetaData
+ meta = MetaData(con)
+ try:
+ meta.reflect(only=[table_name])
+ except sqlalchemy.exc.InvalidRequestError:
+ raise ValueError("Table %s not found" % table_name)
+
+ pandas_sql = PandasSQLAlchemy(con, meta=meta)
table = pandas_sql.read_table(
table_name, index_col=index_col, coerce_float=coerce_float,
parse_dates=parse_dates, columns=columns)
@@ -380,6 +399,7 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
coerce_float=coerce_float, parse_dates=parse_dates)
if pandas_sql.has_table(sql):
+ pandas_sql.meta.reflect(only=[sql])
return pandas_sql.read_table(
sql, index_col=index_col, coerce_float=coerce_float,
parse_dates=parse_dates, columns=columns)
@@ -471,17 +491,9 @@ def pandasSQL_builder(con, flavor=None, meta=None, is_cursor=False):
"""
# When support for DBAPI connections is removed,
# is_cursor should not be necessary.
- try:
- import sqlalchemy
-
- if isinstance(con, sqlalchemy.engine.Engine):
- return PandasSQLAlchemy(con, meta=meta)
- else:
- if flavor == 'mysql':
- warnings.warn(_MYSQL_WARNING, FutureWarning)
- return PandasSQLLegacy(con, flavor, is_cursor=is_cursor)
-
- except ImportError:
+ if _is_sqlalchemy_engine(con):
+ return PandasSQLAlchemy(con, meta=meta)
+ else:
if flavor == 'mysql':
warnings.warn(_MYSQL_WARNING, FutureWarning)
return PandasSQLLegacy(con, flavor, is_cursor=is_cursor)
@@ -767,7 +779,6 @@ def __init__(self, engine, meta=None):
if not meta:
from sqlalchemy.schema import MetaData
meta = MetaData(self.engine)
- meta.reflect(self.engine)
self.meta = meta
@@ -812,19 +823,16 @@ def tables(self):
return self.meta.tables
def has_table(self, name):
- if self.meta.tables.get(name) is not None:
- return True
- else:
- return False
+ return self.engine.has_table(name)
def get_table(self, table_name):
return self.meta.tables.get(table_name)
def drop_table(self, table_name):
if self.engine.has_table(table_name):
+ self.meta.reflect(only=[table_name])
self.get_table(table_name).drop()
self.meta.clear()
- self.meta.reflect()
def _create_sql_schema(self, frame, table_name):
table = PandasSQLTable(table_name, self, frame=frame)
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-5588
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
circuit.repeat duplicates global phase
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master @ 4d3ed6a3
- **Python version**: 3.7.9
- **Operating system**: macOS catalina
### What is the current behavior?
Calling `circuit.repeat(1)` on a circuit with global phase seems to duplicate the global phase.
### Steps to reproduce the problem
If `u` is a circuit with only a Z gate and `u_with_phase` the same circuit but with a global phase of pi, then
```python
In [75]: u.draw()
Out[75]:
┌───┐
q_0: ┤ Z ├
└───┘
In [79]: u_with_phase.draw()
Out[79]:
global phase: π
┌───┐
q_0: ┤ Z ├
└───┘
In [80]: Operator(u) == Operator(u_with_phase) # should be False since the matrices are not the same
Out[80]: False
In [81]: Operator(u) == Operator(u_with_phase.power(1)) # should still be False
Out[81]: True
```
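(For completeness, a minimal sketch of how `u` and `u_with_phase` above might have been constructed; the construction is assumed, not part of the original report.)

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

u = QuantumCircuit(1)
u.z(0)

u_with_phase = QuantumCircuit(1, global_phase=np.pi)  # same circuit, global phase of pi
u_with_phase.z(0)

print(Operator(u) == Operator(u_with_phase))           # False, as expected
print(Operator(u) == Operator(u_with_phase.power(1)))  # True on the affected version (the bug)
```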
### What is the expected behavior?
### Suggested solutions
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)
4
5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = BasicAer.get_backend('qasm_simulator')
39 >>> transpiled_qc = transpile(qc, backend_sim)
40 >>> result = backend_sim.run(assemble(transpiled_qc)).result()
41 >>> print(result.get_counts(qc))
42 ```
43
44 In this case, the output will be:
45
46 ```python
47 {'00': 513, '11': 511}
48 ```
49
50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
51 run the same program on a real quantum computer via IBMQ.
52
53 ### Executing your code on a real quantum chip
54
55 You can also use Qiskit to execute your code on a
56 **real quantum chip**.
57 In order to do so, you need to configure Qiskit for using the credentials in
58 your IBM Q account:
59
60 #### Configure your IBMQ credentials
61
62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
63
64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
65
66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
67
68 ```python
69 >>> from qiskit import IBMQ
70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
71 ```
72
73 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
74 Once they are stored, at any point in the future you can load and use them
75 in your program simply via:
76
77 ```python
78 >>> from qiskit import IBMQ
79 >>> IBMQ.load_account()
80 ```
81
82 Those who do not want to save their credentials to disk should use instead:
83
84 ```python
85 >>> from qiskit import IBMQ
86 >>> IBMQ.enable_account('MY_API_TOKEN')
87 ```
88
89 and the token will only be active for the session. For examples using Terra with real
90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
91 the levels.
92
93 ## Contribution Guidelines
94
95 If you'd like to contribute to Qiskit Terra, please take a look at our
96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
97
98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack)
100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
101 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
102
103 ## Next Steps
104
105 Now you're set up and ready to check out some of the other examples from our
106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
107
108 ## Authors and Citation
109
110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
112
113 ## Changelog and Release Notes
114
115 The changelog for a particular release is dynamically generated and gets
116 written to the release page on Github for each release. For example, you can
117 find the page for the `0.9.0` release here:
118
119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
120
121 The changelog for the current release can be found in the releases tab:
122 
123 The changelog provides a quick overview of notable changes for a given
124 release.
125
126 Additionally, as part of each release detailed release notes are written to
127 document in detail what has changed as part of a release. This includes any
128 documentation on potential breaking changes on upgrade and new features.
129 For example, You can find the release notes for the `0.9.0` release in the
130 Qiskit documentation here:
131
132 https://qiskit.org/documentation/release_notes.html#terra-0-9
133
134 ## License
135
136 [Apache License 2.0](LICENSE.txt)
137
[end of README.md]
[start of qiskit/circuit/library/grover_operator.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2020.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """The Grover operator."""
14
15 from typing import List, Optional, Union
16 import numpy
17 from qiskit.circuit import QuantumCircuit, QuantumRegister, AncillaRegister
18 from qiskit.quantum_info import Statevector, Operator, DensityMatrix
19 from .standard_gates import MCXGate
20
21
22 class GroverOperator(QuantumCircuit):
23 r"""The Grover operator.
24
25 Grover's search algorithm [1, 2] consists of repeated applications of the so-called
26 Grover operator used to amplify the amplitudes of the desired output states.
27 This operator, :math:`\mathcal{Q}`, consists of the phase oracle, :math:`\mathcal{S}_f`,
28 zero phase-shift or zero reflection, :math:`\mathcal{S}_0`, and an
29 input state preparation :math:`\mathcal{A}`:
30
31 .. math::
32 \mathcal{Q} = \mathcal{A} \mathcal{S}_0 \mathcal{A}^\dagger \mathcal{S}_f
33
34 In the standard Grover search we have :math:`\mathcal{A} = H^{\otimes n}`:
35
36 .. math::
37 \mathcal{Q} = H^{\otimes n} \mathcal{S}_0 H^{\otimes n} \mathcal{S}_f
38 = D \mathcal{S_f}
39
40 The operation :math:`D = H^{\otimes n} \mathcal{S}_0 H^{\otimes n}` is also referred to as
41 diffusion operator. In this formulation we can see that Grover's operator consists of two
42 steps: first, the phase oracle multiplies the good states by -1 (with :math:`\mathcal{S}_f`)
43 and then the whole state is reflected around the mean (with :math:`D`).
44
45 This class allows setting a different state preparation, as in quantum amplitude
46 amplification (a generalization of Grover's algorithm), :math:`\mathcal{A}` might not be
47 a layer of Hadamard gates [3].
48
49 The action of the phase oracle :math:`\mathcal{S}_f` is defined as
50
51 .. math::
52 \mathcal{S}_f: |x\rangle \mapsto (-1)^{f(x)}|x\rangle
53
54 where :math:`f(x) = 1` if :math:`x` is a good state and 0 otherwise. To highlight the fact
55 that this oracle flips the phase of the good states and does not flip the state of a result
56 qubit, we call :math:`\mathcal{S}_f` a phase oracle.
57
58 Note that you can easily construct a phase oracle from a bitflip oracle by sandwiching the
59 controlled X gate on the result qubit between X and H gates. For instance
60
61 .. parsed-literal::
62
63 Bitflip oracle Phaseflip oracle
64 q_0: ──■── q_0: ────────────■────────────
65 ┌─┴─┐ ┌───┐┌───┐┌─┴─┐┌───┐┌───┐
66 out: ┤ X ├ out: ┤ X ├┤ H ├┤ X ├┤ H ├┤ X ├
67 └───┘ └───┘└───┘└───┘└───┘└───┘
68
69 There is some flexibility in defining the oracle and :math:`\mathcal{A}` operator. Before the
70 Grover operator is applied in Grover's algorithm, the qubits are first prepared with one
71 application of the :math:`\mathcal{A}` operator (or Hadamard gates in the standard formulation).
72 Thus, we always have an operation of the form
73 :math:`\mathcal{A} \mathcal{S}_f \mathcal{A}^\dagger`. Therefore it is possible to move
74 bitflip logic into :math:`\mathcal{A}` and leave the oracle to do only phaseflips via Z gates
75 based on the bitflips. One possible use-case for this is oracles that do not uncompute the
76 state qubits.
77
78 The zero reflection :math:`\mathcal{S}_0` is usually defined as
79
80 .. math::
81 \mathcal{S}_0 = 2 |0\rangle^{\otimes n} \langle 0|^{\otimes n} - \mathbb{I}_n
82
83 where :math:`\mathbb{I}_n` is the identity on :math:`n` qubits.
84 By default, this class implements the negative version
85 :math:`\mathbb{I}_n - 2 |0\rangle^{\otimes n} \langle 0|^{\otimes n}`, since this can simply
86 be implemented with a multi-controlled Z sandwiched by X gates on the target qubit and the
87 introduced global phase does not matter for Grover's algorithm.
88
89 Examples:
90 >>> from qiskit.circuit import QuantumCircuit
91 >>> from qiskit.circuit.library import GroverOperator
92 >>> oracle = QuantumCircuit(2)
93 >>> oracle.z(0) # good state = first qubit is |1>
94 >>> grover_op = GroverOperator(oracle, insert_barriers=True)
95 >>> grover_op.draw()
96 ┌───┐ ░ ┌───┐ ░ ┌───┐ ┌───┐ ░ ┌───┐
97 state_0: ┤ Z ├─░─┤ H ├─░─┤ X ├───────■──┤ X ├──────░─┤ H ├
98 └───┘ ░ ├───┤ ░ ├───┤┌───┐┌─┴─┐├───┤┌───┐ ░ ├───┤
99 state_1: ──────░─┤ H ├─░─┤ X ├┤ H ├┤ X ├┤ H ├┤ X ├─░─┤ H ├
100 ░ └───┘ ░ └───┘└───┘└───┘└───┘└───┘ ░ └───┘
101
102 >>> oracle = QuantumCircuit(1)
103 >>> oracle.z(0) # the qubit state |1> is the good state
104 >>> state_preparation = QuantumCircuit(1)
105 >>> state_preparation.ry(0.2, 0) # non-uniform state preparation
106 >>> grover_op = GroverOperator(oracle, state_preparation)
107 >>> grover_op.draw()
108 ┌───┐┌──────────┐┌───┐┌───┐┌───┐┌─────────┐
109 state_0: ┤ Z ├┤ RY(-0.2) ├┤ X ├┤ Z ├┤ X ├┤ RY(0.2) ├
110 └───┘└──────────┘└───┘└───┘└───┘└─────────┘
111
112 >>> oracle = QuantumCircuit(4)
113 >>> oracle.z(3)
114 >>> reflection_qubits = [0, 3]
115 >>> state_preparation = QuantumCircuit(4)
116 >>> state_preparation.cry(0.1, 0, 3)
117 >>> state_preparation.ry(0.5, 3)
118 >>> grover_op = GroverOperator(oracle, state_preparation,
119 ... reflection_qubits=reflection_qubits)
120 >>> grover_op.draw()
121 ┌───┐ ┌───┐
122 state_0: ──────────────────────■──────┤ X ├───────■──┤ X ├──────────■────────────────
123 │ └───┘ │ └───┘ │
124 state_1: ──────────────────────┼──────────────────┼─────────────────┼────────────────
125 │ │ │
126 state_2: ──────────────────────┼──────────────────┼─────────────────┼────────────────
127 ┌───┐┌──────────┐┌────┴─────┐┌───┐┌───┐┌─┴─┐┌───┐┌───┐┌────┴────┐┌─────────┐
128 state_3: ┤ Z ├┤ RY(-0.5) ├┤ RY(-0.1) ├┤ X ├┤ H ├┤ X ├┤ H ├┤ X ├┤ RY(0.1) ├┤ RY(0.5) ├
129 └───┘└──────────┘└──────────┘└───┘└───┘└───┘└───┘└───┘└─────────┘└─────────┘
130
131 >>> mark_state = Statevector.from_label('011')
132 >>> diffuse_operator = 2 * DensityMatrix.from_label('000') - Operator.from_label('III')
133 >>> grover_op = GroverOperator(oracle=mark_state, zero_reflection=diffuse_operator)
134 >>> grover_op.draw(fold=70)
135 ┌─────────────────┐ ┌───┐ »
136 state_0: ┤0 ├──────┤ H ├──────────────────────────»
137 │ │┌─────┴───┴─────┐ ┌───┐ »
138 state_1: ┤1 UCRZ(0,pi,0,0) ├┤0 ├─────┤ H ├──────────»
139 │ ││ UCRZ(pi/2,0) │┌────┴───┴────┐┌───┐»
140 state_2: ┤2 ├┤1 ├┤ UCRZ(-pi/4) ├┤ H ├»
141 └─────────────────┘└───────────────┘└─────────────┘└───┘»
142 « ┌─────────────────┐ ┌───┐
143 «state_0: ┤0 ├──────┤ H ├─────────────────────────
144 « │ │┌─────┴───┴─────┐ ┌───┐
145 «state_1: ┤1 UCRZ(pi,0,0,0) ├┤0 ├────┤ H ├──────────
146 « │ ││ UCRZ(pi/2,0) │┌───┴───┴────┐┌───┐
147 «state_2: ┤2 ├┤1 ├┤ UCRZ(pi/4) ├┤ H ├
148 « └─────────────────┘└───────────────┘└────────────┘└───┘
149
150 References:
151 [1]: L. K. Grover (1996), A fast quantum mechanical algorithm for database search,
152 `arXiv:quant-ph/9605043 <https://arxiv.org/abs/quant-ph/9605043>`_.
153 [2]: I. Chuang & M. Nielsen, Quantum Computation and Quantum Information,
154 Cambridge: Cambridge University Press, 2000. Chapter 6.1.2.
155 [3]: Brassard, G., Hoyer, P., Mosca, M., & Tapp, A. (2000).
156 Quantum Amplitude Amplification and Estimation.
157 `arXiv:quant-ph/0005055 <http://arxiv.org/abs/quant-ph/0005055>`_.
158 """
159
160 def __init__(self, oracle: Union[QuantumCircuit, Statevector],
161 state_preparation: Optional[QuantumCircuit] = None,
162 zero_reflection: Optional[Union[QuantumCircuit, DensityMatrix, Operator]] = None,
163 reflection_qubits: Optional[List[int]] = None,
164 insert_barriers: bool = False,
165 mcx_mode: str = 'noancilla',
166 name: str = 'Q') -> None:
167 r"""
168 Args:
169 oracle: The phase oracle implementing a reflection about the bad state. Note that this
170 is not a bitflip oracle, see the docstring for more information.
171 state_preparation: The operator preparing the good and bad state.
172 For Grover's algorithm, this is a n-qubit Hadamard gate and for amplitude
173 amplification or estimation the operator :math:`\mathcal{A}`.
174 zero_reflection: The reflection about the zero state, :math:`\mathcal{S}_0`.
175 reflection_qubits: Qubits on which the zero reflection acts on.
176 insert_barriers: Whether barriers should be inserted between the reflections and A.
177 mcx_mode: The mode to use for building the default zero reflection.
178 name: The name of the circuit.
179 """
180 super().__init__(name=name)
181
182 # store inputs
183 if isinstance(oracle, Statevector):
184 from qiskit.circuit.library import Diagonal # pylint: disable=cyclic-import
185 oracle = Diagonal((-1) ** oracle.data)
186 self._oracle = oracle
187
188 if isinstance(zero_reflection, (Operator, DensityMatrix)):
189 from qiskit.circuit.library import Diagonal # pylint: disable=cyclic-import
190 zero_reflection = Diagonal(zero_reflection.data.diagonal())
191 self._zero_reflection = zero_reflection
192
193 self._reflection_qubits = reflection_qubits
194 self._state_preparation = state_preparation
195 self._insert_barriers = insert_barriers
196 self._mcx_mode = mcx_mode
197
198 # build circuit
199 self._build()
200
201 @property
202 def reflection_qubits(self):
203 """Reflection qubits, on which S0 is applied (if S0 is not user-specified)."""
204 if self._reflection_qubits is not None:
205 return self._reflection_qubits
206
207 num_state_qubits = self.oracle.num_qubits - self.oracle.num_ancillas
208 return list(range(num_state_qubits))
209
210 @property
211 def zero_reflection(self) -> QuantumCircuit:
212 """The subcircuit implementing the reflection about 0."""
213 if self._zero_reflection is not None:
214 return self._zero_reflection
215
216 num_state_qubits = self.oracle.num_qubits - self.oracle.num_ancillas
217 return _zero_reflection(num_state_qubits, self.reflection_qubits, self._mcx_mode)
218
219 @property
220 def state_preparation(self) -> QuantumCircuit:
221 """The subcircuit implementing the A operator or Hadamards."""
222 if self._state_preparation is not None:
223 return self._state_preparation
224
225 num_state_qubits = self.oracle.num_qubits - self.oracle.num_ancillas
226 hadamards = QuantumCircuit(num_state_qubits, name='H')
227 # apply Hadamards only on reflection qubits, rest will cancel out
228 hadamards.h(self.reflection_qubits)
229 return hadamards
230
231 @property
232 def oracle(self):
233 """The oracle implementing a reflection about the bad state."""
234 return self._oracle
235
236 def _build(self):
237 num_state_qubits = self.oracle.num_qubits - self.oracle.num_ancillas
238 self.add_register(QuantumRegister(num_state_qubits, name='state'))
239 num_ancillas = numpy.max([self.oracle.num_ancillas,
240 self.zero_reflection.num_ancillas,
241 self.state_preparation.num_ancillas])
242 if num_ancillas > 0:
243 self.add_register(AncillaRegister(num_ancillas, name='ancilla'))
244
245 self.compose(self.oracle, list(range(self.oracle.num_qubits)), inplace=True)
246 if self._insert_barriers:
247 self.barrier()
248 self.compose(self.state_preparation.inverse(),
249 list(range(self.state_preparation.num_qubits)),
250 inplace=True)
251 if self._insert_barriers:
252 self.barrier()
253 self.compose(self.zero_reflection, list(range(self.zero_reflection.num_qubits)),
254 inplace=True)
255 if self._insert_barriers:
256 self.barrier()
257 self.compose(self.state_preparation, list(range(self.state_preparation.num_qubits)),
258 inplace=True)
259
260
261 # TODO use the oracle compiler or the bit string oracle
262 def _zero_reflection(num_state_qubits: int, qubits: List[int], mcx_mode: Optional[str] = None
263 ) -> QuantumCircuit:
264 qr_state = QuantumRegister(num_state_qubits, 'state')
265 reflection = QuantumCircuit(qr_state, name='S_0')
266
267 num_ancillas = MCXGate.get_num_ancilla_qubits(len(qubits) - 1, mcx_mode)
268 if num_ancillas > 0:
269 qr_ancilla = AncillaRegister(num_ancillas, 'ancilla')
270 reflection.add_register(qr_ancilla)
271 else:
272 qr_ancilla = []
273
274 reflection.x(qubits)
275 if len(qubits) == 1:
276 reflection.z(0) # MCX does not allow 0 control qubits, therefore this is separate
277 else:
278 reflection.h(qubits[-1])
279 reflection.mcx(qubits[:-1], qubits[-1], qr_ancilla[:], mode=mcx_mode)
280 reflection.h(qubits[-1])
281 reflection.x(qubits)
282
283 return reflection
284
[end of qiskit/circuit/library/grover_operator.py]
[start of qiskit/transpiler/passes/optimization/optimize_1q_gates.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Optimize chains of single-qubit u1, u2, u3 gates by combining them into a single gate."""
14
15 from itertools import groupby
16
17 import numpy as np
18
19 from qiskit.transpiler.exceptions import TranspilerError
20 from qiskit.circuit.library.standard_gates.p import PhaseGate
21 from qiskit.circuit.library.standard_gates.u import UGate
22 from qiskit.circuit.library.standard_gates.u1 import U1Gate
23 from qiskit.circuit.library.standard_gates.u2 import U2Gate
24 from qiskit.circuit.library.standard_gates.u3 import U3Gate
25 from qiskit.circuit.gate import Gate
26 from qiskit.transpiler.basepasses import TransformationPass
27 from qiskit.quantum_info.synthesis import Quaternion
28
29 _CHOP_THRESHOLD = 1e-15
30
31
32 class Optimize1qGates(TransformationPass):
33 """Optimize chains of single-qubit u1, u2, u3 gates by combining them into a single gate."""
34
35 def __init__(self, basis=None, eps=1e-15):
36 """Optimize1qGates initializer.
37
38 Args:
39 basis (list[str]): Basis gates to consider, e.g. `['u3', 'cx']`. For the effects
40 of this pass, the basis is the set intersection between the `basis` parameter and
41 the set `{'u1','u2','u3', 'u', 'p'}`.
42 eps (float): EPS to check against
43 """
44 super().__init__()
45 self.basis = basis if basis else ["u1", "u2", "u3"]
46 self.eps = eps
47
48 def run(self, dag):
49 """Run the Optimize1qGates pass on `dag`.
50
51 Args:
52 dag (DAGCircuit): the DAG to be optimized.
53
54 Returns:
55 DAGCircuit: the optimized DAG.
56
57 Raises:
58 TranspilerError: if YZY and ZYZ angles do not give same rotation matrix.
59 """
60 use_u = 'u' in self.basis
61 use_p = 'p' in self.basis
62 runs = dag.collect_runs(["u1", "u2", "u3", "u", 'p'])
63 runs = _split_runs_on_parameters(runs)
64 for run in runs:
65 if use_p:
66 right_name = "p"
67 else:
68 right_name = "u1"
69 right_parameters = (0, 0, 0) # (theta, phi, lambda)
70 right_global_phase = 0
71 for current_node in run:
72 left_name = current_node.name
73 if (current_node.condition is not None
74 or len(current_node.qargs) != 1
75 or left_name not in ["p", "u1", "u2", "u3", 'u', "id"]):
76 raise TranspilerError("internal error")
77 if left_name in ("u1", "p"):
78 left_parameters = (0, 0, current_node.op.params[0])
79 elif left_name == "u2":
80 left_parameters = (np.pi / 2, current_node.op.params[0],
81 current_node.op.params[1])
82 elif left_name in ("u3", 'u'):
83 left_parameters = tuple(current_node.op.params)
84 else:
85 if use_p:
86 left_name = "p"
87 else:
88 left_name = "u1" # replace id with u1
89 left_parameters = (0, 0, 0)
90 if (current_node.op.definition is not None and
91 current_node.op.definition.global_phase):
92 right_global_phase += current_node.op.definition.global_phase
93 # If there are any sympy objects coming from the gate convert
94 # to numpy.
95 left_parameters = tuple([float(x) for x in left_parameters])
96 # Compose gates
97 name_tuple = (left_name, right_name)
98 if name_tuple in (("u1", "u1"), ("p", "p")):
99 # u1(lambda1) * u1(lambda2) = u1(lambda1 + lambda2)
100 right_parameters = (0, 0, right_parameters[2] +
101 left_parameters[2])
102 elif name_tuple in (("u1", "u2"), ("p", "u2")):
103 # u1(lambda1) * u2(phi2, lambda2) = u2(phi2 + lambda1, lambda2)
104 right_parameters = (np.pi / 2, right_parameters[1] +
105 left_parameters[2], right_parameters[2])
106 elif name_tuple in (("u2", "u1"), ("u2", "p")):
107 # u2(phi1, lambda1) * u1(lambda2) = u2(phi1, lambda1 + lambda2)
108 right_name = "u2"
109 right_parameters = (np.pi / 2, left_parameters[1],
110 right_parameters[2] + left_parameters[2])
111 elif name_tuple in (("u1", "u3"), ("u1", "u"), ("p", "u3"), ("p", "u")):
112 # u1(lambda1) * u3(theta2, phi2, lambda2) =
113 # u3(theta2, phi2 + lambda1, lambda2)
114 right_parameters = (right_parameters[0], right_parameters[1] +
115 left_parameters[2], right_parameters[2])
116 elif name_tuple in (("u3", "u1"), ('u', 'u1'), ("u3", "p"), ("u", "p")):
117 # u3(theta1, phi1, lambda1) * u1(lambda2) =
118 # u3(theta1, phi1, lambda1 + lambda2)
119 if use_u:
120 right_name = 'u'
121 else:
122 right_name = "u3"
123 right_parameters = (left_parameters[0], left_parameters[1],
124 right_parameters[2] + left_parameters[2])
125 elif name_tuple == ("u2", "u2"):
126 # Using Ry(pi/2).Rz(2*lambda).Ry(pi/2) =
127 # Rz(pi/2).Ry(pi-2*lambda).Rz(pi/2),
128 # u2(phi1, lambda1) * u2(phi2, lambda2) =
129 # u3(pi - lambda1 - phi2, phi1 + pi/2, lambda2 + pi/2)
130 if use_u:
131 right_name = 'u'
132 else:
133 right_name = "u3"
134 right_parameters = (np.pi - left_parameters[2] -
135 right_parameters[1], left_parameters[1] +
136 np.pi / 2, right_parameters[2] +
137 np.pi / 2)
138 elif name_tuple[1] == "nop":
139 right_name = left_name
140 right_parameters = left_parameters
141 else:
142 # For composing u3's or u2's with u3's, use
143 # u2(phi, lambda) = u3(pi/2, phi, lambda)
144 # together with the qiskit.mapper.compose_u3 method.
145 if use_u:
146 right_name = 'u'
147 else:
148 right_name = "u3"
149 # Evaluate the symbolic expressions for efficiency
150 right_parameters = Optimize1qGates.compose_u3(left_parameters[0],
151 left_parameters[1],
152 left_parameters[2],
153 right_parameters[0],
154 right_parameters[1],
155 right_parameters[2])
156 # Why evalf()? This program:
157 # OPENQASM 2.0;
158 # include "qelib1.inc";
159 # qreg q[2];
160 # creg c[2];
161 # u3(0.518016983430947*pi,1.37051598592907*pi,1.36816383603222*pi) q[0];
162 # u3(1.69867232277986*pi,0.371448347747471*pi,0.461117217930936*pi) q[0];
163 # u3(0.294319836336836*pi,0.450325871124225*pi,1.46804720442555*pi) q[0];
164 # measure q -> c;
165 # took >630 seconds (did not complete) to optimize without
166 # calling evalf() at all, 19 seconds to optimize calling
167 # evalf() AFTER compose_u3, and 1 second to optimize
168 # calling evalf() BEFORE compose_u3.
169 # 1. Here down, when we simplify, we add f(theta) to lambda to
170 # correct the global phase when f(theta) is 2*pi. This isn't
171 # necessary but the other steps preserve the global phase, so
172 # we continue in that manner.
173 # 2. The final step will remove Z rotations by 2*pi.
174 # 3. Note that is_zero is true only if the expression is exactly
175 # zero. If the input expressions have already been evaluated
176 # then these final simplifications will not occur.
177 # TODO After we refactor, we should have separate passes for
178 # exact and approximate rewriting.
179
180 # Y rotation is 0 mod 2*pi, so the gate is a u1
181 if abs(np.mod(right_parameters[0],
182 (2 * np.pi))) < self.eps and right_name != "u1" \
183 and right_name != "p":
184 if use_p:
185 right_name = "p"
186 else:
187 right_name = "u1"
188 right_parameters = (0, 0, right_parameters[1] +
189 right_parameters[2] +
190 right_parameters[0])
191 # Y rotation is pi/2 or -pi/2 mod 2*pi, so the gate is a u2
192 if right_name == "u3" or 'u':
193 # theta = pi/2 + 2*k*pi
194 right_angle = right_parameters[0] - np.pi / 2
195 if abs(right_angle) < self.eps:
196 right_angle = 0
197 if abs(np.mod((right_angle),
198 2 * np.pi)) < self.eps:
199 right_name = "u2"
200 right_parameters = (np.pi / 2, right_parameters[1],
201 right_parameters[2] +
202 (right_parameters[0] - np.pi / 2))
203 # theta = -pi/2 + 2*k*pi
204 right_angle = right_parameters[0] + np.pi / 2
205 if abs(right_angle) < self.eps:
206 right_angle = 0
207 if abs(np.mod(right_angle,
208 2 * np.pi)) < self.eps:
209 right_name = "u2"
210 right_parameters = (np.pi / 2, right_parameters[1] +
211 np.pi, right_parameters[2] -
212 np.pi + (right_parameters[0] +
213 np.pi / 2))
214 # u1 and lambda is 0 mod 2*pi so gate is nop (up to a global phase)
215 if right_name in ("u1", "p") and abs(
216 np.mod(right_parameters[2], 2 * np.pi)) < self.eps:
217 right_name = "nop"
218
219 if right_name == "u2" and "u2" not in self.basis:
220 if use_u:
221 right_name = 'u'
222 else:
223 right_name = "u3"
224 if right_name in ("u1", "p") and right_name not in self.basis:
225 if use_u:
226 right_name = 'u'
227 else:
228 right_name = "u3"
229
230 new_op = Gate(name="", num_qubits=1, params=[])
231 if right_name == "u1":
232 new_op = U1Gate(right_parameters[2])
233 if right_name == "p":
234 new_op = PhaseGate(right_parameters[2])
235 if right_name == "u2":
236 new_op = U2Gate(right_parameters[1], right_parameters[2])
237 if right_name == "u":
238 if "u" in self.basis:
239 new_op = UGate(*right_parameters)
240 if right_name == "u3":
241 if "u3" in self.basis:
242 new_op = U3Gate(*right_parameters)
243 else:
244 raise TranspilerError('It was not possible to use the basis %s' % self.basis)
245
246 dag.global_phase += right_global_phase
247
248 if right_name != 'nop':
249 dag.substitute_node(run[0], new_op, inplace=True)
250
251 # Delete the other nodes in the run
252 for current_node in run[1:]:
253 dag.remove_op_node(current_node)
254 if right_name == "nop":
255 dag.remove_op_node(run[0])
256
257 return dag
258
259 @staticmethod
260 def compose_u3(theta1, phi1, lambda1, theta2, phi2, lambda2):
261 """Return a triple theta, phi, lambda for the product.
262
263 u3(theta, phi, lambda)
264 = u3(theta1, phi1, lambda1).u3(theta2, phi2, lambda2)
265 = Rz(phi1).Ry(theta1).Rz(lambda1+phi2).Ry(theta2).Rz(lambda2)
266 = Rz(phi1).Rz(phi').Ry(theta').Rz(lambda').Rz(lambda2)
267 = u3(theta', phi1 + phi', lambda2 + lambda')
268
269 Return theta, phi, lambda.
270 """
271 # Careful with the factor of two in yzy_to_zyz
272 thetap, phip, lambdap = Optimize1qGates.yzy_to_zyz((lambda1 + phi2), theta1, theta2)
273 (theta, phi, lamb) = (thetap, phi1 + phip, lambda2 + lambdap)
274
275 return (theta, phi, lamb)
276
277 @staticmethod
278 def yzy_to_zyz(xi, theta1, theta2, eps=1e-9): # pylint: disable=invalid-name
279 """Express a Y.Z.Y single qubit gate as a Z.Y.Z gate.
280
281 Solve the equation
282
283 .. math::
284
285 Ry(theta1).Rz(xi).Ry(theta2) = Rz(phi).Ry(theta).Rz(lambda)
286
287 for theta, phi, and lambda.
288
289 Return a solution theta, phi, and lambda.
290 """
291 quaternion_yzy = Quaternion.from_euler([theta1, xi, theta2], 'yzy')
292 euler = quaternion_yzy.to_zyz()
293 quaternion_zyz = Quaternion.from_euler(euler, 'zyz')
294 # output order different than rotation order
295 out_angles = (euler[1], euler[0], euler[2])
296 abs_inner = abs(quaternion_zyz.data.dot(quaternion_yzy.data))
297 if not np.allclose(abs_inner, 1, eps):
298 raise TranspilerError('YZY and ZYZ angles do not give same rotation matrix.')
299 out_angles = tuple(0 if np.abs(angle) < _CHOP_THRESHOLD else angle
300 for angle in out_angles)
301 return out_angles
302
303
304 def _split_runs_on_parameters(runs):
305 """Finds runs containing parameterized gates and splits them into sequential
306 runs excluding the parameterized gates.
307 """
308
309 out = []
310 for run in runs:
311 groups = groupby(run, lambda x: x.op.is_parameterized())
312
313 for group_is_parameterized, gates in groups:
314 if not group_is_parameterized:
315 out.append(list(gates))
316
317 return out
318
[end of qiskit/transpiler/passes/optimization/optimize_1q_gates.py]
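As a rough usage sketch for the pass above (assuming a Terra version of roughly this era, where `u1` is still available; not part of the repository files), a chain of adjacent `u1` rotations on one qubit is merged into a single gate:

```python
from qiskit import QuantumCircuit
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Optimize1qGates

qc = QuantumCircuit(1)
qc.u1(0.1, 0)  # three adjacent u1 rotations on the same qubit
qc.u1(0.2, 0)
qc.u1(0.3, 0)

pm = PassManager(Optimize1qGates(basis=["u1", "u2", "u3"]))
merged = pm.run(qc)
print(merged.count_ops())  # expect a single u1 gate with angle 0.6
```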
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
cf7433a62fdeab9014f4b46abee9099ac1086be8
|
circuit.repeat duplicates global phase
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master @ 4d3ed6a3
- **Python version**: 3.7.9
- **Operating system**: macOS catalina
### What is the current behavior?
Calling `circuit.repeat(1)` on a circuit with global phase seems to duplicate the global phase.
### Steps to reproduce the problem
If `u` is a circuit with only a Z gate and `u_with_phase` the same circuit but with a global phase of pi, then
```python
In [75]: u.draw()
Out[75]:
┌───┐
q_0: ┤ Z ├
└───┘
In [79]: u_with_phase.draw()
Out[79]:
global phase: π
┌───┐
q_0: ┤ Z ├
└───┘
In [80]: Operator(u) == Operator(u_with_phase) # should be False since the matrices are not the same
Out[80]: False
In [81]: Operator(u) == Operator(u_with_phase.power(1)) # should still be False
Out[81]: True
```
### What is the expected behavior?
### Suggested solutions
|
2021-01-06T14:13:26Z
|
<patch>
diff --git a/qiskit/circuit/quantumcircuit.py b/qiskit/circuit/quantumcircuit.py
--- a/qiskit/circuit/quantumcircuit.py
+++ b/qiskit/circuit/quantumcircuit.py
@@ -437,8 +437,7 @@ def repeat(self, reps):
QuantumCircuit: A circuit containing ``reps`` repetitions of this circuit.
"""
repeated_circ = QuantumCircuit(*self.qregs, *self.cregs,
- name=self.name + '**{}'.format(reps),
- global_phase=reps * self.global_phase)
+ name=self.name + '**{}'.format(reps))
# benefit of appending instructions: decomposing shows the subparts, i.e. the power
# is actually `reps` times this circuit, and it is currently much faster than `compose`.
</patch>
|
[]
|
[]
| ||||
numpy__numpy-6660
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow pathlib.Path arguments
`savez`, `savez_compressed`, and `load` all take `str` as an argument, but give an error when passed a `pathlib.Path` argument as a filename. Now that `pathlib.Path` is in the standard library, it would be nice if these functions could accept a `Path` instance as an argument, in addition to `str` and `file` objects.
Example:
``` python
p = Path('test.npz')
np.savez_compressed(p, data=[10])
```
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-16-304bca81cdf4> in <module>()
1 p = Path('test.npz')
----> 2 np.savez_compressed(p, data=[10])
/usr/lib/python3.5/site-packages/numpy/lib/npyio.py in savez_compressed(file, *args, **kwds)
510
511 """
--> 512 _savez(file, args, kwds, True)
513
514
/usr/lib/python3.5/site-packages/numpy/lib/npyio.py in _savez(file, args, kwds, compress)
550 fid.close()
551 fid = None
--> 552 zipf.write(tmpfile, arcname=fname)
553 finally:
554 if fid:
/usr/lib/python3.5/zipfile.py in write(self, filename, arcname, compress_type)
1481 zip64 = self._allowZip64 and \
1482 zinfo.file_size * 1.05 > ZIP64_LIMIT
-> 1483 self.fp.write(zinfo.FileHeader(zip64))
1484 file_size = 0
1485 while 1:
/usr/lib/python3.5/zipfile.py in write(self, data)
675
676 def write(self, data):
--> 677 n = self.fp.write(data)
678 self.offset += n
679 return n
AttributeError: 'PosixPath' object has no attribute 'write'
```
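Until `Path` objects are accepted directly, a hedged workaround sketch (illustrative only, not a proposed numpy API change) is to convert the `Path` to a plain string first:

```python
from pathlib import Path
import numpy as np

p = Path('test.npz')
np.savez_compressed(str(p), data=[10])  # str(p) works where the Path object currently fails

loaded = np.load(str(p))
print(loaded['data'])  # [10]
```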
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="http://www.numpy.org/_static/numpy_logo.png"><br>
3 </div>
4 -----------------
5 | **`Travis CI Status`** |
6 |-------------------|
7 |[](https://travis-ci.org/numpy/numpy)|
8
9
10 NumPy is the fundamental package needed for scientific computing with Python.
11 This package contains:
12
13 * a powerful N-dimensional array object
14 * sophisticated (broadcasting) functions
15 * tools for integrating C/C++ and Fortran code
16 * useful linear algebra, Fourier transform, and random number capabilities.
17
18 It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
19
20 More information can be found at the website:
21
22 * http://www.numpy.org
23
24 After installation, tests can be run (if ``nose`` is installed) with:
25
26 python -c 'import numpy; numpy.test()'
27
28 The most current development version is always available from our
29 git repository:
30
31 * http://github.com/numpy/numpy
32
[end of README.md]
[start of numpy/lib/utils.py]
1 from __future__ import division, absolute_import, print_function
2
3 import os
4 import sys
5 import types
6 import re
7 import warnings
8
9 from numpy.core.numerictypes import issubclass_, issubsctype, issubdtype
10 from numpy.core import ndarray, ufunc, asarray
11
12 # getargspec and formatargspec were removed in Python 3.6
13 from numpy.compat import getargspec, formatargspec
14
15 __all__ = [
16 'issubclass_', 'issubsctype', 'issubdtype', 'deprecate',
17 'deprecate_with_doc', 'get_include', 'info', 'source', 'who',
18 'lookfor', 'byte_bounds', 'safe_eval'
19 ]
20
21 def get_include():
22 """
23 Return the directory that contains the NumPy \\*.h header files.
24
25 Extension modules that need to compile against NumPy should use this
26 function to locate the appropriate include directory.
27
28 Notes
29 -----
30 When using ``distutils``, for example in ``setup.py``.
31 ::
32
33 import numpy as np
34 ...
35 Extension('extension_name', ...
36 include_dirs=[np.get_include()])
37 ...
38
39 """
40 import numpy
41 if numpy.show_config is None:
42 # running from numpy source directory
43 d = os.path.join(os.path.dirname(numpy.__file__), 'core', 'include')
44 else:
45 # using installed numpy core headers
46 import numpy.core as core
47 d = os.path.join(os.path.dirname(core.__file__), 'include')
48 return d
49
50
51 def _set_function_name(func, name):
52 func.__name__ = name
53 return func
54
55
56 class _Deprecate(object):
57 """
58 Decorator class to deprecate old functions.
59
60 Refer to `deprecate` for details.
61
62 See Also
63 --------
64 deprecate
65
66 """
67
68 def __init__(self, old_name=None, new_name=None, message=None):
69 self.old_name = old_name
70 self.new_name = new_name
71 self.message = message
72
73 def __call__(self, func, *args, **kwargs):
74 """
75 Decorator call. Refer to ``decorate``.
76
77 """
78 old_name = self.old_name
79 new_name = self.new_name
80 message = self.message
81
82 import warnings
83 if old_name is None:
84 try:
85 old_name = func.__name__
86 except AttributeError:
87 old_name = func.__name__
88 if new_name is None:
89 depdoc = "`%s` is deprecated!" % old_name
90 else:
91 depdoc = "`%s` is deprecated, use `%s` instead!" % \
92 (old_name, new_name)
93
94 if message is not None:
95 depdoc += "\n" + message
96
97 def newfunc(*args,**kwds):
98 """`arrayrange` is deprecated, use `arange` instead!"""
99 warnings.warn(depdoc, DeprecationWarning)
100 return func(*args, **kwds)
101
102 newfunc = _set_function_name(newfunc, old_name)
103 doc = func.__doc__
104 if doc is None:
105 doc = depdoc
106 else:
107 doc = '\n\n'.join([depdoc, doc])
108 newfunc.__doc__ = doc
109 try:
110 d = func.__dict__
111 except AttributeError:
112 pass
113 else:
114 newfunc.__dict__.update(d)
115 return newfunc
116
117 def deprecate(*args, **kwargs):
118 """
119 Issues a DeprecationWarning, adds warning to `old_name`'s
120 docstring, rebinds ``old_name.__name__`` and returns the new
121 function object.
122
123 This function may also be used as a decorator.
124
125 Parameters
126 ----------
127 func : function
128 The function to be deprecated.
129 old_name : str, optional
130 The name of the function to be deprecated. Default is None, in
131 which case the name of `func` is used.
132 new_name : str, optional
133 The new name for the function. Default is None, in which case the
134 deprecation message is that `old_name` is deprecated. If given, the
135 deprecation message is that `old_name` is deprecated and `new_name`
136 should be used instead.
137 message : str, optional
138 Additional explanation of the deprecation. Displayed in the
139 docstring after the warning.
140
141 Returns
142 -------
143 old_func : function
144 The deprecated function.
145
146 Examples
147 --------
148 Note that ``olduint`` returns a value after printing Deprecation
149 Warning:
150
151 >>> olduint = np.deprecate(np.uint)
152 >>> olduint(6)
153 /usr/lib/python2.5/site-packages/numpy/lib/utils.py:114:
154 DeprecationWarning: uint32 is deprecated
155 warnings.warn(str1, DeprecationWarning)
156 6
157
158 """
159 # Deprecate may be run as a function or as a decorator
160 # If run as a function, we initialise the decorator class
161 # and execute its __call__ method.
162
163 if args:
164 fn = args[0]
165 args = args[1:]
166
167 # backward compatibility -- can be removed
168 # after next release
169 if 'newname' in kwargs:
170 kwargs['new_name'] = kwargs.pop('newname')
171 if 'oldname' in kwargs:
172 kwargs['old_name'] = kwargs.pop('oldname')
173
174 return _Deprecate(*args, **kwargs)(fn)
175 else:
176 return _Deprecate(*args, **kwargs)
177
178 deprecate_with_doc = lambda msg: _Deprecate(message=msg)
179
180
181 #--------------------------------------------
182 # Determine if two arrays can share memory
183 #--------------------------------------------
184
185 def byte_bounds(a):
186 """
187 Returns pointers to the end-points of an array.
188
189 Parameters
190 ----------
191 a : ndarray
192 Input array. It must conform to the Python-side of the array
193 interface.
194
195 Returns
196 -------
197 (low, high) : tuple of 2 integers
198 The first integer is the first byte of the array, the second
199 integer is just past the last byte of the array. If `a` is not
200 contiguous it will not use every byte between the (`low`, `high`)
201 values.
202
203 Examples
204 --------
205 >>> I = np.eye(2, dtype='f'); I.dtype
206 dtype('float32')
207 >>> low, high = np.byte_bounds(I)
208 >>> high - low == I.size*I.itemsize
209 True
210 >>> I = np.eye(2, dtype='G'); I.dtype
211 dtype('complex192')
212 >>> low, high = np.byte_bounds(I)
213 >>> high - low == I.size*I.itemsize
214 True
215
216 """
217 ai = a.__array_interface__
218 a_data = ai['data'][0]
219 astrides = ai['strides']
220 ashape = ai['shape']
221 bytes_a = asarray(a).dtype.itemsize
222
223 a_low = a_high = a_data
224 if astrides is None:
225 # contiguous case
226 a_high += a.size * bytes_a
227 else:
228 for shape, stride in zip(ashape, astrides):
229 if stride < 0:
230 a_low += (shape-1)*stride
231 else:
232 a_high += (shape-1)*stride
233 a_high += bytes_a
234 return a_low, a_high
235
236
237 #-----------------------------------------------------------------------------
238 # Function for output and information on the variables used.
239 #-----------------------------------------------------------------------------
240
241
242 def who(vardict=None):
243 """
244 Print the Numpy arrays in the given dictionary.
245
246 If there is no dictionary passed in or `vardict` is None then returns
247 Numpy arrays in the globals() dictionary (all Numpy arrays in the
248 namespace).
249
250 Parameters
251 ----------
252 vardict : dict, optional
253 A dictionary possibly containing ndarrays. Default is globals().
254
255 Returns
256 -------
257 out : None
258 Returns 'None'.
259
260 Notes
261 -----
262 Prints out the name, shape, bytes and type of all of the ndarrays
263 present in `vardict`.
264
265 Examples
266 --------
267 >>> a = np.arange(10)
268 >>> b = np.ones(20)
269 >>> np.who()
270 Name Shape Bytes Type
271 ===========================================================
272 a 10 40 int32
273 b 20 160 float64
274 Upper bound on total bytes = 200
275
276 >>> d = {'x': np.arange(2.0), 'y': np.arange(3.0), 'txt': 'Some str',
277 ... 'idx':5}
278 >>> np.who(d)
279 Name Shape Bytes Type
280 ===========================================================
281 y 3 24 float64
282 x 2 16 float64
283 Upper bound on total bytes = 40
284
285 """
286 if vardict is None:
287 frame = sys._getframe().f_back
288 vardict = frame.f_globals
289 sta = []
290 cache = {}
291 for name in vardict.keys():
292 if isinstance(vardict[name], ndarray):
293 var = vardict[name]
294 idv = id(var)
295 if idv in cache.keys():
296 namestr = name + " (%s)" % cache[idv]
297 original = 0
298 else:
299 cache[idv] = name
300 namestr = name
301 original = 1
302 shapestr = " x ".join(map(str, var.shape))
303 bytestr = str(var.nbytes)
304 sta.append([namestr, shapestr, bytestr, var.dtype.name,
305 original])
306
307 maxname = 0
308 maxshape = 0
309 maxbyte = 0
310 totalbytes = 0
311 for k in range(len(sta)):
312 val = sta[k]
313 if maxname < len(val[0]):
314 maxname = len(val[0])
315 if maxshape < len(val[1]):
316 maxshape = len(val[1])
317 if maxbyte < len(val[2]):
318 maxbyte = len(val[2])
319 if val[4]:
320 totalbytes += int(val[2])
321
322 if len(sta) > 0:
323 sp1 = max(10, maxname)
324 sp2 = max(10, maxshape)
325 sp3 = max(10, maxbyte)
326 prval = "Name %s Shape %s Bytes %s Type" % (sp1*' ', sp2*' ', sp3*' ')
327 print(prval + "\n" + "="*(len(prval)+5) + "\n")
328
329 for k in range(len(sta)):
330 val = sta[k]
331 print("%s %s %s %s %s %s %s" % (val[0], ' '*(sp1-len(val[0])+4),
332 val[1], ' '*(sp2-len(val[1])+5),
333 val[2], ' '*(sp3-len(val[2])+5),
334 val[3]))
335 print("\nUpper bound on total bytes = %d" % totalbytes)
336 return
337
338 #-----------------------------------------------------------------------------
339
340
341 # NOTE: pydoc defines a help function which works similarly to this
342 # except it uses a pager to take over the screen.
343
344 # combine name and arguments and split to multiple lines of width
345 # characters. End lines on a comma and begin argument list indented with
346 # the rest of the arguments.
347 def _split_line(name, arguments, width):
348 firstwidth = len(name)
349 k = firstwidth
350 newstr = name
351 sepstr = ", "
352 arglist = arguments.split(sepstr)
353 for argument in arglist:
354 if k == firstwidth:
355 addstr = ""
356 else:
357 addstr = sepstr
358 k = k + len(argument) + len(addstr)
359 if k > width:
360 k = firstwidth + 1 + len(argument)
361 newstr = newstr + ",\n" + " "*(firstwidth+2) + argument
362 else:
363 newstr = newstr + addstr + argument
364 return newstr
365
366 _namedict = None
367 _dictlist = None
368
369 # Traverse all module directories underneath globals
370 # to see if something is defined
371 def _makenamedict(module='numpy'):
372 module = __import__(module, globals(), locals(), [])
373 thedict = {module.__name__:module.__dict__}
374 dictlist = [module.__name__]
375 totraverse = [module.__dict__]
376 while True:
377 if len(totraverse) == 0:
378 break
379 thisdict = totraverse.pop(0)
380 for x in thisdict.keys():
381 if isinstance(thisdict[x], types.ModuleType):
382 modname = thisdict[x].__name__
383 if modname not in dictlist:
384 moddict = thisdict[x].__dict__
385 dictlist.append(modname)
386 totraverse.append(moddict)
387 thedict[modname] = moddict
388 return thedict, dictlist
389
390
391 def _info(obj, output=sys.stdout):
392 """Provide information about ndarray obj.
393
394 Parameters
395 ----------
396 obj: ndarray
397 Must be ndarray, not checked.
398 output:
399 Where printed output goes.
400
401 Notes
402 -----
403 Copied over from the numarray module prior to its removal.
404 Adapted somewhat as only numpy is an option now.
405
406 Called by info.
407
408 """
409 extra = ""
410 tic = ""
411 bp = lambda x: x
412 cls = getattr(obj, '__class__', type(obj))
413 nm = getattr(cls, '__name__', cls)
414 strides = obj.strides
415 endian = obj.dtype.byteorder
416
417 print("class: ", nm, file=output)
418 print("shape: ", obj.shape, file=output)
419 print("strides: ", strides, file=output)
420 print("itemsize: ", obj.itemsize, file=output)
421 print("aligned: ", bp(obj.flags.aligned), file=output)
422 print("contiguous: ", bp(obj.flags.contiguous), file=output)
423 print("fortran: ", obj.flags.fortran, file=output)
424 print(
425 "data pointer: %s%s" % (hex(obj.ctypes._as_parameter_.value), extra),
426 file=output
427 )
428 print("byteorder: ", end=' ', file=output)
429 if endian in ['|', '=']:
430 print("%s%s%s" % (tic, sys.byteorder, tic), file=output)
431 byteswap = False
432 elif endian == '>':
433 print("%sbig%s" % (tic, tic), file=output)
434 byteswap = sys.byteorder != "big"
435 else:
436 print("%slittle%s" % (tic, tic), file=output)
437 byteswap = sys.byteorder != "little"
438 print("byteswap: ", bp(byteswap), file=output)
439 print("type: %s" % obj.dtype, file=output)
440
441
442 def info(object=None, maxwidth=76, output=sys.stdout, toplevel='numpy'):
443 """
444 Get help information for a function, class, or module.
445
446 Parameters
447 ----------
448 object : object or str, optional
449 Input object or name to get information about. If `object` is a
450 numpy object, its docstring is given. If it is a string, available
451 modules are searched for matching objects. If None, information
452 about `info` itself is returned.
453 maxwidth : int, optional
454 Printing width.
455 output : file like object, optional
456 File like object that the output is written to, default is
457 ``stdout``. The object has to be opened in 'w' or 'a' mode.
458 toplevel : str, optional
459 Start search at this level.
460
461 See Also
462 --------
463 source, lookfor
464
465 Notes
466 -----
467 When used interactively with an object, ``np.info(obj)`` is equivalent
468 to ``help(obj)`` on the Python prompt or ``obj?`` on the IPython
469 prompt.
470
471 Examples
472 --------
473 >>> np.info(np.polyval) # doctest: +SKIP
474 polyval(p, x)
475 Evaluate the polynomial p at x.
476 ...
477
478 When using a string for `object` it is possible to get multiple results.
479
480 >>> np.info('fft') # doctest: +SKIP
481 *** Found in numpy ***
482 Core FFT routines
483 ...
484 *** Found in numpy.fft ***
485 fft(a, n=None, axis=-1)
486 ...
487 *** Repeat reference found in numpy.fft.fftpack ***
488 *** Total of 3 references found. ***
489
490 """
491 global _namedict, _dictlist
492 # Local import to speed up numpy's import time.
493 import pydoc
494 import inspect
495
496 if (hasattr(object, '_ppimport_importer') or
497 hasattr(object, '_ppimport_module')):
498 object = object._ppimport_module
499 elif hasattr(object, '_ppimport_attr'):
500 object = object._ppimport_attr
501
502 if object is None:
503 info(info)
504 elif isinstance(object, ndarray):
505 _info(object, output=output)
506 elif isinstance(object, str):
507 if _namedict is None:
508 _namedict, _dictlist = _makenamedict(toplevel)
509 numfound = 0
510 objlist = []
511 for namestr in _dictlist:
512 try:
513 obj = _namedict[namestr][object]
514 if id(obj) in objlist:
515 print("\n "
516 "*** Repeat reference found in %s *** " % namestr,
517 file=output
518 )
519 else:
520 objlist.append(id(obj))
521 print(" *** Found in %s ***" % namestr, file=output)
522 info(obj)
523 print("-"*maxwidth, file=output)
524 numfound += 1
525 except KeyError:
526 pass
527 if numfound == 0:
528 print("Help for %s not found." % object, file=output)
529 else:
530 print("\n "
531 "*** Total of %d references found. ***" % numfound,
532 file=output
533 )
534
535 elif inspect.isfunction(object):
536 name = object.__name__
537 arguments = formatargspec(*getargspec(object))
538
539 if len(name+arguments) > maxwidth:
540 argstr = _split_line(name, arguments, maxwidth)
541 else:
542 argstr = name + arguments
543
544 print(" " + argstr + "\n", file=output)
545 print(inspect.getdoc(object), file=output)
546
547 elif inspect.isclass(object):
548 name = object.__name__
549 arguments = "()"
550 try:
551 if hasattr(object, '__init__'):
552 arguments = formatargspec(
553 *getargspec(object.__init__.__func__)
554 )
555 arglist = arguments.split(', ')
556 if len(arglist) > 1:
557 arglist[1] = "("+arglist[1]
558 arguments = ", ".join(arglist[1:])
559 except:
560 pass
561
562 if len(name+arguments) > maxwidth:
563 argstr = _split_line(name, arguments, maxwidth)
564 else:
565 argstr = name + arguments
566
567 print(" " + argstr + "\n", file=output)
568 doc1 = inspect.getdoc(object)
569 if doc1 is None:
570 if hasattr(object, '__init__'):
571 print(inspect.getdoc(object.__init__), file=output)
572 else:
573 print(inspect.getdoc(object), file=output)
574
575 methods = pydoc.allmethods(object)
576 if methods != []:
577 print("\n\nMethods:\n", file=output)
578 for meth in methods:
579 if meth[0] == '_':
580 continue
581 thisobj = getattr(object, meth, None)
582 if thisobj is not None:
583 methstr, other = pydoc.splitdoc(
584 inspect.getdoc(thisobj) or "None"
585 )
586 print(" %s -- %s" % (meth, methstr), file=output)
587
588 elif (sys.version_info[0] < 3
589 and isinstance(object, types.InstanceType)):
590 # check for __call__ method
591 # types.InstanceType is the type of the instances of oldstyle classes
592 print("Instance of class: ", object.__class__.__name__, file=output)
593 print(file=output)
594 if hasattr(object, '__call__'):
595 arguments = formatargspec(
596 *getargspec(object.__call__.__func__)
597 )
598 arglist = arguments.split(', ')
599 if len(arglist) > 1:
600 arglist[1] = "("+arglist[1]
601 arguments = ", ".join(arglist[1:])
602 else:
603 arguments = "()"
604
605 if hasattr(object, 'name'):
606 name = "%s" % object.name
607 else:
608 name = "<name>"
609 if len(name+arguments) > maxwidth:
610 argstr = _split_line(name, arguments, maxwidth)
611 else:
612 argstr = name + arguments
613
614 print(" " + argstr + "\n", file=output)
615 doc = inspect.getdoc(object.__call__)
616 if doc is not None:
617 print(inspect.getdoc(object.__call__), file=output)
618 print(inspect.getdoc(object), file=output)
619
620 else:
621 print(inspect.getdoc(object), file=output)
622
623 elif inspect.ismethod(object):
624 name = object.__name__
625 arguments = formatargspec(
626 *getargspec(object.__func__)
627 )
628 arglist = arguments.split(', ')
629 if len(arglist) > 1:
630 arglist[1] = "("+arglist[1]
631 arguments = ", ".join(arglist[1:])
632 else:
633 arguments = "()"
634
635 if len(name+arguments) > maxwidth:
636 argstr = _split_line(name, arguments, maxwidth)
637 else:
638 argstr = name + arguments
639
640 print(" " + argstr + "\n", file=output)
641 print(inspect.getdoc(object), file=output)
642
643 elif hasattr(object, '__doc__'):
644 print(inspect.getdoc(object), file=output)
645
646
647 def source(object, output=sys.stdout):
648 """
649 Print or write to a file the source code for a Numpy object.
650
651 The source code is only returned for objects written in Python. Many
652 functions and classes are defined in C and will therefore not return
653 useful information.
654
655 Parameters
656 ----------
657 object : numpy object
658 Input object. This can be any object (function, class, module,
659 ...).
660 output : file object, optional
661         If `output` is not supplied, the source code is printed to
662         the screen (sys.stdout).  The file object must be created with
663         either write 'w' or append 'a' modes.
664
665 See Also
666 --------
667 lookfor, info
668
669 Examples
670 --------
671 >>> np.source(np.interp) #doctest: +SKIP
672 In file: /usr/lib/python2.6/dist-packages/numpy/lib/function_base.py
673 def interp(x, xp, fp, left=None, right=None):
674 \"\"\".... (full docstring printed)\"\"\"
675 if isinstance(x, (float, int, number)):
676 return compiled_interp([x], xp, fp, left, right).item()
677 else:
678 return compiled_interp(x, xp, fp, left, right)
679
680 The source code is only returned for objects written in Python.
681
682 >>> np.source(np.array) #doctest: +SKIP
683 Not available for this object.
684
685 """
686 # Local import to speed up numpy's import time.
687 import inspect
688 try:
689 print("In file: %s\n" % inspect.getsourcefile(object), file=output)
690 print(inspect.getsource(object), file=output)
691 except:
692 print("Not available for this object.", file=output)
693
694
695 # Cache for lookfor: {id(module): {name: (docstring, kind, index), ...}...}
696 # where kind: "func", "class", "module", "object"
697 # and index: index in breadth-first namespace traversal
698 _lookfor_caches = {}
699
700 # regexp whose match indicates that the string may contain a function
701 # signature
702 _function_signature_re = re.compile(r"[a-z0-9_]+\(.*[,=].*\)", re.I)
703
704 def lookfor(what, module=None, import_modules=True, regenerate=False,
705 output=None):
706 """
707 Do a keyword search on docstrings.
708
709     A list of objects that matched the search is displayed,
710 sorted by relevance. All given keywords need to be found in the
711 docstring for it to be returned as a result, but the order does
712 not matter.
713
714 Parameters
715 ----------
716 what : str
717 String containing words to look for.
718 module : str or list, optional
719 Name of module(s) whose docstrings to go through.
720 import_modules : bool, optional
721 Whether to import sub-modules in packages. Default is True.
722 regenerate : bool, optional
723 Whether to re-generate the docstring cache. Default is False.
724 output : file-like, optional
725 File-like object to write the output to. If omitted, use a pager.
726
727 See Also
728 --------
729 source, info
730
731 Notes
732 -----
733 Relevance is determined only roughly, by checking if the keywords occur
734 in the function name, at the start of a docstring, etc.
735
736 Examples
737 --------
738 >>> np.lookfor('binary representation')
739 Search results for 'binary representation'
740 ------------------------------------------
741 numpy.binary_repr
742 Return the binary representation of the input number as a string.
743 numpy.core.setup_common.long_double_representation
744 Given a binary dump as given by GNU od -b, look for long double
745 numpy.base_repr
746 Return a string representation of a number in the given base system.
747 ...
748
749 """
750 import pydoc
751
752 # Cache
753 cache = _lookfor_generate_cache(module, import_modules, regenerate)
754
755 # Search
756 # XXX: maybe using a real stemming search engine would be better?
757 found = []
758 whats = str(what).lower().split()
759 if not whats:
760 return
761
762 for name, (docstring, kind, index) in cache.items():
763 if kind in ('module', 'object'):
764 # don't show modules or objects
765 continue
766 ok = True
767 doc = docstring.lower()
768 for w in whats:
769 if w not in doc:
770 ok = False
771 break
772 if ok:
773 found.append(name)
774
775 # Relevance sort
776 # XXX: this is full Harrison-Stetson heuristics now,
777 # XXX: it probably could be improved
778
779 kind_relevance = {'func': 1000, 'class': 1000,
780 'module': -1000, 'object': -1000}
781
782 def relevance(name, docstr, kind, index):
783 r = 0
784 # do the keywords occur within the start of the docstring?
785 first_doc = "\n".join(docstr.lower().strip().split("\n")[:3])
786 r += sum([200 for w in whats if w in first_doc])
787 # do the keywords occur in the function name?
788 r += sum([30 for w in whats if w in name])
789 # is the full name long?
790 r += -len(name) * 5
791 # is the object of bad type?
792 r += kind_relevance.get(kind, -1000)
793 # is the object deep in namespace hierarchy?
794 r += -name.count('.') * 10
795 r += max(-index / 100, -100)
796 return r
797
798 def relevance_value(a):
799 return relevance(a, *cache[a])
800 found.sort(key=relevance_value)
801
802 # Pretty-print
803 s = "Search results for '%s'" % (' '.join(whats))
804 help_text = [s, "-"*len(s)]
805 for name in found[::-1]:
806 doc, kind, ix = cache[name]
807
808 doclines = [line.strip() for line in doc.strip().split("\n")
809 if line.strip()]
810
811 # find a suitable short description
812 try:
813 first_doc = doclines[0].strip()
814 if _function_signature_re.search(first_doc):
815 first_doc = doclines[1].strip()
816 except IndexError:
817 first_doc = ""
818 help_text.append("%s\n %s" % (name, first_doc))
819
820 if not found:
821 help_text.append("Nothing found.")
822
823 # Output
824 if output is not None:
825 output.write("\n".join(help_text))
826 elif len(help_text) > 10:
827 pager = pydoc.getpager()
828 pager("\n".join(help_text))
829 else:
830 print("\n".join(help_text))
831
832 def _lookfor_generate_cache(module, import_modules, regenerate):
833 """
834 Generate docstring cache for given module.
835
836 Parameters
837 ----------
838 module : str, None, module
839 Module for which to generate docstring cache
840 import_modules : bool
841 Whether to import sub-modules in packages.
842 regenerate : bool
843 Re-generate the docstring cache
844
845 Returns
846 -------
847 cache : dict {obj_full_name: (docstring, kind, index), ...}
848 Docstring cache for the module, either cached one (regenerate=False)
849 or newly generated.
850
851 """
852 global _lookfor_caches
853 # Local import to speed up numpy's import time.
854 import inspect
855
856 if sys.version_info[0] >= 3:
857 # In Python3 stderr, stdout are text files.
858 from io import StringIO
859 else:
860 from StringIO import StringIO
861
862 if module is None:
863 module = "numpy"
864
865 if isinstance(module, str):
866 try:
867 __import__(module)
868 except ImportError:
869 return {}
870 module = sys.modules[module]
871 elif isinstance(module, list) or isinstance(module, tuple):
872 cache = {}
873 for mod in module:
874 cache.update(_lookfor_generate_cache(mod, import_modules,
875 regenerate))
876 return cache
877
878 if id(module) in _lookfor_caches and not regenerate:
879 return _lookfor_caches[id(module)]
880
881 # walk items and collect docstrings
882 cache = {}
883 _lookfor_caches[id(module)] = cache
884 seen = {}
885 index = 0
886 stack = [(module.__name__, module)]
887 while stack:
888 name, item = stack.pop(0)
889 if id(item) in seen:
890 continue
891 seen[id(item)] = True
892
893 index += 1
894 kind = "object"
895
896 if inspect.ismodule(item):
897 kind = "module"
898 try:
899 _all = item.__all__
900 except AttributeError:
901 _all = None
902
903 # import sub-packages
904 if import_modules and hasattr(item, '__path__'):
905 for pth in item.__path__:
906 for mod_path in os.listdir(pth):
907 this_py = os.path.join(pth, mod_path)
908 init_py = os.path.join(pth, mod_path, '__init__.py')
909 if (os.path.isfile(this_py) and
910 mod_path.endswith('.py')):
911 to_import = mod_path[:-3]
912 elif os.path.isfile(init_py):
913 to_import = mod_path
914 else:
915 continue
916 if to_import == '__init__':
917 continue
918
919 try:
920 # Catch SystemExit, too
921 base_exc = BaseException
922 except NameError:
923 # Python 2.4 doesn't have BaseException
924 base_exc = Exception
925
926 try:
927 old_stdout = sys.stdout
928 old_stderr = sys.stderr
929 try:
930 sys.stdout = StringIO()
931 sys.stderr = StringIO()
932 __import__("%s.%s" % (name, to_import))
933 finally:
934 sys.stdout = old_stdout
935 sys.stderr = old_stderr
936 except base_exc:
937 continue
938
939 for n, v in _getmembers(item):
940 try:
941 item_name = getattr(v, '__name__', "%s.%s" % (name, n))
942 mod_name = getattr(v, '__module__', None)
943 except NameError:
944 # ref. SWIG's global cvars
945 # NameError: Unknown C global variable
946 item_name = "%s.%s" % (name, n)
947 mod_name = None
948 if '.' not in item_name and mod_name:
949 item_name = "%s.%s" % (mod_name, item_name)
950
951 if not item_name.startswith(name + '.'):
952 # don't crawl "foreign" objects
953 if isinstance(v, ufunc):
954 # ... unless they are ufuncs
955 pass
956 else:
957 continue
958 elif not (inspect.ismodule(v) or _all is None or n in _all):
959 continue
960 stack.append(("%s.%s" % (name, n), v))
961 elif inspect.isclass(item):
962 kind = "class"
963 for n, v in _getmembers(item):
964 stack.append(("%s.%s" % (name, n), v))
965 elif hasattr(item, "__call__"):
966 kind = "func"
967
968 try:
969 doc = inspect.getdoc(item)
970 except NameError:
971 # ref SWIG's NameError: Unknown C global variable
972 doc = None
973 if doc is not None:
974 cache[name] = (doc, kind, index)
975
976 return cache
977
978 def _getmembers(item):
979 import inspect
980 try:
981 members = inspect.getmembers(item)
982 except Exception:
983 members = [(x, getattr(item, x)) for x in dir(item)
984 if hasattr(item, x)]
985 return members
986
987 #-----------------------------------------------------------------------------
988
989 # The following SafeEval class and company are adapted from Michael Spencer's
990 # ASPN Python Cookbook recipe:
991 # http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/364469
992 # Accordingly it is mostly Copyright 2006 by Michael Spencer.
993 # The recipe, like most of the other ASPN Python Cookbook recipes was made
994 # available under the Python license.
995 # http://www.python.org/license
996
997 # It has been modified to:
998 # * handle unary -/+
999 # * support True/False/None
1000 # * raise SyntaxError instead of a custom exception.
1001
1002 class SafeEval(object):
1003 """
1004 Object to evaluate constant string expressions.
1005
1006 This includes strings with lists, dicts and tuples using the abstract
1007 syntax tree created by ``compiler.parse``.
1008
1009 .. deprecated:: 1.10.0
1010
1011 See Also
1012 --------
1013 safe_eval
1014
1015 """
1016 def __init__(self):
1017 # 2014-10-15, 1.10
1018 warnings.warn("SafeEval is deprecated in 1.10 and will be removed.",
1019 DeprecationWarning)
1020
1021 def visit(self, node):
1022 cls = node.__class__
1023 meth = getattr(self, 'visit' + cls.__name__, self.default)
1024 return meth(node)
1025
1026 def default(self, node):
1027 raise SyntaxError("Unsupported source construct: %s"
1028 % node.__class__)
1029
1030 def visitExpression(self, node):
1031 return self.visit(node.body)
1032
1033 def visitNum(self, node):
1034 return node.n
1035
1036 def visitStr(self, node):
1037 return node.s
1038
1039 def visitBytes(self, node):
1040 return node.s
1041
1042 def visitDict(self, node,**kw):
1043 return dict([(self.visit(k), self.visit(v))
1044 for k, v in zip(node.keys, node.values)])
1045
1046 def visitTuple(self, node):
1047 return tuple([self.visit(i) for i in node.elts])
1048
1049 def visitList(self, node):
1050 return [self.visit(i) for i in node.elts]
1051
1052 def visitUnaryOp(self, node):
1053 import ast
1054 if isinstance(node.op, ast.UAdd):
1055 return +self.visit(node.operand)
1056 elif isinstance(node.op, ast.USub):
1057 return -self.visit(node.operand)
1058 else:
1059 raise SyntaxError("Unknown unary op: %r" % node.op)
1060
1061 def visitName(self, node):
1062 if node.id == 'False':
1063 return False
1064 elif node.id == 'True':
1065 return True
1066 elif node.id == 'None':
1067 return None
1068 else:
1069 raise SyntaxError("Unknown name: %s" % node.id)
1070
1071 def visitNameConstant(self, node):
1072 return node.value
1073
1074
1075 def safe_eval(source):
1076 """
1077 Protected string evaluation.
1078
1079 Evaluate a string containing a Python literal expression without
1080 allowing the execution of arbitrary non-literal code.
1081
1082 Parameters
1083 ----------
1084 source : str
1085 The string to evaluate.
1086
1087 Returns
1088 -------
1089 obj : object
1090 The result of evaluating `source`.
1091
1092 Raises
1093 ------
1094 SyntaxError
1095 If the code has invalid Python syntax, or if it contains
1096 non-literal code.
1097
1098 Examples
1099 --------
1100 >>> np.safe_eval('1')
1101 1
1102 >>> np.safe_eval('[1, 2, 3]')
1103 [1, 2, 3]
1104 >>> np.safe_eval('{"foo": ("bar", 10.0)}')
1105 {'foo': ('bar', 10.0)}
1106
1107 >>> np.safe_eval('import os')
1108 Traceback (most recent call last):
1109 ...
1110 SyntaxError: invalid syntax
1111
1112 >>> np.safe_eval('open("/home/user/.ssh/id_dsa").read()')
1113 Traceback (most recent call last):
1114 ...
1115 SyntaxError: Unsupported source construct: compiler.ast.CallFunc
1116
1117 """
1118 # Local import to speed up numpy's import time.
1119 import ast
1120
1121 return ast.literal_eval(source)
1122 #-----------------------------------------------------------------------------
1123
[end of numpy/lib/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
numpy/numpy
|
ac21efdd9d5d69198bc0802645826a4b1240952a
|
Allow pathlib.Path arguments
`savez`, `savez_compressed`, and `load` all take `str` as an argument, but give an error when passed a `pathlib.Path` argument as a filename. Now that `pathlib.Path` is in the standard library, it would be nice if these functions could accept a `Path` instance as an argument, in addition to `str` and `file` objects.
Example:
``` python
p = Path('test.npz')
np.savez_compressed(p, data=[10])
```
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-16-304bca81cdf4> in <module>()
1 p = Path('test.npz')
----> 2 np.savez_compressed(p, data=[10])
/usr/lib/python3.5/site-packages/numpy/lib/npyio.py in savez_compressed(file, *args, **kwds)
510
511 """
--> 512 _savez(file, args, kwds, True)
513
514
/usr/lib/python3.5/site-packages/numpy/lib/npyio.py in _savez(file, args, kwds, compress)
550 fid.close()
551 fid = None
--> 552 zipf.write(tmpfile, arcname=fname)
553 finally:
554 if fid:
/usr/lib/python3.5/zipfile.py in write(self, filename, arcname, compress_type)
1481 zip64 = self._allowZip64 and \
1482 zinfo.file_size * 1.05 > ZIP64_LIMIT
-> 1483 self.fp.write(zinfo.FileHeader(zip64))
1484 file_size = 0
1485 while 1:
/usr/lib/python3.5/zipfile.py in write(self, data)
675
676 def write(self, data):
--> 677 n = self.fp.write(data)
678 self.offset += n
679 return n
AttributeError: 'PosixPath' object has no attribute 'write'
```
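(Editor's note, not part of the original report: until `Path` objects are supported natively, the usual workaround is to convert the path to `str` first. A minimal sketch, reusing the same `test.npz` example:)

```python
from pathlib import Path
import numpy as np

p = Path('test.npz')
# Passing str(p) sidesteps the AttributeError above: zipfile.ZipFile then
# opens the path itself instead of treating the Path object as a file handle.
np.savez_compressed(str(p), data=[10])

npz = np.load(str(p))
print(npz['data'])  # -> [10]
npz.close()
```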
|
+1
shouldn't be hard to fix, PR very welcome
I'm happy to work on this.
great!
|
2015-11-10T02:09:16Z
|
<patch>
diff --git a/numpy/compat/py3k.py b/numpy/compat/py3k.py
--- a/numpy/compat/py3k.py
+++ b/numpy/compat/py3k.py
@@ -7,9 +7,13 @@
__all__ = ['bytes', 'asbytes', 'isfileobj', 'getexception', 'strchar',
'unicode', 'asunicode', 'asbytes_nested', 'asunicode_nested',
'asstr', 'open_latin1', 'long', 'basestring', 'sixu',
- 'integer_types']
+ 'integer_types', 'is_pathlib_path', 'Path']
import sys
+try:
+ from pathlib import Path
+except ImportError:
+ Path = None
if sys.version_info[0] >= 3:
import io
@@ -86,3 +90,10 @@ def asunicode_nested(x):
return [asunicode_nested(y) for y in x]
else:
return asunicode(x)
+
+
+def is_pathlib_path(obj):
+ """
+ Check whether obj is a pathlib.Path object.
+ """
+ return Path is not None and isinstance(obj, Path)
diff --git a/numpy/core/memmap.py b/numpy/core/memmap.py
--- a/numpy/core/memmap.py
+++ b/numpy/core/memmap.py
@@ -2,7 +2,7 @@
import numpy as np
from .numeric import uint8, ndarray, dtype
-from numpy.compat import long, basestring
+from numpy.compat import long, basestring, is_pathlib_path
__all__ = ['memmap']
@@ -39,7 +39,7 @@ class memmap(ndarray):
Parameters
----------
- filename : str or file-like object
+ filename : str, file-like object, or pathlib.Path instance
The file name or file object to be used as the array data buffer.
dtype : data-type, optional
The data-type used to interpret the file contents.
@@ -82,7 +82,7 @@ class memmap(ndarray):
Attributes
----------
- filename : str
+ filename : str or pathlib.Path instance
Path to the mapped file.
offset : int
Offset position in the file.
@@ -213,6 +213,9 @@ def __new__(subtype, filename, dtype=uint8, mode='r+', offset=0,
if hasattr(filename, 'read'):
fid = filename
own_file = False
+ elif is_pathlib_path(filename):
+ fid = filename.open((mode == 'c' and 'r' or mode)+'b')
+ own_file = True
else:
fid = open(filename, (mode == 'c' and 'r' or mode)+'b')
own_file = True
@@ -267,6 +270,8 @@ def __new__(subtype, filename, dtype=uint8, mode='r+', offset=0,
if isinstance(filename, basestring):
self.filename = os.path.abspath(filename)
+ elif is_pathlib_path(filename):
+ self.filename = filename.resolve()
# py3 returns int for TemporaryFile().name
elif (hasattr(filename, "name") and
isinstance(filename.name, basestring)):
diff --git a/numpy/lib/npyio.py b/numpy/lib/npyio.py
--- a/numpy/lib/npyio.py
+++ b/numpy/lib/npyio.py
@@ -14,12 +14,12 @@
from numpy.core.multiarray import packbits, unpackbits
from ._iotools import (
LineSplitter, NameValidator, StringConverter, ConverterError,
- ConverterLockError, ConversionWarning, _is_string_like, has_nested_fields,
- flatten_dtype, easy_dtype, _bytes_to_name
+ ConverterLockError, ConversionWarning, _is_string_like,
+ has_nested_fields, flatten_dtype, easy_dtype, _bytes_to_name
)
from numpy.compat import (
- asbytes, asstr, asbytes_nested, bytes, basestring, unicode
+ asbytes, asstr, asbytes_nested, bytes, basestring, unicode, is_pathlib_path
)
if sys.version_info[0] >= 3:
@@ -86,10 +86,19 @@ def __dir__(self):
return object.__getattribute__(self, '_obj').keys()
-def zipfile_factory(*args, **kwargs):
+def zipfile_factory(file, *args, **kwargs):
+ """
+ Create a ZipFile.
+
+ Allows for Zip64, and the `file` argument can accept file, str, or
+ pathlib.Path objects. `args` and `kwargs` are passed to the zipfile.ZipFile
+ constructor.
+ """
+ if is_pathlib_path(file):
+ file = str(file)
import zipfile
kwargs['allowZip64'] = True
- return zipfile.ZipFile(*args, **kwargs)
+ return zipfile.ZipFile(file, *args, **kwargs)
class NpzFile(object):
@@ -261,7 +270,7 @@ def load(file, mmap_mode=None, allow_pickle=True, fix_imports=True,
Parameters
----------
- file : file-like object or string
+ file : file-like object, string, or pathlib.Path
The file to read. File-like objects must support the
``seek()`` and ``read()`` methods. Pickled files require that the
file-like object support the ``readline()`` method as well.
@@ -355,12 +364,13 @@ def load(file, mmap_mode=None, allow_pickle=True, fix_imports=True,
memmap([4, 5, 6])
"""
- import gzip
-
own_fid = False
if isinstance(file, basestring):
fid = open(file, "rb")
own_fid = True
+ elif is_pathlib_path(file):
+ fid = file.open("rb")
+ own_fid = True
else:
fid = file
@@ -425,9 +435,9 @@ def save(file, arr, allow_pickle=True, fix_imports=True):
Parameters
----------
- file : file or str
+ file : file, str, or pathlib.Path
File or filename to which the data is saved. If file is a file-object,
- then the filename is unchanged. If file is a string, a ``.npy``
+ then the filename is unchanged. If file is a string or Path, a ``.npy``
extension will be appended to the file name if it does not already
have one.
allow_pickle : bool, optional
@@ -476,6 +486,11 @@ def save(file, arr, allow_pickle=True, fix_imports=True):
file = file + '.npy'
fid = open(file, "wb")
own_fid = True
+ elif is_pathlib_path(file):
+ if not file.name.endswith('.npy'):
+ file = file.parent / (file.name + '.npy')
+ fid = file.open("wb")
+ own_fid = True
else:
fid = file
@@ -507,8 +522,9 @@ def savez(file, *args, **kwds):
----------
file : str or file
Either the file name (string) or an open file (file-like object)
- where the data will be saved. If file is a string, the ``.npz``
- extension will be appended to the file name if it is not already there.
+ where the data will be saved. If file is a string or a Path, the
+ ``.npz`` extension will be appended to the file name if it is not
+ already there.
args : Arguments, optional
Arrays to save to the file. Since it is not possible for Python to
know the names of the arrays outside `savez`, the arrays will be saved
@@ -610,6 +626,9 @@ def _savez(file, args, kwds, compress, allow_pickle=True, pickle_kwargs=None):
if isinstance(file, basestring):
if not file.endswith('.npz'):
file = file + '.npz'
+ elif is_pathlib_path(file):
+ if not file.name.endswith('.npz'):
+ file = file.parent / (file.name + '.npz')
namedict = kwds
for i, val in enumerate(args):
@@ -695,7 +714,7 @@ def loadtxt(fname, dtype=float, comments='#', delimiter=None,
Parameters
----------
- fname : file or str
+ fname : file, str, or pathlib.Path
File, filename, or generator to read. If the filename extension is
``.gz`` or ``.bz2``, the file is first decompressed. Note that
generators should return byte strings for Python 3k.
@@ -822,6 +841,8 @@ def loadtxt(fname, dtype=float, comments='#', delimiter=None,
fown = False
try:
+ if is_pathlib_path(fname):
+ fname = str(fname)
if _is_string_like(fname):
fown = True
if fname.endswith('.gz'):
@@ -1117,6 +1138,8 @@ def savetxt(fname, X, fmt='%.18e', delimiter=' ', newline='\n', header='',
delimiter = asstr(delimiter)
own_fh = False
+ if is_pathlib_path(fname):
+ fname = str(fname)
if _is_string_like(fname):
own_fh = True
if fname.endswith('.gz'):
@@ -1302,7 +1325,7 @@ def genfromtxt(fname, dtype=float, comments='#', delimiter=None,
Parameters
----------
- fname : file, str, list of str, generator
+ fname : file, str, pathlib.Path, list of str, generator
File, filename, list, or generator to read. If the filename
extension is `.gz` or `.bz2`, the file is first decompressed. Mote
that generators must return byte strings in Python 3k. The strings
@@ -1477,6 +1500,8 @@ def genfromtxt(fname, dtype=float, comments='#', delimiter=None,
# Initialize the filehandle, the LineSplitter and the NameValidator
own_fhd = False
try:
+ if is_pathlib_path(fname):
+ fname = str(fname)
if isinstance(fname, basestring):
if sys.version_info[0] == 2:
fhd = iter(np.lib._datasource.open(fname, 'rbU'))
</patch>
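(Editor's note: a minimal sketch of the behaviour the patch above is meant to enable, reusing the reproduction from the issue; `savez_compressed` and `load` now accept a `Path` directly:)

```python
from pathlib import Path
import numpy as np

p = Path('test.npz')
np.savez_compressed(p, data=[10])  # with the patch applied, no AttributeError
npz = np.load(p)                   # load() accepts pathlib.Path as well
print(npz['data'])                 # -> [10]
npz.close()
```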
|
[]
|
[]
| |||
docker__compose-1374
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
APIError: 500 Server Error on first docker-compose up / build
cc @aanand
I noticed this when testing the most recent RC (4)
Steps to reproduce:
1. Clone https://github.com/nathanleclaire/hubfwd
2. Run `docker-compose up` in the repo
Expected behavior: It doesn't blow up
Actual behavior: It blows up
On the first `up` and the first `up` only, so build smells suspicious.
```
Creating hubfwd_app_1...
Building app...
Step 0 : FROM golang:1.4.2
---> 121a93c90463
Step 1 : RUN go get -u github.com/codegangsta/negroni
---> Running in 5e2161a172f9
---> 623f1c94741b
Removing intermediate container 5e2161a172f9
Step 2 : RUN go get -u github.com/gorilla/mux
---> Running in c74924a6c8fd
---> 7923dd360f79
Removing intermediate container c74924a6c8fd
Step 3 : RUN go get -u github.com/Sirupsen/logrus
---> Running in 93443d6cf298
---> 3ae5e3801312
Removing intermediate container 93443d6cf298
Step 4 : RUN mkdir -p /go/src/github.com/nathanleclaire/hubfwd
---> Running in 8deddcbb0f1d
---> 6586dfbe5b2e
Removing intermediate container 8deddcbb0f1d
Step 5 : WORKDIR /go/src/github.com/nathanleclaire/hubfwd
---> Running in bb42cbdf1032
---> 0d824f6e8519
Removing intermediate container bb42cbdf1032
Step 6 : COPY . /go/src/github.com/nathanleclaire/hubfwd
---> ad6983d66cf7
Removing intermediate container e16e62829fb7
Step 7 : CMD go build
---> Running in 550e4ab79b39
---> 15ebeafc0600
Removing intermediate container 550e4ab79b39
Successfully built 15ebeafc0600
Attaching to hubfwd_app_1
Exception in thread Thread-1:
Traceback (most recent call last):
File "/compose/build/docker-compose/out00-PYZ.pyz/threading", line 810, in __bootstrap_inner
File "/compose/build/docker-compose/out00-PYZ.pyz/threading", line 763, in run
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.multiplexer", line 41, in _enqueue_output
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.log_printer", line 59, in _make_log_generator
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.utils", line 77, in split_buffer
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 199, in _multiplexed_response_stream_helper
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 143, in _get_raw_response_socket
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 95, in _raise_for_status
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
```
I'm on OSX using a VM created by `docker-machine` in VirtualBox (boot2docker).
</issue>
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 *(Previously known as Fig)*
4
5 Compose is a tool for defining and running complex applications with Docker.
6 With Compose, you define a multi-container application in a single file, then
7 spin your application up in a single command which does everything that needs to
8 be done to get it running.
9
10 Compose is great for development environments, staging servers, and CI. We don't
11 recommend that you use it in production yet.
12
13 Using Compose is basically a three-step process.
14
15 1. Define your app's environment with a `Dockerfile` so it can be
16 reproduced anywhere.
17 2. Define the services that make up your app in `docker-compose.yml` so
18 they can be run together in an isolated environment:
19 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
20
21 A `docker-compose.yml` looks like this:
22
23 web:
24 build: .
25 ports:
26 - "5000:5000"
27 volumes:
28 - .:/code
29 links:
30 - redis
31 redis:
32 image: redis
33
34 Compose has commands for managing the whole lifecycle of your application:
35
36 * Start, stop and rebuild services
37 * View the status of running services
38 * Stream the log output of running services
39 * Run a one-off command on a service
40
41 Installation and documentation
42 ------------------------------
43
44 - Full documentation is available on [Docker's website](http://docs.docker.com/compose/).
45 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
46
47 Contributing
48 ------------
49
50 [](http://jenkins.dockerproject.com/job/Compose%20Master/)
51
52 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
53
54
[end of README.md]
[start of compose/__init__.py]
1 from __future__ import unicode_literals
2 from .service import Service # noqa:flake8
3
4 __version__ = '1.3.0dev'
5
[end of compose/__init__.py]
[start of compose/cli/__init__.py]
1
[end of compose/cli/__init__.py]
[start of compose/cli/errors.py]
1 from __future__ import absolute_import
2 from textwrap import dedent
3
4
5 class UserError(Exception):
6 def __init__(self, msg):
7 self.msg = dedent(msg).strip()
8
9 def __unicode__(self):
10 return self.msg
11
12 __str__ = __unicode__
13
14
15 class DockerNotFoundMac(UserError):
16 def __init__(self):
17 super(DockerNotFoundMac, self).__init__("""
18 Couldn't connect to Docker daemon. You might need to install docker-osx:
19
20 https://github.com/noplay/docker-osx
21 """)
22
23
24 class DockerNotFoundUbuntu(UserError):
25 def __init__(self):
26 super(DockerNotFoundUbuntu, self).__init__("""
27 Couldn't connect to Docker daemon. You might need to install Docker:
28
29 http://docs.docker.io/en/latest/installation/ubuntulinux/
30 """)
31
32
33 class DockerNotFoundGeneric(UserError):
34 def __init__(self):
35 super(DockerNotFoundGeneric, self).__init__("""
36 Couldn't connect to Docker daemon. You might need to install Docker:
37
38 http://docs.docker.io/en/latest/installation/
39 """)
40
41
42 class ConnectionErrorBoot2Docker(UserError):
43 def __init__(self):
44 super(ConnectionErrorBoot2Docker, self).__init__("""
45 Couldn't connect to Docker daemon - you might need to run `boot2docker up`.
46 """)
47
48
49 class ConnectionErrorGeneric(UserError):
50 def __init__(self, url):
51 super(ConnectionErrorGeneric, self).__init__("""
52 Couldn't connect to Docker daemon at %s - is it running?
53
54 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
55 """ % url)
56
57
58 class ComposeFileNotFound(UserError):
59 def __init__(self, supported_filenames):
60 super(ComposeFileNotFound, self).__init__("""
61 Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
62
63 Supported filenames: %s
64 """ % ", ".join(supported_filenames))
65
[end of compose/cli/errors.py]
[start of compose/cli/log_printer.py]
1 from __future__ import unicode_literals
2 from __future__ import absolute_import
3 import sys
4
5 from itertools import cycle
6
7 from .multiplexer import Multiplexer, STOP
8 from . import colors
9 from .utils import split_buffer
10
11
12 class LogPrinter(object):
13 def __init__(self, containers, attach_params=None, output=sys.stdout, monochrome=False):
14 self.containers = containers
15 self.attach_params = attach_params or {}
16 self.prefix_width = self._calculate_prefix_width(containers)
17 self.generators = self._make_log_generators(monochrome)
18 self.output = output
19
20 def run(self):
21 mux = Multiplexer(self.generators)
22 for line in mux.loop():
23 self.output.write(line)
24
25 def _calculate_prefix_width(self, containers):
26 """
27 Calculate the maximum width of container names so we can make the log
28 prefixes line up like so:
29
30 db_1 | Listening
31 web_1 | Listening
32 """
33 prefix_width = 0
34 for container in containers:
35 prefix_width = max(prefix_width, len(container.name_without_project))
36 return prefix_width
37
38 def _make_log_generators(self, monochrome):
39 color_fns = cycle(colors.rainbow())
40 generators = []
41
42 def no_color(text):
43 return text
44
45 for container in self.containers:
46 if monochrome:
47 color_fn = no_color
48 else:
49 color_fn = next(color_fns)
50 generators.append(self._make_log_generator(container, color_fn))
51
52 return generators
53
54 def _make_log_generator(self, container, color_fn):
55 prefix = color_fn(self._generate_prefix(container)).encode('utf-8')
56 # Attach to container before log printer starts running
57 line_generator = split_buffer(self._attach(container), '\n')
58
59 for line in line_generator:
60 yield prefix + line
61
62 exit_code = container.wait()
63 yield color_fn("%s exited with code %s\n" % (container.name, exit_code))
64 yield STOP
65
66 def _generate_prefix(self, container):
67 """
68 Generate the prefix for a log line without colour
69 """
70 name = container.name_without_project
71 padding = ' ' * (self.prefix_width - len(name))
72 return ''.join([name, padding, ' | '])
73
74 def _attach(self, container):
75 params = {
76 'stdout': True,
77 'stderr': True,
78 'stream': True,
79 }
80 params.update(self.attach_params)
81 params = dict((name, 1 if value else 0) for (name, value) in list(params.items()))
82 return container.attach(**params)
83
[end of compose/cli/log_printer.py]
[start of compose/cli/main.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3 from inspect import getdoc
4 from operator import attrgetter
5 import logging
6 import re
7 import signal
8 import sys
9
10 from docker.errors import APIError
11 import dockerpty
12
13 from .. import __version__
14 from ..project import NoSuchService, ConfigurationError
15 from ..service import BuildError, CannotBeScaledError
16 from ..config import parse_environment
17 from .command import Command
18 from .docopt_command import NoSuchCommand
19 from .errors import UserError
20 from .formatter import Formatter
21 from .log_printer import LogPrinter
22 from .utils import yesno
23
24 log = logging.getLogger(__name__)
25
26
27 def main():
28 setup_logging()
29 try:
30 command = TopLevelCommand()
31 command.sys_dispatch()
32 except KeyboardInterrupt:
33 log.error("\nAborting.")
34 sys.exit(1)
35 except (UserError, NoSuchService, ConfigurationError) as e:
36 log.error(e.msg)
37 sys.exit(1)
38 except NoSuchCommand as e:
39 log.error("No such command: %s", e.command)
40 log.error("")
41 log.error("\n".join(parse_doc_section("commands:", getdoc(e.supercommand))))
42 sys.exit(1)
43 except APIError as e:
44 log.error(e.explanation)
45 sys.exit(1)
46 except BuildError as e:
47 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
48 sys.exit(1)
49
50
51 def setup_logging():
52 console_handler = logging.StreamHandler(sys.stderr)
53 console_handler.setFormatter(logging.Formatter())
54 console_handler.setLevel(logging.INFO)
55 root_logger = logging.getLogger()
56 root_logger.addHandler(console_handler)
57 root_logger.setLevel(logging.DEBUG)
58
59 # Disable requests logging
60 logging.getLogger("requests").propagate = False
61
62
63 # stolen from docopt master
64 def parse_doc_section(name, source):
65 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
66 re.IGNORECASE | re.MULTILINE)
67 return [s.strip() for s in pattern.findall(source)]
68
69
70 class TopLevelCommand(Command):
71 """Fast, isolated development environments using Docker.
72
73 Usage:
74 docker-compose [options] [COMMAND] [ARGS...]
75 docker-compose -h|--help
76
77 Options:
78 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
79 -p, --project-name NAME Specify an alternate project name (default: directory name)
80 --verbose Show more output
81 -v, --version Print version and exit
82
83 Commands:
84 build Build or rebuild services
85 help Get help on a command
86 kill Kill containers
87 logs View output from containers
88 port Print the public port for a port binding
89 ps List containers
90 pull Pulls service images
91 restart Restart services
92 rm Remove stopped containers
93 run Run a one-off command
94 scale Set number of containers for a service
95 start Start services
96 stop Stop services
97 up Create and start containers
98
99 """
100 def docopt_options(self):
101 options = super(TopLevelCommand, self).docopt_options()
102 options['version'] = "docker-compose %s" % __version__
103 return options
104
105 def build(self, project, options):
106 """
107 Build or rebuild services.
108
109 Services are built once and then tagged as `project_service`,
110 e.g. `composetest_db`. If you change a service's `Dockerfile` or the
111 contents of its build directory, you can run `compose build` to rebuild it.
112
113 Usage: build [options] [SERVICE...]
114
115 Options:
116 --no-cache Do not use cache when building the image.
117 """
118 no_cache = bool(options.get('--no-cache', False))
119 project.build(service_names=options['SERVICE'], no_cache=no_cache)
120
121 def help(self, project, options):
122 """
123 Get help on a command.
124
125 Usage: help COMMAND
126 """
127 command = options['COMMAND']
128 if not hasattr(self, command):
129 raise NoSuchCommand(command, self)
130 raise SystemExit(getdoc(getattr(self, command)))
131
132 def kill(self, project, options):
133 """
134 Force stop service containers.
135
136 Usage: kill [options] [SERVICE...]
137
138 Options:
139 -s SIGNAL SIGNAL to send to the container.
140 Default signal is SIGKILL.
141 """
142 signal = options.get('-s', 'SIGKILL')
143
144 project.kill(service_names=options['SERVICE'], signal=signal)
145
146 def logs(self, project, options):
147 """
148 View output from containers.
149
150 Usage: logs [options] [SERVICE...]
151
152 Options:
153 --no-color Produce monochrome output.
154 """
155 containers = project.containers(service_names=options['SERVICE'], stopped=True)
156
157 monochrome = options['--no-color']
158 print("Attaching to", list_containers(containers))
159 LogPrinter(containers, attach_params={'logs': True}, monochrome=monochrome).run()
160
161 def port(self, project, options):
162 """
163 Print the public port for a port binding.
164
165 Usage: port [options] SERVICE PRIVATE_PORT
166
167 Options:
168 --protocol=proto tcp or udp (defaults to tcp)
169 --index=index index of the container if there are multiple
170 instances of a service (defaults to 1)
171 """
172 service = project.get_service(options['SERVICE'])
173 try:
174 container = service.get_container(number=options.get('--index') or 1)
175 except ValueError as e:
176 raise UserError(str(e))
177 print(container.get_local_port(
178 options['PRIVATE_PORT'],
179 protocol=options.get('--protocol') or 'tcp') or '')
180
181 def ps(self, project, options):
182 """
183 List containers.
184
185 Usage: ps [options] [SERVICE...]
186
187 Options:
188 -q Only display IDs
189 """
190 containers = sorted(
191 project.containers(service_names=options['SERVICE'], stopped=True) +
192 project.containers(service_names=options['SERVICE'], one_off=True),
193 key=attrgetter('name'))
194
195 if options['-q']:
196 for container in containers:
197 print(container.id)
198 else:
199 headers = [
200 'Name',
201 'Command',
202 'State',
203 'Ports',
204 ]
205 rows = []
206 for container in containers:
207 command = container.human_readable_command
208 if len(command) > 30:
209 command = '%s ...' % command[:26]
210 rows.append([
211 container.name,
212 command,
213 container.human_readable_state,
214 container.human_readable_ports,
215 ])
216 print(Formatter().table(headers, rows))
217
218 def pull(self, project, options):
219 """
220 Pulls images for services.
221
222 Usage: pull [options] [SERVICE...]
223
224 Options:
225 --allow-insecure-ssl Allow insecure connections to the docker
226 registry
227 """
228 insecure_registry = options['--allow-insecure-ssl']
229 project.pull(
230 service_names=options['SERVICE'],
231 insecure_registry=insecure_registry
232 )
233
234 def rm(self, project, options):
235 """
236 Remove stopped service containers.
237
238 Usage: rm [options] [SERVICE...]
239
240 Options:
241 -f, --force Don't ask to confirm removal
242 -v Remove volumes associated with containers
243 """
244 all_containers = project.containers(service_names=options['SERVICE'], stopped=True)
245 stopped_containers = [c for c in all_containers if not c.is_running]
246
247 if len(stopped_containers) > 0:
248 print("Going to remove", list_containers(stopped_containers))
249 if options.get('--force') \
250 or yesno("Are you sure? [yN] ", default=False):
251 project.remove_stopped(
252 service_names=options['SERVICE'],
253 v=options.get('-v', False)
254 )
255 else:
256 print("No stopped containers")
257
258 def run(self, project, options):
259 """
260 Run a one-off command on a service.
261
262 For example:
263
264 $ docker-compose run web python manage.py shell
265
266 By default, linked services will be started, unless they are already
267 running. If you do not want to start linked services, use
268 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
269
270 Usage: run [options] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
271
272 Options:
273 --allow-insecure-ssl Allow insecure connections to the docker
274 registry
275 -d Detached mode: Run container in the background, print
276 new container name.
277 --entrypoint CMD Override the entrypoint of the image.
278 -e KEY=VAL Set an environment variable (can be used multiple times)
279 -u, --user="" Run as specified username or uid
280 --no-deps Don't start linked services.
281 --rm Remove container after run. Ignored in detached mode.
282 --service-ports Run command with the service's ports enabled and mapped
283 to the host.
284 -T Disable pseudo-tty allocation. By default `docker-compose run`
285 allocates a TTY.
286 """
287 service = project.get_service(options['SERVICE'])
288
289 insecure_registry = options['--allow-insecure-ssl']
290
291 if not options['--no-deps']:
292 deps = service.get_linked_names()
293
294 if len(deps) > 0:
295 project.up(
296 service_names=deps,
297 start_deps=True,
298 recreate=False,
299 insecure_registry=insecure_registry,
300 detach=options['-d']
301 )
302
303 tty = True
304 if options['-d'] or options['-T'] or not sys.stdin.isatty():
305 tty = False
306
307 if options['COMMAND']:
308 command = [options['COMMAND']] + options['ARGS']
309 else:
310 command = service.options.get('command')
311
312 container_options = {
313 'command': command,
314 'tty': tty,
315 'stdin_open': not options['-d'],
316 'detach': options['-d'],
317 }
318
319 if options['-e']:
320 # Merge environment from config with -e command line
321 container_options['environment'] = dict(
322 parse_environment(service.options.get('environment')),
323 **parse_environment(options['-e']))
324
325 if options['--entrypoint']:
326 container_options['entrypoint'] = options.get('--entrypoint')
327
328 if options['--rm']:
329 container_options['restart'] = None
330
331 if options['--user']:
332 container_options['user'] = options.get('--user')
333
334 if not options['--service-ports']:
335 container_options['ports'] = []
336
337 container = service.create_container(
338 one_off=True,
339 insecure_registry=insecure_registry,
340 **container_options
341 )
342
343 if options['-d']:
344 service.start_container(container)
345 print(container.name)
346 else:
347 service.start_container(container)
348 dockerpty.start(project.client, container.id, interactive=not options['-T'])
349 exit_code = container.wait()
350 if options['--rm']:
351 log.info("Removing %s..." % container.name)
352 project.client.remove_container(container.id)
353 sys.exit(exit_code)
354
355 def scale(self, project, options):
356 """
357 Set number of containers to run for a service.
358
359 Numbers are specified in the form `service=num` as arguments.
360 For example:
361
362 $ docker-compose scale web=2 worker=3
363
364 Usage: scale [SERVICE=NUM...]
365 """
366 for s in options['SERVICE=NUM']:
367 if '=' not in s:
368 raise UserError('Arguments to scale should be in the form service=num')
369 service_name, num = s.split('=', 1)
370 try:
371 num = int(num)
372 except ValueError:
373 raise UserError('Number of containers for service "%s" is not a '
374 'number' % service_name)
375 try:
376 project.get_service(service_name).scale(num)
377 except CannotBeScaledError:
378 raise UserError(
379 'Service "%s" cannot be scaled because it specifies a port '
380 'on the host. If multiple containers for this service were '
381 'created, the port would clash.\n\nRemove the ":" from the '
382 'port definition in docker-compose.yml so Docker can choose a random '
383 'port for each container.' % service_name)
384
385 def start(self, project, options):
386 """
387 Start existing containers.
388
389 Usage: start [SERVICE...]
390 """
391 project.start(service_names=options['SERVICE'])
392
393 def stop(self, project, options):
394 """
395 Stop running containers without removing them.
396
397 They can be started again with `docker-compose start`.
398
399 Usage: stop [options] [SERVICE...]
400
401 Options:
402 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
403 (default: 10)
404 """
405 timeout = options.get('--timeout')
406 params = {} if timeout is None else {'timeout': int(timeout)}
407 project.stop(service_names=options['SERVICE'], **params)
408
409 def restart(self, project, options):
410 """
411 Restart running containers.
412
413 Usage: restart [options] [SERVICE...]
414
415 Options:
416 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
417 (default: 10)
418 """
419 timeout = options.get('--timeout')
420 params = {} if timeout is None else {'timeout': int(timeout)}
421 project.restart(service_names=options['SERVICE'], **params)
422
423 def up(self, project, options):
424 """
425 Build, (re)create, start and attach to containers for a service.
426
427 By default, `docker-compose up` will aggregate the output of each container, and
428 when it exits, all containers will be stopped. If you run `docker-compose up -d`,
429 it'll start the containers in the background and leave them running.
430
431 If there are existing containers for a service, `docker-compose up` will stop
432 and recreate them (preserving mounted volumes with volumes-from),
433 so that changes in `docker-compose.yml` are picked up. If you do not want existing
434 containers to be recreated, `docker-compose up --no-recreate` will re-use existing
435 containers.
436
437 Usage: up [options] [SERVICE...]
438
439 Options:
440 --allow-insecure-ssl Allow insecure connections to the docker
441 registry
442 -d Detached mode: Run containers in the background,
443 print new container names.
444 --no-color Produce monochrome output.
445 --no-deps Don't start linked services.
446 --no-recreate If containers already exist, don't recreate them.
447 --no-build Don't build an image, even if it's missing
448 -t, --timeout TIMEOUT When attached, use this timeout in seconds
449 for the shutdown. (default: 10)
450
451 """
452 insecure_registry = options['--allow-insecure-ssl']
453 detached = options['-d']
454
455 monochrome = options['--no-color']
456
457 start_deps = not options['--no-deps']
458 recreate = not options['--no-recreate']
459 service_names = options['SERVICE']
460
461 project.up(
462 service_names=service_names,
463 start_deps=start_deps,
464 recreate=recreate,
465 insecure_registry=insecure_registry,
466 detach=detached,
467 do_build=not options['--no-build'],
468 )
469
470 to_attach = [c for s in project.get_services(service_names) for c in s.containers()]
471
472 if not detached:
473 print("Attaching to", list_containers(to_attach))
474 log_printer = LogPrinter(to_attach, attach_params={"logs": True}, monochrome=monochrome)
475
476 try:
477 log_printer.run()
478 finally:
479 def handler(signal, frame):
480 project.kill(service_names=service_names)
481 sys.exit(0)
482 signal.signal(signal.SIGINT, handler)
483
484 print("Gracefully stopping... (press Ctrl+C again to force)")
485 timeout = options.get('--timeout')
486 params = {} if timeout is None else {'timeout': int(timeout)}
487 project.stop(service_names=service_names, **params)
488
489
490 def list_containers(containers):
491 return ", ".join(c.name for c in containers)
492
[end of compose/cli/main.py]
[start of compose/service.py]
1 from __future__ import unicode_literals
2 from __future__ import absolute_import
3 from collections import namedtuple
4 import logging
5 import re
6 from operator import attrgetter
7 import sys
8 import six
9
10 from docker.errors import APIError
11 from docker.utils import create_host_config
12
13 from .config import DOCKER_CONFIG_KEYS
14 from .container import Container, get_container_name
15 from .progress_stream import stream_output, StreamOutputError
16
17 log = logging.getLogger(__name__)
18
19
20 DOCKER_START_KEYS = [
21 'cap_add',
22 'cap_drop',
23 'dns',
24 'dns_search',
25 'env_file',
26 'extra_hosts',
27 'net',
28 'pid',
29 'privileged',
30 'restart',
31 ]
32
33 VALID_NAME_CHARS = '[a-zA-Z0-9]'
34
35
36 class BuildError(Exception):
37 def __init__(self, service, reason):
38 self.service = service
39 self.reason = reason
40
41
42 class CannotBeScaledError(Exception):
43 pass
44
45
46 class ConfigError(ValueError):
47 pass
48
49
50 VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')
51
52
53 ServiceName = namedtuple('ServiceName', 'project service number')
54
55
56 class Service(object):
57 def __init__(self, name, client=None, project='default', links=None, external_links=None, volumes_from=None, net=None, **options):
58 if not re.match('^%s+$' % VALID_NAME_CHARS, name):
59 raise ConfigError('Invalid service name "%s" - only %s are allowed' % (name, VALID_NAME_CHARS))
60 if not re.match('^%s+$' % VALID_NAME_CHARS, project):
61 raise ConfigError('Invalid project name "%s" - only %s are allowed' % (project, VALID_NAME_CHARS))
62 if 'image' in options and 'build' in options:
63 raise ConfigError('Service %s has both an image and build path specified. A service can either be built to image or use an existing image, not both.' % name)
64 if 'image' not in options and 'build' not in options:
65 raise ConfigError('Service %s has neither an image nor a build path specified. Exactly one must be provided.' % name)
66
67 self.name = name
68 self.client = client
69 self.project = project
70 self.links = links or []
71 self.external_links = external_links or []
72 self.volumes_from = volumes_from or []
73 self.net = net or None
74 self.options = options
75
76 def containers(self, stopped=False, one_off=False):
77 return [Container.from_ps(self.client, container)
78 for container in self.client.containers(all=stopped)
79 if self.has_container(container, one_off=one_off)]
80
81 def has_container(self, container, one_off=False):
82 """Return True if `container` was created to fulfill this service."""
83 name = get_container_name(container)
84 if not name or not is_valid_name(name, one_off):
85 return False
86 project, name, _number = parse_name(name)
87 return project == self.project and name == self.name
88
89 def get_container(self, number=1):
90 """Return a :class:`compose.container.Container` for this service. The
91 container must be active, and match `number`.
92 """
93 for container in self.client.containers():
94 if not self.has_container(container):
95 continue
96 _, _, container_number = parse_name(get_container_name(container))
97 if container_number == number:
98 return Container.from_ps(self.client, container)
99
100 raise ValueError("No container found for %s_%s" % (self.name, number))
101
102 def start(self, **options):
103 for c in self.containers(stopped=True):
104 self.start_container_if_stopped(c, **options)
105
106 def stop(self, **options):
107 for c in self.containers():
108 log.info("Stopping %s..." % c.name)
109 c.stop(**options)
110
111 def kill(self, **options):
112 for c in self.containers():
113 log.info("Killing %s..." % c.name)
114 c.kill(**options)
115
116 def restart(self, **options):
117 for c in self.containers():
118 log.info("Restarting %s..." % c.name)
119 c.restart(**options)
120
121 def scale(self, desired_num):
122 """
123 Adjusts the number of containers to the specified number and ensures
124 they are running.
125
126 - creates containers until there are at least `desired_num`
127 - stops containers until there are at most `desired_num` running
128 - starts containers until there are at least `desired_num` running
129 - removes all stopped containers
130 """
131 if not self.can_be_scaled():
132 raise CannotBeScaledError()
133
134 # Create enough containers
135 containers = self.containers(stopped=True)
136 while len(containers) < desired_num:
137 log.info("Creating %s..." % self._next_container_name(containers))
138 containers.append(self.create_container(detach=True))
139
140 running_containers = []
141 stopped_containers = []
142 for c in containers:
143 if c.is_running:
144 running_containers.append(c)
145 else:
146 stopped_containers.append(c)
147 running_containers.sort(key=lambda c: c.number)
148 stopped_containers.sort(key=lambda c: c.number)
149
150 # Stop containers
151 while len(running_containers) > desired_num:
152 c = running_containers.pop()
153 log.info("Stopping %s..." % c.name)
154 c.stop(timeout=1)
155 stopped_containers.append(c)
156
157 # Start containers
158 while len(running_containers) < desired_num:
159 c = stopped_containers.pop(0)
160 log.info("Starting %s..." % c.name)
161 self.start_container(c)
162 running_containers.append(c)
163
164 self.remove_stopped()
165
166 def remove_stopped(self, **options):
167 for c in self.containers(stopped=True):
168 if not c.is_running:
169 log.info("Removing %s..." % c.name)
170 c.remove(**options)
171
172 def create_container(self,
173 one_off=False,
174 insecure_registry=False,
175 do_build=True,
176 intermediate_container=None,
177 **override_options):
178 """
179 Create a container for this service. If the image doesn't exist, attempt to pull
180 it.
181 """
182 container_options = self._get_container_create_options(
183 override_options,
184 one_off=one_off,
185 intermediate_container=intermediate_container,
186 )
187
188 if (do_build and
189 self.can_be_built() and
190 not self.client.images(name=self.full_name)):
191 self.build()
192
193 try:
194 return Container.create(self.client, **container_options)
195 except APIError as e:
196 if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation):
197 self.pull(insecure_registry=insecure_registry)
198 return Container.create(self.client, **container_options)
199 raise
200
201 def recreate_containers(self, insecure_registry=False, do_build=True, **override_options):
202 """
203 If a container for this service doesn't exist, create and start one. If there are
204 any, stop them, create+start new ones, and remove the old containers.
205 """
206 containers = self.containers(stopped=True)
207 if not containers:
208 log.info("Creating %s..." % self._next_container_name(containers))
209 container = self.create_container(
210 insecure_registry=insecure_registry,
211 do_build=do_build,
212 **override_options)
213 self.start_container(container)
214 return [(None, container)]
215 else:
216 tuples = []
217
218 for c in containers:
219 log.info("Recreating %s..." % c.name)
220 tuples.append(self.recreate_container(c, insecure_registry=insecure_registry, **override_options))
221
222 return tuples
223
224 def recreate_container(self, container, **override_options):
225 """Recreate a container. An intermediate container is created so that
226 the new container has the same name, while still supporting
227 `volumes-from` the original container.
228 """
229 try:
230 container.stop()
231 except APIError as e:
232 if (e.response.status_code == 500
233 and e.explanation
234 and 'no such process' in str(e.explanation)):
235 pass
236 else:
237 raise
238
239 intermediate_container = Container.create(
240 self.client,
241 image=container.image,
242 entrypoint=['/bin/echo'],
243 command=[],
244 detach=True,
245 host_config=create_host_config(volumes_from=[container.id]),
246 )
247 intermediate_container.start()
248 intermediate_container.wait()
249 container.remove()
250
251 options = dict(override_options)
252 new_container = self.create_container(
253 do_build=False,
254 intermediate_container=intermediate_container,
255 **options
256 )
257 self.start_container(new_container)
258
259 intermediate_container.remove()
260
261 return (intermediate_container, new_container)
262
263 def start_container_if_stopped(self, container):
264 if container.is_running:
265 return container
266 else:
267 log.info("Starting %s..." % container.name)
268 return self.start_container(container)
269
270 def start_container(self, container):
271 container.start()
272 return container
273
274 def start_or_create_containers(
275 self,
276 insecure_registry=False,
277 detach=False,
278 do_build=True):
279 containers = self.containers(stopped=True)
280
281 if not containers:
282 log.info("Creating %s..." % self._next_container_name(containers))
283 new_container = self.create_container(
284 insecure_registry=insecure_registry,
285 detach=detach,
286 do_build=do_build,
287 )
288 return [self.start_container(new_container)]
289 else:
290 return [self.start_container_if_stopped(c) for c in containers]
291
292 def get_linked_names(self):
293 return [s.name for (s, _) in self.links]
294
295 def get_volumes_from_names(self):
296 return [s.name for s in self.volumes_from if isinstance(s, Service)]
297
298 def get_net_name(self):
299 if isinstance(self.net, Service):
300 return self.net.name
301 else:
302 return
303
304 def _next_container_name(self, all_containers, one_off=False):
305 bits = [self.project, self.name]
306 if one_off:
307 bits.append('run')
308 return '_'.join(bits + [str(self._next_container_number(all_containers))])
309
310 def _next_container_number(self, all_containers):
311 numbers = [parse_name(c.name).number for c in all_containers]
312 return 1 if not numbers else max(numbers) + 1
313
314 def _get_links(self, link_to_self):
315 links = []
316 for service, link_name in self.links:
317 for container in service.containers():
318 links.append((container.name, link_name or service.name))
319 links.append((container.name, container.name))
320 links.append((container.name, container.name_without_project))
321 if link_to_self:
322 for container in self.containers():
323 links.append((container.name, self.name))
324 links.append((container.name, container.name))
325 links.append((container.name, container.name_without_project))
326 for external_link in self.external_links:
327 if ':' not in external_link:
328 link_name = external_link
329 else:
330 external_link, link_name = external_link.split(':')
331 links.append((external_link, link_name))
332 return links
333
334 def _get_volumes_from(self, intermediate_container=None):
335 volumes_from = []
336 for volume_source in self.volumes_from:
337 if isinstance(volume_source, Service):
338 containers = volume_source.containers(stopped=True)
339 if not containers:
340 volumes_from.append(volume_source.create_container().id)
341 else:
342 volumes_from.extend(map(attrgetter('id'), containers))
343
344 elif isinstance(volume_source, Container):
345 volumes_from.append(volume_source.id)
346
347 if intermediate_container:
348 volumes_from.append(intermediate_container.id)
349
350 return volumes_from
351
352 def _get_net(self):
353 if not self.net:
354 return "bridge"
355
356 if isinstance(self.net, Service):
357 containers = self.net.containers()
358 if len(containers) > 0:
359 net = 'container:' + containers[0].id
360 else:
361 log.warning("Warning: Service %s is trying to use reuse the network stack "
362 "of another service that is not running." % (self.net.name))
363 net = None
364 elif isinstance(self.net, Container):
365 net = 'container:' + self.net.id
366 else:
367 net = self.net
368
369 return net
370
371 def _get_container_create_options(self, override_options, one_off=False, intermediate_container=None):
372 container_options = dict(
373 (k, self.options[k])
374 for k in DOCKER_CONFIG_KEYS if k in self.options)
375 container_options.update(override_options)
376
377 container_options['name'] = self._next_container_name(
378 self.containers(stopped=True, one_off=one_off),
379 one_off)
380
381 # If a qualified hostname was given, split it into an
382 # unqualified hostname and a domainname unless domainname
383 # was also given explicitly. This matches the behavior of
384 # the official Docker CLI in that scenario.
385 if ('hostname' in container_options
386 and 'domainname' not in container_options
387 and '.' in container_options['hostname']):
388 parts = container_options['hostname'].partition('.')
389 container_options['hostname'] = parts[0]
390 container_options['domainname'] = parts[2]
391
392 if 'ports' in container_options or 'expose' in self.options:
393 ports = []
394 all_ports = container_options.get('ports', []) + self.options.get('expose', [])
395 for port in all_ports:
396 port = str(port)
397 if ':' in port:
398 port = port.split(':')[-1]
399 if '/' in port:
400 port = tuple(port.split('/'))
401 ports.append(port)
402 container_options['ports'] = ports
403
404 if 'volumes' in container_options:
405 container_options['volumes'] = dict(
406 (parse_volume_spec(v).internal, {})
407 for v in container_options['volumes'])
408
409 if self.can_be_built():
410 container_options['image'] = self.full_name
411
412 # Delete options which are only used when starting
413 for key in DOCKER_START_KEYS:
414 container_options.pop(key, None)
415
416 container_options['host_config'] = self._get_container_host_config(override_options, one_off=one_off, intermediate_container=intermediate_container)
417
418 return container_options
419
420 def _get_container_host_config(self, override_options, one_off=False, intermediate_container=None):
421 options = dict(self.options, **override_options)
422 port_bindings = build_port_bindings(options.get('ports') or [])
423
424 volume_bindings = dict(
425 build_volume_binding(parse_volume_spec(volume))
426 for volume in options.get('volumes') or []
427 if ':' in volume)
428
429 privileged = options.get('privileged', False)
430 cap_add = options.get('cap_add', None)
431 cap_drop = options.get('cap_drop', None)
432 pid = options.get('pid', None)
433
434 dns = options.get('dns', None)
435 if isinstance(dns, six.string_types):
436 dns = [dns]
437
438 dns_search = options.get('dns_search', None)
439 if isinstance(dns_search, six.string_types):
440 dns_search = [dns_search]
441
442 restart = parse_restart_spec(options.get('restart', None))
443
444 extra_hosts = build_extra_hosts(options.get('extra_hosts', None))
445
446 return create_host_config(
447 links=self._get_links(link_to_self=one_off),
448 port_bindings=port_bindings,
449 binds=volume_bindings,
450 volumes_from=self._get_volumes_from(intermediate_container),
451 privileged=privileged,
452 network_mode=self._get_net(),
453 dns=dns,
454 dns_search=dns_search,
455 restart_policy=restart,
456 cap_add=cap_add,
457 cap_drop=cap_drop,
458 extra_hosts=extra_hosts,
459 pid_mode=pid
460 )
461
462 def build(self, no_cache=False):
463 log.info('Building %s...' % self.name)
464
465 build_output = self.client.build(
466 self.options['build'],
467 tag=self.full_name,
468 stream=True,
469 rm=True,
470 nocache=no_cache,
471 dockerfile=self.options.get('dockerfile', None),
472 )
473
474 try:
475 all_events = stream_output(build_output, sys.stdout)
476 except StreamOutputError as e:
477 raise BuildError(self, unicode(e))
478
479 image_id = None
480
481 for event in all_events:
482 if 'stream' in event:
483 match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', ''))
484 if match:
485 image_id = match.group(1)
486
487 if image_id is None:
488 raise BuildError(self, event if all_events else 'Unknown')
489
490 return image_id
491
492 def can_be_built(self):
493 return 'build' in self.options
494
495 @property
496 def full_name(self):
497 """
498 The tag to give to images built for this service.
499 """
500 return '%s_%s' % (self.project, self.name)
501
502 def can_be_scaled(self):
503 for port in self.options.get('ports', []):
504 if ':' in str(port):
505 return False
506 return True
507
508 def pull(self, insecure_registry=False):
509 if 'image' not in self.options:
510 return
511
512 repo, tag = parse_repository_tag(self.options['image'])
513 tag = tag or 'latest'
514 log.info('Pulling %s (%s:%s)...' % (self.name, repo, tag))
515 output = self.client.pull(
516 repo,
517 tag=tag,
518 stream=True,
519 insecure_registry=insecure_registry)
520 stream_output(output, sys.stdout)
521
522
523 NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
524
525
526 def is_valid_name(name, one_off=False):
527 match = NAME_RE.match(name)
528 if match is None:
529 return False
530 if one_off:
531 return match.group(3) == 'run_'
532 else:
533 return match.group(3) is None
534
535
536 def parse_name(name):
537 match = NAME_RE.match(name)
538 (project, service_name, _, suffix) = match.groups()
539 return ServiceName(project, service_name, int(suffix))
540
541
542 def parse_restart_spec(restart_config):
543 if not restart_config:
544 return None
545 parts = restart_config.split(':')
546 if len(parts) > 2:
547 raise ConfigError("Restart %s has incorrect format, should be "
548 "mode[:max_retry]" % restart_config)
549 if len(parts) == 2:
550 name, max_retry_count = parts
551 else:
552 name, = parts
553 max_retry_count = 0
554
555 return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
556
557
558 def parse_volume_spec(volume_config):
559 parts = volume_config.split(':')
560 if len(parts) > 3:
561 raise ConfigError("Volume %s has incorrect format, should be "
562 "external:internal[:mode]" % volume_config)
563
564 if len(parts) == 1:
565 return VolumeSpec(None, parts[0], 'rw')
566
567 if len(parts) == 2:
568 parts.append('rw')
569
570 external, internal, mode = parts
571 if mode not in ('rw', 'ro'):
572 raise ConfigError("Volume %s has invalid mode (%s), should be "
573 "one of: rw, ro." % (volume_config, mode))
574
575 return VolumeSpec(external, internal, mode)
576
577
578 def parse_repository_tag(s):
579 if ":" not in s:
580 return s, ""
581 repo, tag = s.rsplit(":", 1)
582 if "/" in tag:
583 return s, ""
584 return repo, tag
585
586
587 def build_volume_binding(volume_spec):
588 internal = {'bind': volume_spec.internal, 'ro': volume_spec.mode == 'ro'}
589 return volume_spec.external, internal
590
591
592 def build_port_bindings(ports):
593 port_bindings = {}
594 for port in ports:
595 internal_port, external = split_port(port)
596 if internal_port in port_bindings:
597 port_bindings[internal_port].append(external)
598 else:
599 port_bindings[internal_port] = [external]
600 return port_bindings
601
602
603 def split_port(port):
604 parts = str(port).split(':')
605 if not 1 <= len(parts) <= 3:
606 raise ConfigError('Invalid port "%s", should be '
607 '[[remote_ip:]remote_port:]port[/protocol]' % port)
608
609 if len(parts) == 1:
610 internal_port, = parts
611 return internal_port, None
612 if len(parts) == 2:
613 external_port, internal_port = parts
614 return internal_port, external_port
615
616 external_ip, external_port, internal_port = parts
617 return internal_port, (external_ip, external_port or None)
618
619
620 def build_extra_hosts(extra_hosts_config):
621 if not extra_hosts_config:
622 return {}
623
624 if isinstance(extra_hosts_config, list):
625 extra_hosts_dict = {}
626 for extra_hosts_line in extra_hosts_config:
627 if not isinstance(extra_hosts_line, six.string_types):
628 raise ConfigError(
629 "extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
630 extra_hosts_config
631 )
632 host, ip = extra_hosts_line.split(':')
633 extra_hosts_dict.update({host.strip(): ip.strip()})
634 extra_hosts_config = extra_hosts_dict
635
636 if isinstance(extra_hosts_config, dict):
637 return extra_hosts_config
638
639 raise ConfigError(
640 "extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
641 extra_hosts_config
642 )
643
[end of compose/service.py]
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import unicode_literals
4 from __future__ import absolute_import
5 from setuptools import setup, find_packages
6 import codecs
7 import os
8 import re
9 import sys
10
11
12 def read(*parts):
13 path = os.path.join(os.path.dirname(__file__), *parts)
14 with codecs.open(path, encoding='utf-8') as fobj:
15 return fobj.read()
16
17
18 def find_version(*file_paths):
19 version_file = read(*file_paths)
20 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
21 version_file, re.M)
22 if version_match:
23 return version_match.group(1)
24 raise RuntimeError("Unable to find version string.")
25
26
27 install_requires = [
28 'docopt >= 0.6.1, < 0.7',
29 'PyYAML >= 3.10, < 4',
30 'requests >= 2.6.1, < 2.7',
31 'texttable >= 0.8.1, < 0.9',
32 'websocket-client >= 0.11.0, < 1.0',
33 'docker-py >= 1.1.0, < 1.2',
34 'dockerpty >= 0.3.2, < 0.4',
35 'six >= 1.3.0, < 2',
36 ]
37
38
39 tests_require = [
40 'mock >= 1.0.1',
41 'nose',
42 'pyinstaller',
43 'flake8',
44 ]
45
46
47 if sys.version_info < (2, 7):
48 tests_require.append('unittest2')
49
50
51 setup(
52 name='docker-compose',
53 version=find_version("compose", "__init__.py"),
54 description='Multi-container orchestration for Docker',
55 url='https://www.docker.com/',
56 author='Docker, Inc.',
57 license='Apache License 2.0',
58 packages=find_packages(exclude=['tests.*', 'tests']),
59 include_package_data=True,
60 test_suite='nose.collector',
61 install_requires=install_requires,
62 tests_require=tests_require,
63 entry_points="""
64 [console_scripts]
65 docker-compose=compose.cli.main:main
66 """,
67 )
68
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
docker/compose
|
7e0ab0714fc6d023f2f1c19e8b6cb58cf6eef869
|
APIError: 500 Server Error on first docker-compose up / build
cc @aanand
I noticed this when testing the most recent RC (4)
Steps to reproduce:
1. Clone https://github.com/nathanleclaire/hubfwd
2. Run `docker-compose up` in the repo
Expected behavior: It doesn't blow up
Actual behavior: It blows up
On the first `up` and the first `up` only, so build smells suspicious.
```
Creating hubfwd_app_1...
Building app...
Step 0 : FROM golang:1.4.2
---> 121a93c90463
Step 1 : RUN go get -u github.com/codegangsta/negroni
---> Running in 5e2161a172f9
---> 623f1c94741b
Removing intermediate container 5e2161a172f9
Step 2 : RUN go get -u github.com/gorilla/mux
---> Running in c74924a6c8fd
---> 7923dd360f79
Removing intermediate container c74924a6c8fd
Step 3 : RUN go get -u github.com/Sirupsen/logrus
---> Running in 93443d6cf298
---> 3ae5e3801312
Removing intermediate container 93443d6cf298
Step 4 : RUN mkdir -p /go/src/github.com/nathanleclaire/hubfwd
---> Running in 8deddcbb0f1d
---> 6586dfbe5b2e
Removing intermediate container 8deddcbb0f1d
Step 5 : WORKDIR /go/src/github.com/nathanleclaire/hubfwd
---> Running in bb42cbdf1032
---> 0d824f6e8519
Removing intermediate container bb42cbdf1032
Step 6 : COPY . /go/src/github.com/nathanleclaire/hubfwd
---> ad6983d66cf7
Removing intermediate container e16e62829fb7
Step 7 : CMD go build
---> Running in 550e4ab79b39
---> 15ebeafc0600
Removing intermediate container 550e4ab79b39
Successfully built 15ebeafc0600
Attaching to hubfwd_app_1
Exception in thread Thread-1:
Traceback (most recent call last):
File "/compose/build/docker-compose/out00-PYZ.pyz/threading", line 810, in __bootstrap_inner
File "/compose/build/docker-compose/out00-PYZ.pyz/threading", line 763, in run
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.multiplexer", line 41, in _enqueue_output
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.log_printer", line 59, in _make_log_generator
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.utils", line 77, in split_buffer
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 199, in _multiplexed_response_stream_helper
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 143, in _get_raw_response_socket
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 95, in _raise_for_status
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
```
I'm on OSX using a VM created by `docker-machine` in VirtualBox (boot2docker).
|
Bump on this - I got it with https://github.com/nathanleclaire/laraveldocker as well: (rc4)
```
Attaching to laraveldocker_db_1, laraveldocker_redis_1, laraveldocker_beanstalkd_1, laraveldocker_web_1
Exception in thread Thread-1:
Traceback (most recent call last):
File "/compose/build/docker-compose/out00-PYZ.pyz/threading", line 810, in __bootstrap_inner
File "/compose/build/docker-compose/out00-PYZ.pyz/threading", line 763, in run
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.multiplexer", line 41, in _enqueue_output
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.log_printer", line 59, in _make_log_generator
File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.utils", line 77, in split_buffer
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 199, in _multiplexed_response_stream_helper
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 143, in _get_raw_response_socket
File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 95, in _raise_for_status
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
```
Also getting this after `docker-compose up`:
Versions:
- docker-compose 1.2.0 (installed by `sudo pip install -U https://github.com/docker/compose/archive/1.2.0.zip`)
- Docker version 1.6.0, build 4749651
- Boot2Docker-cli version: v1.6.0 and via virtualbox
- OSX Yosemite
```
Exception in thread Thread-1:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/Library/Python/2.7/site-packages/compose/cli/multiplexer.py", line 41, in _enqueue_output
for item in generator:
File "/Library/Python/2.7/site-packages/compose/cli/log_printer.py", line 59, in _make_log_generator
for line in line_generator:
File "/Library/Python/2.7/site-packages/compose/cli/utils.py", line 77, in split_buffer
for data in reader:
File "/Library/Python/2.7/site-packages/docker/client.py", line 225, in _multiplexed_response_stream_helper
socket = self._get_raw_response_socket(response)
File "/Library/Python/2.7/site-packages/docker/client.py", line 167, in _get_raw_response_socket
self._raise_for_status(response)
File "/Library/Python/2.7/site-packages/docker/client.py", line 119, in _raise_for_status
raise errors.APIError(e, response, explanation=explanation)
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
Exception in thread Thread-5:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/Library/Python/2.7/site-packages/compose/cli/multiplexer.py", line 41, in _enqueue_output
for item in generator:
File "/Library/Python/2.7/site-packages/compose/cli/log_printer.py", line 59, in _make_log_generator
for line in line_generator:
File "/Library/Python/2.7/site-packages/compose/cli/utils.py", line 77, in split_buffer
for data in reader:
File "/Library/Python/2.7/site-packages/docker/client.py", line 225, in _multiplexed_response_stream_helper
socket = self._get_raw_response_socket(response)
File "/Library/Python/2.7/site-packages/docker/client.py", line 167, in _get_raw_response_socket
self._raise_for_status(response)
File "/Library/Python/2.7/site-packages/docker/client.py", line 119, in _raise_for_status
raise errors.APIError(e, response, explanation=explanation)
APIError: 500 Server Error: Internal Server Error ("http: Hijack is incompatible with use of CloseNotifier")
```
:+1: also experiencing this issue.
:+1:
Have you figured out the solution already?
:+1:
For anyone reading, it seems that if I do `docker-compose build` manually and then do a `docker-compose up` everything works fine. That should hopefully serve as a workaround until this gets addressed.
I can reproduce. @nathanleclaire's workaround (`docker-compose build` first) does indeed fix it for me.
Looking at the `docker inspect` output of a container in both the working and failing case, I can't find anything obvious that's wrong. Very strange.
Me too, and running `docker-compose build` and then `docker-compose up` like @nathanleclaire said fixes it
Yay, this workaround worked for me too.
Experiencing the same issue here, same versions
+1, same issue/version, thanks for the workaround!
+1, same problem for me
Somehow your `up` supports http://golang.org/pkg/net/http/#CloseNotifier and `build` does not :)
I think the problem is in keep-alive: `build` and `attach` happen over the same connection. To fix this we need to create a new connection; I'm not sure how we can fix it on the Docker side.
https://go-review.googlesource.com/#/c/3821/
+1
@LK4D4: nice find. I wonder if there's a way to force docker-py/requests to create a new connection for attaching. @shin-?
It looks like `docker.Client` extends http://docs.python-requests.org/en/latest/api/#request-sessions, so I think we could call `client.close()` and that would force it to re-establish a new connection.
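To make that concrete, here is a minimal sketch of the idea (the URL is just a placeholder, and the `Client` stand-in only assumes that docker-py's `Client` subclasses `requests.Session`, as linked above): closing the session discards its pooled keep-alive sockets, but the session stays usable, so the next call has to open a fresh connection.

```python
import requests


class Client(requests.Session):
    """Stand-in for docker-py's Client, which subclasses requests.Session."""


client = Client()

# keep-alive leaves this socket in the session's connection pool
client.get('https://example.com/')

# close() drops the pooled connections; the session itself stays usable
client.close()

# ...so this request has to establish a brand-new connection, which is
# what the streaming attach after a build appears to need here
client.get('https://example.com/')
```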
|
2015-04-29T16:56:45Z
|
<patch>
diff --git a/compose/service.py b/compose/service.py
--- a/compose/service.py
+++ b/compose/service.py
@@ -476,6 +476,11 @@ def build(self, no_cache=False):
         except StreamOutputError as e:
             raise BuildError(self, unicode(e))
 
+        # Ensure the HTTP connection is not reused for another
+        # streaming command, as the Docker daemon can sometimes
+        # complain about it
+        self.client.close()
+
         image_id = None
 
         for event in all_events:
</patch>
|
[]
|
[]
| |||
wagtail__wagtail-738
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update all admin actions to work on the specific version of the page
Currently, the unpublish, delete, etc. actions all work on the `Page` class, which means the save/delete/clean methods are called on `Page` rather than on the specific class. There are a couple of downsides to this:
- If someone's overridden the save/clean/delete methods on their Page class, these would not be called
- A post/pre save/delete signal hooked to a specific class will not be called (instead, the signals on `Page` would be called)
- In search, the above issue will cause the page to be indexed twice (and in #714, it will not be reindexed at all)
I think we should update these to work on the specific object rather than the Page object
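For illustration, a minimal sketch of the difference (the `BlogPage` subclass and `page_id` are hypothetical; `.specific` is the existing property that resolves a `Page` to its subclass instance):

```python
from wagtail.wagtailcore.models import Page

# page_id is a placeholder for some existing page's primary key

# Acting on the base Page instance: only Page.save() runs, and the
# post_save signal fires with sender=Page.
page = Page.objects.get(id=page_id)
page.title = 'New title'
page.save()

# Acting on the specific instance (e.g. a BlogPage): an overridden
# BlogPage.save() runs, and post_save fires with sender=BlogPage, so
# e.g. search reindexing hooked to the specific class sees the change.
page = Page.objects.get(id=page_id).specific
page.title = 'New title'
page.save()
```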
</issue>
<code>
[start of README.rst]
1 .. image:: https://travis-ci.org/torchbox/wagtail.png?branch=master
2 :target: https://travis-ci.org/torchbox/wagtail
3
4 .. image:: https://coveralls.io/repos/torchbox/wagtail/badge.png?branch=master&zxcv1
5 :target: https://coveralls.io/r/torchbox/wagtail?branch=master
6
7 .. image:: https://pypip.in/v/wagtail/badge.png?zxcv
8 :target: https://crate.io/packages/wagtail/
9
10 Wagtail CMS
11 ===========
12
13 .. image:: http://i.imgur.com/4pbWQ35.png
14
15 Wagtail is a Django content management system built originally for the `Royal College of Art <http://www.rca.ac.uk/>`_ and focused on flexibility and user experience. Its features include:
16
17 * A fast, attractive editor interface
18 * Complete control over design with standard Django templates
19 * Configure content types through standard Django models
20 * Tightly integrated search (with an `Elasticsearch <http://www.elasticsearch.org/>`_ backend for production)
21 * Strong document and image management
22 * Wide support for embedded content
23 * Simple, configurable permissions
24 * Support for tree-based content organisation
25 * Optional preview->submit->approve workflow
26 * Fast out of the box. `Varnish <https://www.varnish-cache.org/>`_-friendly if you need it
27 * A simple `form builder <http://docs.wagtail.io/en/latest/core_components/form_builder.html>`_
28 * Optional `static site generation <http://docs.wagtail.io/en/latest/contrib_components/static_site_generation.html>`_
29 * Excellent `test coverage <https://coveralls.io/r/torchbox/wagtail?branch=master>`_
30
31 Find out more at `wagtail.io <http://wagtail.io/>`_.
32
33 Got a question? Ask it on our `Google Group <https://groups.google.com/forum/#!forum/wagtail>`_.
34
35 Who's using it?
36 ~~~~~~~~~~~~~~~
37 We've a list of public Wagtail sites here: https://github.com/torchbox/wagtail/wiki/Public-Wagtail-sites
38
39 Got one of your own? Feel free to add it!
40
41
42 Getting started
43 ~~~~~~~~~~~~~~~
44 * To get you up and running quickly, we've provided a demonstration site with all the configuration in place, at `github.com/torchbox/wagtaildemo <https://github.com/torchbox/wagtaildemo/>`_; see the `README <https://github.com/torchbox/wagtaildemo/blob/master/README.md>`_ for installation instructions.
45 * See the `Getting Started <http://wagtail.readthedocs.org/en/latest/getting_started/installation.html>`_ docs for installation (with the demo app) on a fresh Debian/Ubuntu box with production-ready dependencies, on OS X and on a Vagrant box.
46 * `Serafeim Papastefanos <https://github.com/spapas>`_ has written a `tutorial <http://spapas.github.io/2014/02/13/wagtail-tutorial/>`_ with all the steps to build a simple Wagtail site from scratch.
47 * We've also provided a skeletal django-template to get started on a blank site: https://github.com/torchbox/wagtail-template
48
49 Documentation
50 ~~~~~~~~~~~~~
51 Available at `wagtail.readthedocs.org <http://wagtail.readthedocs.org/>`_ and always being updated.
52
53 Compatibility
54 ~~~~~~~~~~~~~
55 Wagtail supports Django 1.6.2+ and 1.7.0+ on Python 2.6, 2.7, 3.2, 3.3 and 3.4.
56
57 Wagtail's dependencies are summarised at `requirements.io <https://requires.io/github/torchbox/wagtail/requirements>`_.
58
59 Contributing
60 ~~~~~~~~~~~~
61 If you're a Python or Django developer, fork the repo and get stuck in!
62
63 We suggest you start by checking the `Help develop me! <https://github.com/torchbox/wagtail/issues?labels=Help+develop+me%21>`_ label and the `coding guidelines <http://wagtail.readthedocs.org/en/latest/howto/contributing.html#coding-guidelines>`_.
64
65 Send us a useful pull request and we'll post you a `t-shirt <https://twitter.com/WagtailCMS/status/432166799464210432/photo/1>`_.
66
67 We also welcome `translations <http://wagtail.readthedocs.org/en/latest/howto/contributing.html#translations>`_ for Wagtail's interface.
68
[end of README.rst]
[start of wagtail/wagtailadmin/views/pages.py]
1 import warnings
2
3 from django.http import Http404, HttpResponse
4 from django.shortcuts import render, redirect, get_object_or_404
5 from django.core.exceptions import ValidationError, PermissionDenied
6 from django.contrib import messages
7 from django.contrib.contenttypes.models import ContentType
8 from django.contrib.auth.decorators import permission_required
9 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
10 from django.utils import timezone
11 from django.utils.translation import ugettext as _
12 from django.utils.http import is_safe_url
13 from django.views.decorators.http import require_GET, require_POST
14 from django.views.decorators.vary import vary_on_headers
15 from django.db.models import Count
16
17 from wagtail.wagtailadmin.edit_handlers import TabbedInterface, ObjectList
18 from wagtail.wagtailadmin.forms import SearchForm, CopyForm
19 from wagtail.wagtailadmin import tasks, signals
20
21 from wagtail.wagtailcore import hooks
22 from wagtail.wagtailcore.models import Page, PageRevision, get_navigation_menu_items
23
24
25 @permission_required('wagtailadmin.access_admin')
26 def explorer_nav(request):
27 return render(request, 'wagtailadmin/shared/explorer_nav.html', {
28 'nodes': get_navigation_menu_items(),
29 })
30
31
32 @permission_required('wagtailadmin.access_admin')
33 def index(request, parent_page_id=None):
34 if parent_page_id:
35 parent_page = get_object_or_404(Page, id=parent_page_id)
36 else:
37 parent_page = Page.get_first_root_node()
38
39 pages = parent_page.get_children().prefetch_related('content_type')
40
41 # Get page ordering
42 ordering = request.GET.get('ordering', '-latest_revision_created_at')
43 if ordering not in ['title', '-title', 'content_type', '-content_type', 'live', '-live', 'latest_revision_created_at', '-latest_revision_created_at', 'ord']:
44 ordering = '-latest_revision_created_at'
45
46 # Pagination
47 if ordering != 'ord':
48 ordering_no_minus = ordering
49 if ordering_no_minus.startswith('-'):
50 ordering_no_minus = ordering[1:]
51 pages = pages.order_by(ordering).annotate(null_position=Count(ordering_no_minus)).order_by('-null_position', ordering)
52
53 p = request.GET.get('p', 1)
54 paginator = Paginator(pages, 50)
55 try:
56 pages = paginator.page(p)
57 except PageNotAnInteger:
58 pages = paginator.page(1)
59 except EmptyPage:
60 pages = paginator.page(paginator.num_pages)
61
62 return render(request, 'wagtailadmin/pages/index.html', {
63 'parent_page': parent_page,
64 'ordering': ordering,
65 'pages': pages,
66 })
67
68
69 @permission_required('wagtailadmin.access_admin')
70 def add_subpage(request, parent_page_id):
71 parent_page = get_object_or_404(Page, id=parent_page_id).specific
72 if not parent_page.permissions_for_user(request.user).can_add_subpage():
73 raise PermissionDenied
74
75 page_types = sorted(parent_page.allowed_subpage_types(), key=lambda pagetype: pagetype.name.lower())
76
77 if len(page_types) == 1:
78 # Only one page type is available - redirect straight to the create form rather than
79 # making the user choose
80 content_type = page_types[0]
81 return redirect('wagtailadmin_pages_create', content_type.app_label, content_type.model, parent_page.id)
82
83 return render(request, 'wagtailadmin/pages/add_subpage.html', {
84 'parent_page': parent_page,
85 'page_types': page_types,
86 })
87
88
89 @permission_required('wagtailadmin.access_admin')
90 def content_type_use(request, content_type_app_name, content_type_model_name):
91 try:
92 content_type = ContentType.objects.get_by_natural_key(content_type_app_name, content_type_model_name)
93 except ContentType.DoesNotExist:
94 raise Http404
95
96 p = request.GET.get("p", 1)
97
98 page_class = content_type.model_class()
99
100 # page_class must be a Page type and not some other random model
101 if not issubclass(page_class, Page):
102 raise Http404
103
104 pages = page_class.objects.all()
105
106 paginator = Paginator(pages, 10)
107
108 try:
109 pages = paginator.page(p)
110 except PageNotAnInteger:
111 pages = paginator.page(1)
112 except EmptyPage:
113 pages = paginator.page(paginator.num_pages)
114
115 return render(request, 'wagtailadmin/pages/content_type_use.html', {
116 'pages': pages,
117 'app_name': content_type_app_name,
118 'content_type': content_type,
119 'page_class': page_class,
120 })
121
122
123 @permission_required('wagtailadmin.access_admin')
124 def create(request, content_type_app_name, content_type_model_name, parent_page_id):
125 parent_page = get_object_or_404(Page, id=parent_page_id).specific
126 parent_page_perms = parent_page.permissions_for_user(request.user)
127 if not parent_page_perms.can_add_subpage():
128 raise PermissionDenied
129
130 try:
131 content_type = ContentType.objects.get_by_natural_key(content_type_app_name, content_type_model_name)
132 except ContentType.DoesNotExist:
133 raise Http404
134
135 # Get class
136 page_class = content_type.model_class()
137
138 # Make sure the class is a descendant of Page
139 if not issubclass(page_class, Page):
140 raise Http404
141
142 # page must be in the list of allowed subpage types for this parent ID
143 if content_type not in parent_page.allowed_subpage_types():
144 raise PermissionDenied
145
146 page = page_class(owner=request.user)
147 edit_handler_class = get_page_edit_handler(page_class)
148 form_class = edit_handler_class.get_form_class(page_class)
149
150 if request.POST:
151 form = form_class(request.POST, request.FILES, instance=page)
152
153 # Stick an extra validator into the form to make sure that the slug is not already in use
154 def clean_slug(slug):
155 # Make sure the slug isn't already in use
156 if parent_page.get_children().filter(slug=slug).count() > 0:
157 raise ValidationError(_("This slug is already in use"))
158 return slug
159 form.fields['slug'].clean = clean_slug
160
161 # Stick another validator into the form to check that the scheduled publishing settings are set correctly
162 def clean():
163 cleaned_data = form_class.clean(form)
164
165 # Go live must be before expire
166 go_live_at = cleaned_data.get('go_live_at')
167 expire_at = cleaned_data.get('expire_at')
168
169 if go_live_at and expire_at:
170 if go_live_at > expire_at:
171 msg = _('Go live date/time must be before expiry date/time')
172 form._errors['go_live_at'] = form.error_class([msg])
173 form._errors['expire_at'] = form.error_class([msg])
174 del cleaned_data['go_live_at']
175 del cleaned_data['expire_at']
176
177 # Expire must be in the future
178 expire_at = cleaned_data.get('expire_at')
179
180 if expire_at and expire_at < timezone.now():
181 form._errors['expire_at'] = form.error_class([_('Expiry date/time must be in the future')])
182 del cleaned_data['expire_at']
183
184 return cleaned_data
185 form.clean = clean
186
187 if form.is_valid():
188 page = form.save(commit=False)
189
190 is_publishing = bool(request.POST.get('action-publish')) and parent_page_perms.can_publish_subpage()
191 is_submitting = bool(request.POST.get('action-submit'))
192
193 # Set live to False and has_unpublished_changes to True if we are not publishing
194 if not is_publishing:
195 page.live = False
196 page.has_unpublished_changes = True
197
198 # Save page
199 parent_page.add_child(instance=page)
200
201 # Save revision
202 revision = page.save_revision(
203 user=request.user,
204 submitted_for_moderation=is_submitting,
205 )
206
207 # Publish
208 if is_publishing:
209 revision.publish()
210
211 # Notifications
212 if is_publishing:
213 messages.success(request, _("Page '{0}' published.").format(page.title))
214 elif is_submitting:
215 messages.success(request, _("Page '{0}' submitted for moderation.").format(page.title))
216 tasks.send_notification.delay(page.get_latest_revision().id, 'submitted', request.user.id)
217 else:
218 messages.success(request, _("Page '{0}' created.").format(page.title))
219
220 for fn in hooks.get_hooks('after_create_page'):
221 result = fn(request, page)
222 if hasattr(result, 'status_code'):
223 return result
224
225 return redirect('wagtailadmin_pages_edit', page.id)
226 else:
227 messages.error(request, _("The page could not be created due to validation errors"))
228 edit_handler = edit_handler_class(instance=page, form=form)
229 else:
230 signals.init_new_page.send(sender=create, page=page, parent=parent_page)
231 form = form_class(instance=page)
232 edit_handler = edit_handler_class(instance=page, form=form)
233
234 return render(request, 'wagtailadmin/pages/create.html', {
235 'content_type': content_type,
236 'page_class': page_class,
237 'parent_page': parent_page,
238 'edit_handler': edit_handler,
239 'preview_modes': page.preview_modes,
240 'form': form, # Used in unit tests
241 })
242
243
244 @permission_required('wagtailadmin.access_admin')
245 def edit(request, page_id):
246 latest_revision = get_object_or_404(Page, id=page_id).get_latest_revision()
247 page = get_object_or_404(Page, id=page_id).get_latest_revision_as_page()
248 parent = page.get_parent()
249
250 page_perms = page.permissions_for_user(request.user)
251 if not page_perms.can_edit():
252 raise PermissionDenied
253
254 edit_handler_class = get_page_edit_handler(page.__class__)
255 form_class = edit_handler_class.get_form_class(page.__class__)
256
257 errors_debug = None
258
259 if request.POST:
260 form = form_class(request.POST, request.FILES, instance=page)
261
262 # Stick an extra validator into the form to make sure that the slug is not already in use
263 def clean_slug(slug):
264 # Make sure the slug isn't already in use
265 if parent.get_children().filter(slug=slug).exclude(id=page_id).count() > 0:
266 raise ValidationError(_("This slug is already in use"))
267 return slug
268 form.fields['slug'].clean = clean_slug
269
270 # Stick another validator into the form to check that the scheduled publishing settings are set correctly
271 def clean():
272 cleaned_data = form_class.clean(form)
273
274 # Go live must be before expire
275 go_live_at = cleaned_data.get('go_live_at')
276 expire_at = cleaned_data.get('expire_at')
277
278 if go_live_at and expire_at:
279 if go_live_at > expire_at:
280 msg = _('Go live date/time must be before expiry date/time')
281 form._errors['go_live_at'] = form.error_class([msg])
282 form._errors['expire_at'] = form.error_class([msg])
283 del cleaned_data['go_live_at']
284 del cleaned_data['expire_at']
285
286 # Expire must be in the future
287 expire_at = cleaned_data.get('expire_at')
288
289 if expire_at and expire_at < timezone.now():
290 form._errors['expire_at'] = form.error_class([_('Expiry date/time must be in the future')])
291 del cleaned_data['expire_at']
292
293 return cleaned_data
294 form.clean = clean
295
296 if form.is_valid() and not page.locked:
297 page = form.save(commit=False)
298
299 is_publishing = bool(request.POST.get('action-publish')) and page_perms.can_publish()
300 is_submitting = bool(request.POST.get('action-submit'))
301
302 # Save revision
303 revision = page.save_revision(
304 user=request.user,
305 submitted_for_moderation=is_submitting,
306 )
307
308 # Publish
309 if is_publishing:
310 revision.publish()
311 else:
312 # Set has_unpublished_changes flag
313 if page.live:
314 # To avoid overwriting the live version, we only save the page
315 # to the revisions table
316 Page.objects.filter(id=page.id).update(has_unpublished_changes=True)
317 else:
318 page.has_unpublished_changes = True
319 page.save()
320
321 # Notifications
322 if is_publishing:
323 messages.success(request, _("Page '{0}' published.").format(page.title))
324 elif is_submitting:
325 messages.success(request, _("Page '{0}' submitted for moderation.").format(page.title))
326 tasks.send_notification.delay(page.get_latest_revision().id, 'submitted', request.user.id)
327 else:
328 messages.success(request, _("Page '{0}' updated.").format(page.title))
329
330 for fn in hooks.get_hooks('after_edit_page'):
331 result = fn(request, page)
332 if hasattr(result, 'status_code'):
333 return result
334
335 return redirect('wagtailadmin_pages_edit', page.id)
336 else:
337 if page.locked:
338 messages.error(request, _("The page could not be saved as it is locked"))
339 else:
340 messages.error(request, _("The page could not be saved due to validation errors"))
341
342 edit_handler = edit_handler_class(instance=page, form=form)
343 errors_debug = (
344 repr(edit_handler.form.errors)
345 + repr([(name, formset.errors) for (name, formset) in edit_handler.form.formsets.items() if formset.errors])
346 )
347 else:
348 form = form_class(instance=page)
349 edit_handler = edit_handler_class(instance=page, form=form)
350
351 # Check for revisions still undergoing moderation and warn
352 if latest_revision and latest_revision.submitted_for_moderation:
353 messages.warning(request, _("This page is currently awaiting moderation"))
354
355 return render(request, 'wagtailadmin/pages/edit.html', {
356 'page': page,
357 'edit_handler': edit_handler,
358 'errors_debug': errors_debug,
359 'preview_modes': page.preview_modes,
360 'form': form, # Used in unit tests
361 })
362
363
364 @permission_required('wagtailadmin.access_admin')
365 def delete(request, page_id):
366 page = get_object_or_404(Page, id=page_id)
367 if not page.permissions_for_user(request.user).can_delete():
368 raise PermissionDenied
369
370 if request.POST:
371 parent_id = page.get_parent().id
372 page.delete()
373
374 messages.success(request, _("Page '{0}' deleted.").format(page.title))
375
376 for fn in hooks.get_hooks('after_delete_page'):
377 result = fn(request, page)
378 if hasattr(result, 'status_code'):
379 return result
380
381 return redirect('wagtailadmin_explore', parent_id)
382
383 return render(request, 'wagtailadmin/pages/confirm_delete.html', {
384 'page': page,
385 'descendant_count': page.get_descendant_count()
386 })
387
388
389 @permission_required('wagtailadmin.access_admin')
390 def view_draft(request, page_id):
391 page = get_object_or_404(Page, id=page_id).get_latest_revision_as_page()
392 return page.serve_preview(page.dummy_request(), page.default_preview_mode)
393
394
395 @permission_required('wagtailadmin.access_admin')
396 def preview_on_edit(request, page_id):
397 # Receive the form submission that would typically be posted to the 'edit' view. If submission is valid,
398 # return the rendered page; if not, re-render the edit form
399 page = get_object_or_404(Page, id=page_id).get_latest_revision_as_page()
400 edit_handler_class = get_page_edit_handler(page.__class__)
401 form_class = edit_handler_class.get_form_class(page.__class__)
402
403 form = form_class(request.POST, request.FILES, instance=page)
404
405 if form.is_valid():
406 form.save(commit=False)
407
408 preview_mode = request.GET.get('mode', page.default_preview_mode)
409 response = page.serve_preview(page.dummy_request(), preview_mode)
410 response['X-Wagtail-Preview'] = 'ok'
411 return response
412
413 else:
414 edit_handler = edit_handler_class(instance=page, form=form)
415
416 response = render(request, 'wagtailadmin/pages/edit.html', {
417 'page': page,
418 'edit_handler': edit_handler,
419 'preview_modes': page.preview_modes,
420 })
421 response['X-Wagtail-Preview'] = 'error'
422 return response
423
424
425 @permission_required('wagtailadmin.access_admin')
426 def preview_on_create(request, content_type_app_name, content_type_model_name, parent_page_id):
427 # Receive the form submission that would typically be posted to the 'create' view. If submission is valid,
428 # return the rendered page; if not, re-render the edit form
429 try:
430 content_type = ContentType.objects.get_by_natural_key(content_type_app_name, content_type_model_name)
431 except ContentType.DoesNotExist:
432 raise Http404
433
434 page_class = content_type.model_class()
435 page = page_class()
436 edit_handler_class = get_page_edit_handler(page_class)
437 form_class = edit_handler_class.get_form_class(page_class)
438
439 form = form_class(request.POST, request.FILES, instance=page)
440
441 if form.is_valid():
442 form.save(commit=False)
443
444 # ensure that our unsaved page instance has a suitable url set
445 parent_page = get_object_or_404(Page, id=parent_page_id).specific
446 page.set_url_path(parent_page)
447
448 # Set treebeard attributes
449 page.depth = parent_page.depth + 1
450 page.path = Page._get_children_path_interval(parent_page.path)[1]
451
452 preview_mode = request.GET.get('mode', page.default_preview_mode)
453 response = page.serve_preview(page.dummy_request(), preview_mode)
454 response['X-Wagtail-Preview'] = 'ok'
455 return response
456
457 else:
458 edit_handler = edit_handler_class(instance=page, form=form)
459 parent_page = get_object_or_404(Page, id=parent_page_id).specific
460
461 response = render(request, 'wagtailadmin/pages/create.html', {
462 'content_type': content_type,
463 'page_class': page_class,
464 'parent_page': parent_page,
465 'edit_handler': edit_handler,
466 'preview_modes': page.preview_modes,
467 })
468 response['X-Wagtail-Preview'] = 'error'
469 return response
470
471
472 def preview(request):
473 """
474 The HTML of a previewed page is written to the destination browser window using document.write.
475 This overwrites any previous content in the window, while keeping its URL intact. This in turn
476 means that any content we insert that happens to trigger an HTTP request, such as an image or
477 stylesheet tag, will report that original URL as its referrer.
478
479 In Webkit browsers, a new window opened with window.open('', 'window_name') will have a location
480 of 'about:blank', causing it to omit the Referer header on those HTTP requests. This means that
481 any third-party font services that use the Referer header for access control will refuse to
482 serve us.
483
484 So, instead, we need to open the window on some arbitrary URL on our domain. (Provided that's
485 also the same domain as our editor JS code, the browser security model will happily allow us to
486 document.write over the page in question.)
487
488 This, my friends, is that arbitrary URL.
489
490 Since we're going to this trouble, we'll also take the opportunity to display a spinner on the
491 placeholder page, providing some much-needed visual feedback.
492 """
493 return render(request, 'wagtailadmin/pages/preview.html')
494
495 def preview_loading(request):
496 """
497 This page is blank, but must be real HTML so its DOM can be written to once the preview of the page has rendered
498 """
499 return HttpResponse("<html><head><title></title></head><body></body></html>")
500
501 @permission_required('wagtailadmin.access_admin')
502 def unpublish(request, page_id):
503 page = get_object_or_404(Page, id=page_id)
504 if not page.permissions_for_user(request.user).can_unpublish():
505 raise PermissionDenied
506
507 if request.method == 'POST':
508 page.unpublish()
509
510 messages.success(request, _("Page '{0}' unpublished.").format(page.title))
511
512 return redirect('wagtailadmin_explore', page.get_parent().id)
513
514 return render(request, 'wagtailadmin/pages/confirm_unpublish.html', {
515 'page': page,
516 })
517
518
519 @permission_required('wagtailadmin.access_admin')
520 def move_choose_destination(request, page_to_move_id, viewed_page_id=None):
521 page_to_move = get_object_or_404(Page, id=page_to_move_id)
522 page_perms = page_to_move.permissions_for_user(request.user)
523 if not page_perms.can_move():
524 raise PermissionDenied
525
526 if viewed_page_id:
527 viewed_page = get_object_or_404(Page, id=viewed_page_id)
528 else:
529 viewed_page = Page.get_first_root_node()
530
531 viewed_page.can_choose = page_perms.can_move_to(viewed_page)
532
533 child_pages = []
534 for target in viewed_page.get_children():
535 # can't move the page into itself or its descendants
536 target.can_choose = page_perms.can_move_to(target)
537
538 target.can_descend = not(target == page_to_move or target.is_child_of(page_to_move)) and target.get_children_count()
539
540 child_pages.append(target)
541
542 return render(request, 'wagtailadmin/pages/move_choose_destination.html', {
543 'page_to_move': page_to_move,
544 'viewed_page': viewed_page,
545 'child_pages': child_pages,
546 })
547
548
549 @permission_required('wagtailadmin.access_admin')
550 def move_confirm(request, page_to_move_id, destination_id):
551 page_to_move = get_object_or_404(Page, id=page_to_move_id)
552 destination = get_object_or_404(Page, id=destination_id)
553 if not page_to_move.permissions_for_user(request.user).can_move_to(destination):
554 raise PermissionDenied
555
556 if request.POST:
557 # any invalid moves *should* be caught by the permission check above,
558 # so don't bother to catch InvalidMoveToDescendant
559
560 page_to_move.move(destination, pos='last-child')
561
562 messages.success(request, _("Page '{0}' moved.").format(page_to_move.title))
563 return redirect('wagtailadmin_explore', destination.id)
564
565 return render(request, 'wagtailadmin/pages/confirm_move.html', {
566 'page_to_move': page_to_move,
567 'destination': destination,
568 })
569
570
571 @permission_required('wagtailadmin.access_admin')
572 def set_page_position(request, page_to_move_id):
573 page_to_move = get_object_or_404(Page, id=page_to_move_id)
574 parent_page = page_to_move.get_parent()
575
576 if not parent_page.permissions_for_user(request.user).can_reorder_children():
577 raise PermissionDenied
578
579 if request.POST:
580 # Get position parameter
581 position = request.GET.get('position', None)
582
583 # Find page thats already in this position
584 position_page = None
585 if position is not None:
586 try:
587 position_page = parent_page.get_children()[int(position)]
588 except IndexError:
589 pass # No page in this position
590
591 # Move page
592
593 # any invalid moves *should* be caught by the permission check above,
594 # so don't bother to catch InvalidMoveToDescendant
595
596 if position_page:
597 # If the page has been moved to the right, insert it to the
598 # right. If left, then left.
599 old_position = list(parent_page.get_children()).index(page_to_move)
600 if int(position) < old_position:
601 page_to_move.move(position_page, pos='left')
602 elif int(position) > old_position:
603 page_to_move.move(position_page, pos='right')
604 else:
605 # Move page to end
606 page_to_move.move(parent_page, pos='last-child')
607
608 return HttpResponse('')
609
610
611 @permission_required('wagtailadmin.access_admin')
612 def copy(request, page_id):
613 page = Page.objects.get(id=page_id)
614 parent_page = page.get_parent()
615
616 # Make sure this user has permission to add subpages on the parent
617 if not parent_page.permissions_for_user(request.user).can_add_subpage():
618 raise PermissionDenied
619
620 # Check if the user has permission to publish subpages on the parent
621 can_publish = parent_page.permissions_for_user(request.user).can_publish_subpage()
622
623 # Create the form
624 form = CopyForm(request.POST or None, page=page, can_publish=can_publish)
625
626 # Check if user is submitting
627 if request.method == 'POST' and form.is_valid():
628 # Copy the page
629 new_page = page.copy(
630 recursive=form.cleaned_data.get('copy_subpages'),
631 update_attrs={
632 'title': form.cleaned_data['new_title'],
633 'slug': form.cleaned_data['new_slug'],
634 }
635 )
636
637 # Check if we should keep copied subpages published
638 publish_copies = can_publish and form.cleaned_data.get('publish_copies')
639
640 # Unpublish copied pages if we need to
641 if not publish_copies:
642 new_page.get_descendants(inclusive=True).unpublish()
643
644 # Assign user of this request as the owner of all the new pages
645 new_page.get_descendants(inclusive=True).update(owner=request.user)
646
647 # Give a success message back to the user
648 if form.cleaned_data.get('copy_subpages'):
649 messages.success(request, _("Page '{0}' and {1} subpages copied.").format(page.title, new_page.get_descendants().count()))
650 else:
651 messages.success(request, _("Page '{0}' copied.").format(page.title))
652
653 # Redirect to explore of parent page
654 return redirect('wagtailadmin_explore', parent_page.id)
655
656 return render(request, 'wagtailadmin/pages/copy.html', {
657 'page': page,
658 'form': form,
659 })
660
661
662 PAGE_EDIT_HANDLERS = {}
663
664
665 def get_page_edit_handler(page_class):
666 if page_class not in PAGE_EDIT_HANDLERS:
667 PAGE_EDIT_HANDLERS[page_class] = TabbedInterface([
668 ObjectList(page_class.content_panels, heading='Content'),
669 ObjectList(page_class.promote_panels, heading='Promote'),
670 ObjectList(page_class.settings_panels, heading='Settings', classname="settings")
671 ])
672
673 return PAGE_EDIT_HANDLERS[page_class]
674
675
676 @permission_required('wagtailadmin.access_admin')
677 @vary_on_headers('X-Requested-With')
678 def search(request):
679 pages = []
680 q = None
681 is_searching = False
682 if 'q' in request.GET:
683 form = SearchForm(request.GET)
684 if form.is_valid():
685 q = form.cleaned_data['q']
686
687 # page number
688 p = request.GET.get("p", 1)
689 is_searching = True
690 pages = Page.search(q, show_unpublished=True, search_title_only=True, prefetch_related=['content_type'])
691
692 # Pagination
693 paginator = Paginator(pages, 20)
694 try:
695 pages = paginator.page(p)
696 except PageNotAnInteger:
697 pages = paginator.page(1)
698 except EmptyPage:
699 pages = paginator.page(paginator.num_pages)
700 else:
701 form = SearchForm()
702
703 if request.is_ajax():
704 return render(request, "wagtailadmin/pages/search_results.html", {
705 'pages': pages,
706 'is_searching': is_searching,
707 'query_string': q,
708 })
709 else:
710 return render(request, "wagtailadmin/pages/search.html", {
711 'search_form': form,
712 'pages': pages,
713 'is_searching': is_searching,
714 'query_string': q,
715 })
716
717
718 @permission_required('wagtailadmin.access_admin')
719 def approve_moderation(request, revision_id):
720 revision = get_object_or_404(PageRevision, id=revision_id)
721 if not revision.page.permissions_for_user(request.user).can_publish():
722 raise PermissionDenied
723
724 if not revision.submitted_for_moderation:
725 messages.error(request, _("The page '{0}' is not currently awaiting moderation.").format(revision.page.title))
726 return redirect('wagtailadmin_home')
727
728 if request.method == 'POST':
729 revision.approve_moderation()
730 messages.success(request, _("Page '{0}' published.").format(revision.page.title))
731 tasks.send_notification.delay(revision.id, 'approved', request.user.id)
732
733 return redirect('wagtailadmin_home')
734
735
736 @permission_required('wagtailadmin.access_admin')
737 def reject_moderation(request, revision_id):
738 revision = get_object_or_404(PageRevision, id=revision_id)
739 if not revision.page.permissions_for_user(request.user).can_publish():
740 raise PermissionDenied
741
742 if not revision.submitted_for_moderation:
743 messages.error(request, _("The page '{0}' is not currently awaiting moderation.").format( revision.page.title))
744 return redirect('wagtailadmin_home')
745
746 if request.method == 'POST':
747 revision.reject_moderation()
748 messages.success(request, _("Page '{0}' rejected for publication.").format(revision.page.title))
749 tasks.send_notification.delay(revision.id, 'rejected', request.user.id)
750
751 return redirect('wagtailadmin_home')
752
753
754 @permission_required('wagtailadmin.access_admin')
755 @require_GET
756 def preview_for_moderation(request, revision_id):
757 revision = get_object_or_404(PageRevision, id=revision_id)
758 if not revision.page.permissions_for_user(request.user).can_publish():
759 raise PermissionDenied
760
761 if not revision.submitted_for_moderation:
762 messages.error(request, _("The page '{0}' is not currently awaiting moderation.").format(revision.page.title))
763 return redirect('wagtailadmin_home')
764
765 page = revision.as_page_object()
766
767 request.revision_id = revision_id
768
769 # pass in the real user request rather than page.dummy_request(), so that request.user
770 # and request.revision_id will be picked up by the wagtail user bar
771 return page.serve_preview(request, page.default_preview_mode)
772
773
774 @permission_required('wagtailadmin.access_admin')
775 @require_POST
776 def lock(request, page_id):
777 # Get the page
778 page = get_object_or_404(Page, id=page_id)
779
780 # Check permissions
781 if not page.permissions_for_user(request.user).can_lock():
782 raise PermissionDenied
783
784 # Lock the page
785 if not page.locked:
786 page.locked = True
787 page.save()
788
789 messages.success(request, _("Page '{0}' is now locked.").format(page.title))
790
791 # Redirect
792 redirect_to = request.POST.get('next', None)
793 if redirect_to and is_safe_url(url=redirect_to, host=request.get_host()):
794 return redirect(redirect_to)
795 else:
796 return redirect('wagtailadmin_explore', page.get_parent().id)
797
798
799 @permission_required('wagtailadmin.access_admin')
800 @require_POST
801 def unlock(request, page_id):
802 # Get the page
803 page = get_object_or_404(Page, id=page_id)
804
805 # Check permissions
806 if not page.permissions_for_user(request.user).can_lock():
807 raise PermissionDenied
808
809 # Unlock the page
810 if page.locked:
811 page.locked = False
812 page.save()
813
814 messages.success(request, _("Page '{0}' is now unlocked.").format(page.title))
815
816 # Redirect
817 redirect_to = request.POST.get('next', None)
818 if redirect_to and is_safe_url(url=redirect_to, host=request.get_host()):
819 return redirect(redirect_to)
820 else:
821 return redirect('wagtailadmin_explore', page.get_parent().id)
822
[end of wagtail/wagtailadmin/views/pages.py]
[start of wagtail/wagtailcore/url_routing.py]
1 class RouteResult(object):
2 """
3 An object to be returned from Page.route, which encapsulates
4 all the information necessary to serve an HTTP response. Analogous to
5 django.core.urlresolvers.ResolverMatch, except that it identifies
6 a Page instance that we will call serve(*args, **kwargs) on, rather
7 than a view function.
8 """
9 def __init__(self, page, args=None, kwargs=None):
10 self.page = page
11 self.args = args or []
12 self.kwargs = kwargs or {}
13
14 def __getitem__(self, index):
15 return (self.page, self.args, self.kwargs)[index]
16
[end of wagtail/wagtailcore/url_routing.py]
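As a brief aside, here is a hand-written sketch of how a `RouteResult` is typically consumed (not part of the file above; `homepage` and `request` stand in for a real `Page` instance and an `HttpRequest`):

```python
from wagtail.wagtailcore.url_routing import RouteResult

# A Page.route() implementation would return something like this:
result = RouteResult(homepage, kwargs={"year": "2014"})

# __getitem__ above indexes the tuple (page, args, kwargs), so the result
# can be unpacked directly before calling serve() on the matched page:
page, args, kwargs = result
response = page.serve(request, *args, **kwargs)
```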
[start of wagtail/wagtailsearch/views/editorspicks.py]
1 from django.shortcuts import render, redirect, get_object_or_404
2 from django.contrib.auth.decorators import permission_required
3 from django.contrib import messages
4
5 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
6 from django.utils.translation import ugettext as _
7 from django.views.decorators.vary import vary_on_headers
8
9 from wagtail.wagtailsearch import models, forms
10 from wagtail.wagtailadmin.forms import SearchForm
11
12
13 @permission_required('wagtailadmin.access_admin')
14 @vary_on_headers('X-Requested-With')
15 def index(request):
16 is_searching = False
17 page = request.GET.get('p', 1)
18 query_string = request.GET.get('q', "")
19
20 queries = models.Query.objects.filter(editors_picks__isnull=False).distinct()
21
22 # Search
23 if query_string:
24 queries = queries.filter(query_string__icontains=query_string)
25 is_searching = True
26
27 # Pagination
28 paginator = Paginator(queries, 20)
29 try:
30 queries = paginator.page(page)
31 except PageNotAnInteger:
32 queries = paginator.page(1)
33 except EmptyPage:
34 queries = paginator.page(paginator.num_pages)
35
36 if request.is_ajax():
37 return render(request, "wagtailsearch/editorspicks/results.html", {
38 'is_searching': is_searching,
39 'queries': queries,
40 'query_string': query_string,
41 })
42 else:
43 return render(request, 'wagtailsearch/editorspicks/index.html', {
44 'is_searching': is_searching,
45 'queries': queries,
46 'query_string': query_string,
47 'search_form': SearchForm(data=dict(q=query_string) if query_string else None, placeholder=_("Search editor's picks")),
48 })
49
50
51 def save_editorspicks(query, new_query, editors_pick_formset):
52 # Save
53 if editors_pick_formset.is_valid():
54 # Set sort_order
55 for i, form in enumerate(editors_pick_formset.ordered_forms):
56 form.instance.sort_order = i
57
58 # Make sure the form is marked as changed so it gets saved with the new order
59 form.has_changed = lambda: True
60
61 editors_pick_formset.save()
62
63 # If query was changed, move all editors picks to the new query
64 if query != new_query:
65 editors_pick_formset.get_queryset().update(query=new_query)
66
67 return True
68 else:
69 return False
70
71
72 @permission_required('wagtailadmin.access_admin')
73 def add(request):
74 if request.POST:
75 # Get query
76 query_form = forms.QueryForm(request.POST)
77 if query_form.is_valid():
78 query = models.Query.get(query_form['query_string'].value())
79
80 # Save editors picks
81 editors_pick_formset = forms.EditorsPickFormSet(request.POST, instance=query)
82 if save_editorspicks(query, query, editors_pick_formset):
83 messages.success(request, _("Editor's picks for '{0}' created.").format(query))
84 return redirect('wagtailsearch_editorspicks_index')
85 else:
86 if len(editors_pick_formset.non_form_errors()):
87 messages.error(request, " ".join(error for error in editors_pick_formset.non_form_errors())) # formset level error (e.g. no forms submitted)
88 else:
89 messages.error(request, _("Recommendations have not been created due to errors")) # specific errors will be displayed within form fields
90 else:
91 editors_pick_formset = forms.EditorsPickFormSet()
92 else:
93 query_form = forms.QueryForm()
94 editors_pick_formset = forms.EditorsPickFormSet()
95
96 return render(request, 'wagtailsearch/editorspicks/add.html', {
97 'query_form': query_form,
98 'editors_pick_formset': editors_pick_formset,
99 })
100
101
102 @permission_required('wagtailadmin.access_admin')
103 def edit(request, query_id):
104 query = get_object_or_404(models.Query, id=query_id)
105
106 if request.POST:
107 # Get query
108 query_form = forms.QueryForm(request.POST)
109 # and the recommendations
110 editors_pick_formset = forms.EditorsPickFormSet(request.POST, instance=query)
111
112 if query_form.is_valid():
113 new_query = models.Query.get(query_form['query_string'].value())
114
115 # Save editors picks
116 if save_editorspicks(query, new_query, editors_pick_formset):
117 messages.success(request, _("Editor's picks for '{0}' updated.").format(new_query))
118 return redirect('wagtailsearch_editorspicks_index')
119 else:
120 if len(editors_pick_formset.non_form_errors()):
121 messages.error(request, " ".join(error for error in editors_pick_formset.non_form_errors())) # formset level error (e.g. no forms submitted)
122 else:
123 messages.error(request, _("Recommendations have not been saved due to errors")) # specific errors will be displayed within form fields
124
125 else:
126 query_form = forms.QueryForm(initial=dict(query_string=query.query_string))
127 editors_pick_formset = forms.EditorsPickFormSet(instance=query)
128
129 return render(request, 'wagtailsearch/editorspicks/edit.html', {
130 'query_form': query_form,
131 'editors_pick_formset': editors_pick_formset,
132 'query': query,
133 })
134
135
136 @permission_required('wagtailadmin.access_admin')
137 def delete(request, query_id):
138 query = get_object_or_404(models.Query, id=query_id)
139
140 if request.POST:
141 query.editors_picks.all().delete()
142 messages.success(request, _("Editor's picks deleted."))
143 return redirect('wagtailsearch_editorspicks_index')
144
145 return render(request, 'wagtailsearch/editorspicks/confirm_delete.html', {
146 'query': query,
147 })
148
[end of wagtail/wagtailsearch/views/editorspicks.py]
[start of wagtail/wagtailusers/forms.py]
1 from django import forms
2 from django.contrib.auth.forms import UserCreationForm as BaseUserCreationForm
3 from django.utils.translation import ugettext_lazy as _
4 from django.contrib.auth import get_user_model
5 from django.contrib.auth.models import Group, Permission
6
7 from wagtail.wagtailcore import hooks
8 from wagtail.wagtailusers.models import UserProfile
9 from wagtail.wagtailcore.models import UserPagePermissionsProxy, GroupPagePermission
10
11
12 User = get_user_model()
13
14
15 # extend Django's UserCreationForm with an 'is_superuser' field
16 class UserCreationForm(BaseUserCreationForm):
17
18 required_css_class = "required"
19 is_superuser = forms.BooleanField(
20 label=_("Administrator"),
21 required=False,
22 help_text=_("If ticked, this user has the ability to manage user accounts.")
23 )
24
25 email = forms.EmailField(required=True, label=_("Email"))
26 first_name = forms.CharField(required=True, label=_("First Name"))
27 last_name = forms.CharField(required=True, label=_("Last Name"))
28
29 class Meta:
30 model = User
31 fields = ("username", "email", "first_name", "last_name", "is_superuser", "groups")
32 widgets = {
33 'groups': forms.CheckboxSelectMultiple
34 }
35
36 def clean_username(self):
37 # Method copied from parent
38
39 username = self.cleaned_data["username"]
40 try:
41 # When called from BaseUserCreationForm, the method fails if using a custom AUTH_USER_MODEL.
42 # This is because the following line tries to perform a lookup on
43 # the default "auth_user" table.
44 User._default_manager.get(username=username)
45 except User.DoesNotExist:
46 return username
47 raise forms.ValidationError(
48 self.error_messages['duplicate_username'],
49 code='duplicate_username',
50 )
51
52 def save(self, commit=True):
53 user = super(UserCreationForm, self).save(commit=False)
54
55 # users can access django-admin iff they are a superuser
56 user.is_staff = user.is_superuser
57
58 if commit:
59 user.save()
60 self.save_m2m()
61 return user
62
63
64 # Largely the same as django.contrib.auth.forms.UserCreationForm, but with enough subtle changes
65 # (to make password non-required) that it isn't worth inheriting...
66 class UserEditForm(forms.ModelForm):
67 required_css_class = "required"
68
69 error_messages = {
70 'duplicate_username': _("A user with that username already exists."),
71 'password_mismatch': _("The two password fields didn't match."),
72 }
73 username = forms.RegexField(
74 label=_("Username"),
75 max_length=30,
76 regex=r'^[\w.@+-]+$',
77 help_text=_("Required. 30 characters or fewer. Letters, digits and @/./+/-/_ only."),
78 error_messages={
79 'invalid': _("This value may contain only letters, numbers and @/./+/-/_ characters.")
80 })
81
82 email = forms.EmailField(required=True, label=_("Email"))
83 first_name = forms.CharField(required=True, label=_("First Name"))
84 last_name = forms.CharField(required=True, label=_("Last Name"))
85
86 password1 = forms.CharField(
87 label=_("Password"),
88 required=False,
89 widget=forms.PasswordInput,
90 help_text=_("Leave blank if not changing."))
91 password2 = forms.CharField(
92 label=_("Password confirmation"), required=False,
93 widget=forms.PasswordInput,
94 help_text=_("Enter the same password as above, for verification."))
95
96 is_superuser = forms.BooleanField(
97 label=_("Administrator"),
98 required=False,
99 help_text=_("Administrators have the ability to manage user accounts.")
100 )
101
102 class Meta:
103 model = User
104 fields = ("username", "email", "first_name", "last_name", "is_active", "is_superuser", "groups")
105 widgets = {
106 'groups': forms.CheckboxSelectMultiple
107 }
108
109 def clean_username(self):
110 # Since User.username is unique, this check is redundant,
111 # but it sets a nicer error message than the ORM. See #13147.
112 username = self.cleaned_data["username"]
113 try:
114 User._default_manager.exclude(id=self.instance.id).get(username=username)
115 except User.DoesNotExist:
116 return username
117 raise forms.ValidationError(self.error_messages['duplicate_username'])
118
119 def clean_password2(self):
120 password1 = self.cleaned_data.get("password1")
121 password2 = self.cleaned_data.get("password2")
122 if password1 != password2:
123 raise forms.ValidationError(
124 self.error_messages['password_mismatch'])
125 return password2
126
127 def save(self, commit=True):
128 user = super(UserEditForm, self).save(commit=False)
129
130 # users can access django-admin iff they are a superuser
131 user.is_staff = user.is_superuser
132
133 if self.cleaned_data["password1"]:
134 user.set_password(self.cleaned_data["password1"])
135 if commit:
136 user.save()
137 self.save_m2m()
138 return user
139
140
141 class GroupForm(forms.ModelForm):
142 def __init__(self, *args, **kwargs):
143 super(GroupForm, self).__init__(*args, **kwargs)
144 self.registered_permissions = Permission.objects.none()
145 for fn in hooks.get_hooks('register_permissions'):
146 self.registered_permissions = self.registered_permissions | fn()
147 self.fields['permissions'].queryset = self.registered_permissions
148
149 required_css_class = "required"
150
151 error_messages = {
152 'duplicate_name': _("A group with that name already exists."),
153 }
154
155 is_superuser = forms.BooleanField(
156 label=_("Administrator"),
157 required=False,
158 help_text=_("Administrators have the ability to manage user accounts.")
159 )
160
161 class Meta:
162 model = Group
163 fields = ("name", "permissions", )
164
165 def clean_name(self):
166 # Since Group.name is unique, this check is redundant,
167 # but it sets a nicer error message than the ORM. See #13147.
168 name = self.cleaned_data["name"]
169 try:
170 Group._default_manager.exclude(id=self.instance.id).get(name=name)
171 except Group.DoesNotExist:
172 return name
173 raise forms.ValidationError(self.error_messages['duplicate_name'])
174
175 def save(self):
176 # We go back to the object to read (in order to reapply) the
177 # permissions which were set on this group, but which are not
178 # accessible in the wagtail admin interface, as otherwise these would
179 # be clobbered by this form.
180 try:
181 untouchable_permissions = self.instance.permissions.exclude(pk__in=self.registered_permissions)
182 bool(untouchable_permissions) # force this to be evaluated, as it's about to change
183 except ValueError:
184 # this form is not bound; we're probably creating a new group
185 untouchable_permissions = []
186 group = super(GroupForm, self).save()
187 group.permissions.add(*untouchable_permissions)
188 return group
189
190
191 class GroupPagePermissionForm(forms.ModelForm):
192 def __init__(self, *args, **kwargs):
193 super(GroupPagePermissionForm, self).__init__(*args, **kwargs)
194 self.fields['page'].widget = forms.HiddenInput()
195
196 class Meta:
197 model = GroupPagePermission
198 fields = ('page', 'permission_type')
199
200
201 class BaseGroupPagePermissionFormSet(forms.models.BaseInlineFormSet):
202 def __init__(self, *args, **kwargs):
203 super(BaseGroupPagePermissionFormSet, self).__init__(*args, **kwargs)
204 self.form = GroupPagePermissionForm
205 for form in self.forms:
206 form.fields['DELETE'].widget = forms.HiddenInput()
207
208 @property
209 def empty_form(self):
210 empty_form = super(BaseGroupPagePermissionFormSet, self).empty_form
211 empty_form.fields['DELETE'].widget = forms.HiddenInput()
212 return empty_form
213
214
215 class NotificationPreferencesForm(forms.ModelForm):
216 def __init__(self, *args, **kwargs):
217 super(NotificationPreferencesForm, self).__init__(*args, **kwargs)
218 user_perms = UserPagePermissionsProxy(self.instance.user)
219 if not user_perms.can_publish_pages():
220 del self.fields['submitted_notifications']
221 if not user_perms.can_edit_pages():
222 del self.fields['approved_notifications']
223 del self.fields['rejected_notifications']
224
225 class Meta:
226 model = UserProfile
227 fields = ("submitted_notifications", "approved_notifications", "rejected_notifications")
228
[end of wagtail/wagtailusers/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
d633cbfd6c7a39fd0e9a6d37abe05739c09c18fd
|
Update all admin actions to work on the specific version of the page
Currently, the unpublish, delete, etc. actions all work on the generic `Page` class, so save/delete/clean methods are called on `Page` rather than on the specific class. There are a couple of downsides to this:
- If someone has overridden the save/clean/delete methods on their Page subclass, these would not be called
- A post/pre save/delete signal hooked to a specific class will not be called (instead, the signals on Page would be called)
- In search, the above issue will make the page be indexed twice (and in #714, it will not be reindexed at all)
I think we should update these actions to work on the specific object rather than the Page object
|
2014-10-23T08:35:14Z
|
<patch>
diff --git a/wagtail/wagtailadmin/views/pages.py b/wagtail/wagtailadmin/views/pages.py
--- a/wagtail/wagtailadmin/views/pages.py
+++ b/wagtail/wagtailadmin/views/pages.py
@@ -363,7 +363,7 @@ def clean():
@permission_required('wagtailadmin.access_admin')
def delete(request, page_id):
- page = get_object_or_404(Page, id=page_id)
+ page = get_object_or_404(Page, id=page_id).specific
if not page.permissions_for_user(request.user).can_delete():
raise PermissionDenied
@@ -500,7 +500,7 @@ def preview_loading(request):
@permission_required('wagtailadmin.access_admin')
def unpublish(request, page_id):
- page = get_object_or_404(Page, id=page_id)
+ page = get_object_or_404(Page, id=page_id).specific
if not page.permissions_for_user(request.user).can_unpublish():
raise PermissionDenied
@@ -548,7 +548,7 @@ def move_choose_destination(request, page_to_move_id, viewed_page_id=None):
@permission_required('wagtailadmin.access_admin')
def move_confirm(request, page_to_move_id, destination_id):
- page_to_move = get_object_or_404(Page, id=page_to_move_id)
+ page_to_move = get_object_or_404(Page, id=page_to_move_id).specific
destination = get_object_or_404(Page, id=destination_id)
if not page_to_move.permissions_for_user(request.user).can_move_to(destination):
raise PermissionDenied
@@ -775,7 +775,7 @@ def preview_for_moderation(request, revision_id):
@require_POST
def lock(request, page_id):
# Get the page
- page = get_object_or_404(Page, id=page_id)
+ page = get_object_or_404(Page, id=page_id).specific
# Check permissions
if not page.permissions_for_user(request.user).can_lock():
@@ -800,7 +800,7 @@ def lock(request, page_id):
@require_POST
def unlock(request, page_id):
# Get the page
- page = get_object_or_404(Page, id=page_id)
+ page = get_object_or_404(Page, id=page_id).specific
# Check permissions
if not page.permissions_for_user(request.user).can_lock():
</patch>
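A minimal sketch of why the patch above switches these views from the base `Page` instance to `page.specific` (illustrative only; `BlogPage` is a hypothetical `Page` subclass that overrides `save()`, and `page_id` is assumed to exist):

```python
from wagtail.wagtailcore.models import Page

page = Page.objects.get(id=page_id)   # plain Page instance
page.save()                           # runs Page.save(); BlogPage.save() is NOT run,
                                      # and post_save fires with sender=Page

specific_page = page.specific         # re-fetches the row as its most specific
                                      # subclass, e.g. a BlogPage instance
specific_page.save()                  # runs BlogPage.save(); post_save now fires
                                      # with sender=BlogPage
```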
|
[]
|
[]
| ||||
ipython__ipython-11752
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BackgroundJobManager: wrong job.num values are assigned after flush call
Hello, there is a problem with how `job.num` is calculated.
### What I do:
I try to run the following code
```python
from IPython.lib import backgroundjobs as bg
def forever():
while True:
pass
return 'Done'
def once():
return 'Done'
jm = bg.BackgroundJobManager()
jm.new(forever)
jm.new(once)
jm.new(once)
jm.new(forever)
jm.flush()
jm.new(forever)
jm.new(once)
jm.new(once)
jm.new(once)
jm.flush()
```
### What is wrong
I got an exception when calling `jm.flush()` the second time:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-35-a03db7172b42> in <module>
19 jm.new(once)
20 jm.new(once)
---> 21 jm.flush()
~/.virtualenvs/ipython3/lib/python3.7/site-packages/IPython/lib/backgroundjobs.py in flush(self)
320 alljobs = self.all
321 for job in self.completed+self.dead:
--> 322 del(alljobs[job.num])
323
324 # Now flush these lists completely
KeyError: 4
```
### My guess
It is connected to the way job.num is calculated here: https://github.com/ipython/ipython/blob/master/IPython/lib/backgroundjobs.py#L190
```python
# when we call flush the first time in the example above, len(self.all) becomes 2
job.num = len(self.all)+1 if self.all else 0
self.running.append(job)
# and now we count from 2 even though we already have a running job with id=4
self.all[job.num] = job
```
Also, there is a second bug in the calculation:
we can never create a job with `id=1`, because the value is 0 if `self.all` is empty and 1 + 1 = 2 if there is exactly one job in `self.all`
### How to fix
My suggestion is to create a new instance field `self._current_job_id = 0` and increment it on every new job creation.
With that, we get a counter that is tied only to the BackgroundJobManager instance, not to the current number of jobs.
If this issue is confirmed, I will probably create a fix myself.
</issue>
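A minimal sketch of the counter-based fix suggested in the issue above (the field name `_current_job_id` comes from the issue itself; its exact placement is an assumption and the real patch may differ):

```python
class BackgroundJobManager(object):
    def __init__(self):
        ...
        # Monotonically increasing job id, independent of how many jobs
        # are currently held in self.all.
        self._current_job_id = 0

    def new(self, func_or_exp, *args, **kwargs):
        ...
        job.num = self._current_job_id
        self._current_job_id += 1
        self.running.append(job)
        self.all[job.num] = job
```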
<code>
[start of README.rst]
1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=master
2 :target: https://codecov.io/github/ipython/ipython?branch=master
3
4 .. image:: https://img.shields.io/pypi/v/IPython.svg
5 :target: https://pypi.python.org/pypi/ipython
6
7 .. image:: https://img.shields.io/travis/ipython/ipython.svg
8 :target: https://travis-ci.org/ipython/ipython
9
10 .. image:: https://www.codetriage.com/ipython/ipython/badges/users.svg
11 :target: https://www.codetriage.com/ipython/ipython/
12
13 ===========================================
14 IPython: Productive Interactive Computing
15 ===========================================
16
17 Overview
18 ========
19
20 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io
21 <https://ipython.readthedocs.io/en/stable/>`_ and contains information on how to install, use, and
22 contribute to the project.
23
24 **IPython versions and Python Support**
25
26 **IPython 7.0** requires Python version 3.5 and above.
27
28 **IPython 6.x** requires Python version 3.3 and above.
29
30 **IPython 5.x LTS** is the compatible release for Python 2.7.
31 If you require Python 2 support, you **must** use IPython 5.x LTS. Please
32 update your project configurations and requirements as necessary.
33
34
35 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*.
36 See the `Jupyter installation docs <https://jupyter.readthedocs.io/en/latest/install.html>`__
37 if you want to use these.
38
39
40
41
42 Development and Instant running
43 ===============================
44
45 You can find the latest version of the development documentation on `readthedocs
46 <https://ipython.readthedocs.io/en/latest/>`_.
47
48 You can run IPython from this directory without even installing it system-wide
49 by typing at the terminal::
50
51 $ python -m IPython
52
53 Or see the `development installation docs
54 <https://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_
55 for the latest revision on read the docs.
56
57 Documentation and installation instructions for older version of IPython can be
58 found on the `IPython website <https://ipython.org/documentation.html>`_
59
60
61
62 IPython requires Python version 3 or above
63 ==========================================
64
65 Starting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or
66 3.2.
67
68 For a version compatible with Python 2.7, please install the 5.x LTS Long Term
69 Support version.
70
71 If you are encountering this error message you are likely trying to install or
72 use IPython from source. You need to checkout the remote 5.x branch. If you are
73 using git the following should work::
74
75 $ git fetch origin
76 $ git checkout 5.x
77
78 If you encounter this error message with a regular install of IPython, then you
79 likely need to update your package manager, for example if you are using `pip`
80 check the version of pip with::
81
82 $ pip --version
83
84 You will need to update pip to the version 9.0.1 or greater. If you are not using
85 pip, please inquiry with the maintainers of the package for your package
86 manager.
87
88 For more information see one of our blog posts:
89
90 https://blog.jupyter.org/release-of-ipython-5-0-8ce60b8d2e8e
91
92 As well as the following Pull-Request for discussion:
93
94 https://github.com/ipython/ipython/pull/9900
95
96 This error also occurs if you are invoking ``setup.py`` directly (which you
97 should not) or are using ``easy_install``. If this is the case, use ``pip
98 install .`` instead of ``setup.py install``, and ``pip install -e .`` instead
99 of ``setup.py develop``. If you are depending on IPython as a dependency you may
100 also want to have a conditional dependency on IPython depending on the Python
101 version::
102
103 install_req = ['ipython']
104 if sys.version_info[0] < 3 and 'bdist_wheel' not in sys.argv:
105 install_req.remove('ipython')
106 install_req.append('ipython<6')
107
108 setup(
109 ...
110 install_requires=install_req
111 )
112
[end of README.rst]
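As a side note, the version-conditional dependency that the README above expresses with a runtime ``sys.version_info`` check can also be written with PEP 508 environment markers, which lets pip evaluate the condition at install time (a sketch, not taken from the README):

```python
from setuptools import setup

setup(
    name="myproject",  # hypothetical project name
    install_requires=[
        'ipython<6 ; python_version < "3"',
        'ipython ; python_version >= "3"',
    ],
)
```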
[start of IPython/core/magics/namespace.py]
1 """Implementation of namespace-related magic functions.
2 """
3 #-----------------------------------------------------------------------------
4 # Copyright (c) 2012 The IPython Development Team.
5 #
6 # Distributed under the terms of the Modified BSD License.
7 #
8 # The full license is in the file COPYING.txt, distributed with this software.
9 #-----------------------------------------------------------------------------
10
11 #-----------------------------------------------------------------------------
12 # Imports
13 #-----------------------------------------------------------------------------
14
15 # Stdlib
16 import gc
17 import re
18 import sys
19
20 # Our own packages
21 from IPython.core import page
22 from IPython.core.error import StdinNotImplementedError, UsageError
23 from IPython.core.magic import Magics, magics_class, line_magic
24 from IPython.testing.skipdoctest import skip_doctest
25 from IPython.utils.encoding import DEFAULT_ENCODING
26 from IPython.utils.openpy import read_py_file
27 from IPython.utils.path import get_py_filename
28
29 #-----------------------------------------------------------------------------
30 # Magic implementation classes
31 #-----------------------------------------------------------------------------
32
33 @magics_class
34 class NamespaceMagics(Magics):
35 """Magics to manage various aspects of the user's namespace.
36
37 These include listing variables, introspecting into them, etc.
38 """
39
40 @line_magic
41 def pinfo(self, parameter_s='', namespaces=None):
42 """Provide detailed information about an object.
43
44 '%pinfo object' is just a synonym for object? or ?object."""
45
46 #print 'pinfo par: <%s>' % parameter_s # dbg
47 # detail_level: 0 -> obj? , 1 -> obj??
48 detail_level = 0
49 # We need to detect if we got called as 'pinfo pinfo foo', which can
50 # happen if the user types 'pinfo foo?' at the cmd line.
51 pinfo,qmark1,oname,qmark2 = \
52 re.match(r'(pinfo )?(\?*)(.*?)(\??$)',parameter_s).groups()
53 if pinfo or qmark1 or qmark2:
54 detail_level = 1
55 if "*" in oname:
56 self.psearch(oname)
57 else:
58 self.shell._inspect('pinfo', oname, detail_level=detail_level,
59 namespaces=namespaces)
60
61 @line_magic
62 def pinfo2(self, parameter_s='', namespaces=None):
63 """Provide extra detailed information about an object.
64
65 '%pinfo2 object' is just a synonym for object?? or ??object."""
66 self.shell._inspect('pinfo', parameter_s, detail_level=1,
67 namespaces=namespaces)
68
69 @skip_doctest
70 @line_magic
71 def pdef(self, parameter_s='', namespaces=None):
72 """Print the call signature for any callable object.
73
74 If the object is a class, print the constructor information.
75
76 Examples
77 --------
78 ::
79
80 In [3]: %pdef urllib.urlopen
81 urllib.urlopen(url, data=None, proxies=None)
82 """
83 self.shell._inspect('pdef',parameter_s, namespaces)
84
85 @line_magic
86 def pdoc(self, parameter_s='', namespaces=None):
87 """Print the docstring for an object.
88
89 If the given object is a class, it will print both the class and the
90 constructor docstrings."""
91 self.shell._inspect('pdoc',parameter_s, namespaces)
92
93 @line_magic
94 def psource(self, parameter_s='', namespaces=None):
95 """Print (or run through pager) the source code for an object."""
96 if not parameter_s:
97 raise UsageError('Missing object name.')
98 self.shell._inspect('psource',parameter_s, namespaces)
99
100 @line_magic
101 def pfile(self, parameter_s='', namespaces=None):
102 """Print (or run through pager) the file where an object is defined.
103
104 The file opens at the line where the object definition begins. IPython
105 will honor the environment variable PAGER if set, and otherwise will
106 do its best to print the file in a convenient form.
107
108 If the given argument is not an object currently defined, IPython will
109 try to interpret it as a filename (automatically adding a .py extension
110 if needed). You can thus use %pfile as a syntax highlighting code
111 viewer."""
112
113 # first interpret argument as an object name
114 out = self.shell._inspect('pfile',parameter_s, namespaces)
115 # if not, try the input as a filename
116 if out == 'not found':
117 try:
118 filename = get_py_filename(parameter_s)
119 except IOError as msg:
120 print(msg)
121 return
122 page.page(self.shell.pycolorize(read_py_file(filename, skip_encoding_cookie=False)))
123
124 @line_magic
125 def psearch(self, parameter_s=''):
126 """Search for object in namespaces by wildcard.
127
128 %psearch [options] PATTERN [OBJECT TYPE]
129
130 Note: ? can be used as a synonym for %psearch, at the beginning or at
131 the end: both a*? and ?a* are equivalent to '%psearch a*'. Still, the
132 rest of the command line must be unchanged (options come first), so
133 for example the following forms are equivalent
134
135 %psearch -i a* function
136 -i a* function?
137 ?-i a* function
138
139 Arguments:
140
141 PATTERN
142
143 where PATTERN is a string containing * as a wildcard similar to its
144 use in a shell. The pattern is matched in all namespaces on the
145 search path. By default objects starting with a single _ are not
146 matched, many IPython generated objects have a single
147 underscore. The default is case insensitive matching. Matching is
148 also done on the attributes of objects and not only on the objects
149 in a module.
150
151 [OBJECT TYPE]
152
153 Is the name of a python type from the types module. The name is
154 given in lowercase without the ending type, ex. StringType is
155 written string. By adding a type here only objects matching the
156 given type are matched. Using all here makes the pattern match all
157 types (this is the default).
158
159 Options:
160
161 -a: makes the pattern match even objects whose names start with a
162 single underscore. These names are normally omitted from the
163 search.
164
165 -i/-c: make the pattern case insensitive/sensitive. If neither of
166 these options are given, the default is read from your configuration
167 file, with the option ``InteractiveShell.wildcards_case_sensitive``.
168 If this option is not specified in your configuration file, IPython's
169 internal default is to do a case sensitive search.
170
171 -e/-s NAMESPACE: exclude/search a given namespace. The pattern you
172 specify can be searched in any of the following namespaces:
173 'builtin', 'user', 'user_global','internal', 'alias', where
174 'builtin' and 'user' are the search defaults. Note that you should
175 not use quotes when specifying namespaces.
176
177 'Builtin' contains the python module builtin, 'user' contains all
178 user data, 'alias' only contain the shell aliases and no python
179 objects, 'internal' contains objects used by IPython. The
180 'user_global' namespace is only used by embedded IPython instances,
181 and it contains module-level globals. You can add namespaces to the
182 search with -s or exclude them with -e (these options can be given
183 more than once).
184
185 Examples
186 --------
187 ::
188
189 %psearch a* -> objects beginning with an a
190 %psearch -e builtin a* -> objects NOT in the builtin space starting in a
191 %psearch a* function -> all functions beginning with an a
192 %psearch re.e* -> objects beginning with an e in module re
193 %psearch r*.e* -> objects that start with e in modules starting in r
194 %psearch r*.* string -> all strings in modules beginning with r
195
196 Case sensitive search::
197
198 %psearch -c a* list all objects beginning with lower case a
199
200 Show objects beginning with a single _::
201
202 %psearch -a _* list objects beginning with a single underscore
203 """
204 try:
205 parameter_s.encode('ascii')
206 except UnicodeEncodeError:
207 print('Python identifiers can only contain ascii characters.')
208 return
209
210 # default namespaces to be searched
211 def_search = ['user_local', 'user_global', 'builtin']
212
213 # Process options/args
214 opts,args = self.parse_options(parameter_s,'cias:e:',list_all=True)
215 opt = opts.get
216 shell = self.shell
217 psearch = shell.inspector.psearch
218
219 # select case options
220 if 'i' in opts:
221 ignore_case = True
222 elif 'c' in opts:
223 ignore_case = False
224 else:
225 ignore_case = not shell.wildcards_case_sensitive
226
227 # Build list of namespaces to search from user options
228 def_search.extend(opt('s',[]))
229 ns_exclude = ns_exclude=opt('e',[])
230 ns_search = [nm for nm in def_search if nm not in ns_exclude]
231
232 # Call the actual search
233 try:
234 psearch(args,shell.ns_table,ns_search,
235 show_all=opt('a'),ignore_case=ignore_case)
236 except:
237 shell.showtraceback()
238
239 @skip_doctest
240 @line_magic
241 def who_ls(self, parameter_s=''):
242 """Return a sorted list of all interactive variables.
243
244 If arguments are given, only variables of types matching these
245 arguments are returned.
246
247 Examples
248 --------
249
250 Define two variables and list them with who_ls::
251
252 In [1]: alpha = 123
253
254 In [2]: beta = 'test'
255
256 In [3]: %who_ls
257 Out[3]: ['alpha', 'beta']
258
259 In [4]: %who_ls int
260 Out[4]: ['alpha']
261
262 In [5]: %who_ls str
263 Out[5]: ['beta']
264 """
265
266 user_ns = self.shell.user_ns
267 user_ns_hidden = self.shell.user_ns_hidden
268 nonmatching = object() # This can never be in user_ns
269 out = [ i for i in user_ns
270 if not i.startswith('_') \
271 and (user_ns[i] is not user_ns_hidden.get(i, nonmatching)) ]
272
273 typelist = parameter_s.split()
274 if typelist:
275 typeset = set(typelist)
276 out = [i for i in out if type(user_ns[i]).__name__ in typeset]
277
278 out.sort()
279 return out
280
281 @skip_doctest
282 @line_magic
283 def who(self, parameter_s=''):
284 """Print all interactive variables, with some minimal formatting.
285
286 If any arguments are given, only variables whose type matches one of
287 these are printed. For example::
288
289 %who function str
290
291 will only list functions and strings, excluding all other types of
292 variables. To find the proper type names, simply use type(var) at a
293 command line to see how python prints type names. For example:
294
295 ::
296
297 In [1]: type('hello')\\
298 Out[1]: <type 'str'>
299
300 indicates that the type name for strings is 'str'.
301
302 ``%who`` always excludes executed names loaded through your configuration
303 file and things which are internal to IPython.
304
305 This is deliberate, as typically you may load many modules and the
306 purpose of %who is to show you only what you've manually defined.
307
308 Examples
309 --------
310
311 Define two variables and list them with who::
312
313 In [1]: alpha = 123
314
315 In [2]: beta = 'test'
316
317 In [3]: %who
318 alpha beta
319
320 In [4]: %who int
321 alpha
322
323 In [5]: %who str
324 beta
325 """
326
327 varlist = self.who_ls(parameter_s)
328 if not varlist:
329 if parameter_s:
330 print('No variables match your requested type.')
331 else:
332 print('Interactive namespace is empty.')
333 return
334
335 # if we have variables, move on...
336 count = 0
337 for i in varlist:
338 print(i+'\t', end=' ')
339 count += 1
340 if count > 8:
341 count = 0
342 print()
343 print()
344
345 @skip_doctest
346 @line_magic
347 def whos(self, parameter_s=''):
348 """Like %who, but gives some extra information about each variable.
349
350 The same type filtering of %who can be applied here.
351
352 For all variables, the type is printed. Additionally it prints:
353
354 - For {},[],(): their length.
355
356 - For numpy arrays, a summary with shape, number of
357 elements, typecode and size in memory.
358
359 - Everything else: a string representation, snipping their middle if
360 too long.
361
362 Examples
363 --------
364
365 Define two variables and list them with whos::
366
367 In [1]: alpha = 123
368
369 In [2]: beta = 'test'
370
371 In [3]: %whos
372 Variable Type Data/Info
373 --------------------------------
374 alpha int 123
375 beta str test
376 """
377
378 varnames = self.who_ls(parameter_s)
379 if not varnames:
380 if parameter_s:
381 print('No variables match your requested type.')
382 else:
383 print('Interactive namespace is empty.')
384 return
385
386 # if we have variables, move on...
387
388 # for these types, show len() instead of data:
389 seq_types = ['dict', 'list', 'tuple']
390
391 # for numpy arrays, display summary info
392 ndarray_type = None
393 if 'numpy' in sys.modules:
394 try:
395 from numpy import ndarray
396 except ImportError:
397 pass
398 else:
399 ndarray_type = ndarray.__name__
400
401 # Find all variable names and types so we can figure out column sizes
402
403 # some types are well known and can be shorter
404 abbrevs = {'IPython.core.macro.Macro' : 'Macro'}
405 def type_name(v):
406 tn = type(v).__name__
407 return abbrevs.get(tn,tn)
408
409 varlist = [self.shell.user_ns[n] for n in varnames]
410
411 typelist = []
412 for vv in varlist:
413 tt = type_name(vv)
414
415 if tt=='instance':
416 typelist.append( abbrevs.get(str(vv.__class__),
417 str(vv.__class__)))
418 else:
419 typelist.append(tt)
420
421 # column labels and # of spaces as separator
422 varlabel = 'Variable'
423 typelabel = 'Type'
424 datalabel = 'Data/Info'
425 colsep = 3
426 # variable format strings
427 vformat = "{0:<{varwidth}}{1:<{typewidth}}"
428 aformat = "%s: %s elems, type `%s`, %s bytes"
429 # find the size of the columns to format the output nicely
430 varwidth = max(max(map(len,varnames)), len(varlabel)) + colsep
431 typewidth = max(max(map(len,typelist)), len(typelabel)) + colsep
432 # table header
433 print(varlabel.ljust(varwidth) + typelabel.ljust(typewidth) + \
434 ' '+datalabel+'\n' + '-'*(varwidth+typewidth+len(datalabel)+1))
435 # and the table itself
436 kb = 1024
437 Mb = 1048576 # kb**2
438 for vname,var,vtype in zip(varnames,varlist,typelist):
439 print(vformat.format(vname, vtype, varwidth=varwidth, typewidth=typewidth), end=' ')
440 if vtype in seq_types:
441 print("n="+str(len(var)))
442 elif vtype == ndarray_type:
443 vshape = str(var.shape).replace(',','').replace(' ','x')[1:-1]
444 if vtype==ndarray_type:
445 # numpy
446 vsize = var.size
447 vbytes = vsize*var.itemsize
448 vdtype = var.dtype
449
450 if vbytes < 100000:
451 print(aformat % (vshape, vsize, vdtype, vbytes))
452 else:
453 print(aformat % (vshape, vsize, vdtype, vbytes), end=' ')
454 if vbytes < Mb:
455 print('(%s kb)' % (vbytes/kb,))
456 else:
457 print('(%s Mb)' % (vbytes/Mb,))
458 else:
459 try:
460 vstr = str(var)
461 except UnicodeEncodeError:
462 vstr = var.encode(DEFAULT_ENCODING,
463 'backslashreplace')
464 except:
465 vstr = "<object with id %d (str() failed)>" % id(var)
466 vstr = vstr.replace('\n', '\\n')
467 if len(vstr) < 50:
468 print(vstr)
469 else:
470 print(vstr[:25] + "<...>" + vstr[-25:])
471
472 @line_magic
473 def reset(self, parameter_s=''):
474 """Resets the namespace by removing all names defined by the user, if
475 called without arguments, or by removing some types of objects, such
476 as everything currently in IPython's In[] and Out[] containers (see
477 the parameters for details).
478
479 Parameters
480 ----------
481 -f : force reset without asking for confirmation.
482
483 -s : 'Soft' reset: Only clears your namespace, leaving history intact.
484 References to objects may be kept. By default (without this option),
485 we do a 'hard' reset, giving you a new session and removing all
486 references to objects from the current session.
487
488 in : reset input history
489
490 out : reset output history
491
492 dhist : reset directory history
493
494 array : reset only variables that are NumPy arrays
495
496 See Also
497 --------
498 reset_selective : invoked as ``%reset_selective``
499
500 Examples
501 --------
502 ::
503
504 In [6]: a = 1
505
506 In [7]: a
507 Out[7]: 1
508
509 In [8]: 'a' in _ip.user_ns
510 Out[8]: True
511
512 In [9]: %reset -f
513
514 In [1]: 'a' in _ip.user_ns
515 Out[1]: False
516
517 In [2]: %reset -f in
518 Flushing input history
519
520 In [3]: %reset -f dhist in
521 Flushing directory history
522 Flushing input history
523
524 Notes
525 -----
526 Calling this magic from clients that do not implement standard input,
527 such as the ipython notebook interface, will reset the namespace
528 without confirmation.
529 """
530 opts, args = self.parse_options(parameter_s,'sf', mode='list')
531 if 'f' in opts:
532 ans = True
533 else:
534 try:
535 ans = self.shell.ask_yes_no(
536 "Once deleted, variables cannot be recovered. Proceed (y/[n])?",
537 default='n')
538 except StdinNotImplementedError:
539 ans = True
540 if not ans:
541 print('Nothing done.')
542 return
543
544 if 's' in opts: # Soft reset
545 user_ns = self.shell.user_ns
546 for i in self.who_ls():
547 del(user_ns[i])
548 elif len(args) == 0: # Hard reset
549 self.shell.reset(new_session = False)
550
551 # reset in/out/dhist/array: previously extensions/clearcmd.py
552 ip = self.shell
553 user_ns = self.shell.user_ns # local lookup, heavily used
554
555 for target in args:
556 target = target.lower() # make matches case insensitive
557 if target == 'out':
558 print("Flushing output cache (%d entries)" % len(user_ns['_oh']))
559 self.shell.displayhook.flush()
560
561 elif target == 'in':
562 print("Flushing input history")
563 pc = self.shell.displayhook.prompt_count + 1
564 for n in range(1, pc):
565 key = '_i'+repr(n)
566 user_ns.pop(key,None)
567 user_ns.update(dict(_i=u'',_ii=u'',_iii=u''))
568 hm = ip.history_manager
569 # don't delete these, as %save and %macro depend on the
570 # length of these lists being preserved
571 hm.input_hist_parsed[:] = [''] * pc
572 hm.input_hist_raw[:] = [''] * pc
573 # hm has internal machinery for _i,_ii,_iii, clear it out
574 hm._i = hm._ii = hm._iii = hm._i00 = u''
575
576 elif target == 'array':
577 # Support cleaning up numpy arrays
578 try:
579 from numpy import ndarray
580 # This must be done with items and not iteritems because
581 # we're going to modify the dict in-place.
582 for x,val in list(user_ns.items()):
583 if isinstance(val,ndarray):
584 del user_ns[x]
585 except ImportError:
586 print("reset array only works if Numpy is available.")
587
588 elif target == 'dhist':
589 print("Flushing directory history")
590 del user_ns['_dh'][:]
591
592 else:
593 print("Don't know how to reset ", end=' ')
594 print(target + ", please run `%reset?` for details")
595
596 gc.collect()
597
598 @line_magic
599 def reset_selective(self, parameter_s=''):
600 """Resets the namespace by removing names defined by the user.
601
602 Input/Output history are left around in case you need them.
603
604 %reset_selective [-f] regex
605
606 No action is taken if regex is not included
607
608 Options
609 -f : force reset without asking for confirmation.
610
611 See Also
612 --------
613 reset : invoked as ``%reset``
614
615 Examples
616 --------
617
618 We first fully reset the namespace so your output looks identical to
619 this example for pedagogical reasons; in practice you do not need a
620 full reset::
621
622 In [1]: %reset -f
623
624 Now, with a clean namespace we can make a few variables and use
625 ``%reset_selective`` to only delete names that match our regexp::
626
627 In [2]: a=1; b=2; c=3; b1m=4; b2m=5; b3m=6; b4m=7; b2s=8
628
629 In [3]: who_ls
630 Out[3]: ['a', 'b', 'b1m', 'b2m', 'b2s', 'b3m', 'b4m', 'c']
631
632 In [4]: %reset_selective -f b[2-3]m
633
634 In [5]: who_ls
635 Out[5]: ['a', 'b', 'b1m', 'b2s', 'b4m', 'c']
636
637 In [6]: %reset_selective -f d
638
639 In [7]: who_ls
640 Out[7]: ['a', 'b', 'b1m', 'b2s', 'b4m', 'c']
641
642 In [8]: %reset_selective -f c
643
644 In [9]: who_ls
645 Out[9]: ['a', 'b', 'b1m', 'b2s', 'b4m']
646
647 In [10]: %reset_selective -f b
648
649 In [11]: who_ls
650 Out[11]: ['a']
651
652 Notes
653 -----
654 Calling this magic from clients that do not implement standard input,
655 such as the ipython notebook interface, will reset the namespace
656 without confirmation.
657 """
658
659 opts, regex = self.parse_options(parameter_s,'f')
660
661 if 'f' in opts:
662 ans = True
663 else:
664 try:
665 ans = self.shell.ask_yes_no(
666 "Once deleted, variables cannot be recovered. Proceed (y/[n])? ",
667 default='n')
668 except StdinNotImplementedError:
669 ans = True
670 if not ans:
671 print('Nothing done.')
672 return
673 user_ns = self.shell.user_ns
674 if not regex:
675 print('No regex pattern specified. Nothing done.')
676 return
677 else:
678 try:
679 m = re.compile(regex)
680 except TypeError:
681 raise TypeError('regex must be a string or compiled pattern')
682 for i in self.who_ls():
683 if m.search(i):
684 del(user_ns[i])
685
686 @line_magic
687 def xdel(self, parameter_s=''):
688 """Delete a variable, trying to clear it from anywhere that
689 IPython's machinery has references to it. By default, this uses
690 the identity of the named object in the user namespace to remove
691 references held under other names. The object is also removed
692 from the output history.
693
694 Options
695 -n : Delete the specified name from all namespaces, without
696 checking their identity.
697 """
698 opts, varname = self.parse_options(parameter_s,'n')
699 try:
700 self.shell.del_var(varname, ('n' in opts))
701 except (NameError, ValueError) as e:
702 print(type(e).__name__ +": "+ str(e))
703
[end of IPython/core/magics/namespace.py]
[start of IPython/lib/backgroundjobs.py]
1 # -*- coding: utf-8 -*-
2 """Manage background (threaded) jobs conveniently from an interactive shell.
3
4 This module provides a BackgroundJobManager class. This is the main class
5 meant for public usage, it implements an object which can create and manage
6 new background jobs.
7
8 It also provides the actual job classes managed by these BackgroundJobManager
9 objects, see their docstrings below.
10
11
12 This system was inspired by discussions with B. Granger and the
13 BackgroundCommand class described in the book Python Scripting for
14 Computational Science, by H. P. Langtangen:
15
16 http://folk.uio.no/hpl/scripting
17
18 (although ultimately no code from this text was used, as IPython's system is a
19 separate implementation).
20
21 An example notebook is provided in our documentation illustrating interactive
22 use of the system.
23 """
24
25 #*****************************************************************************
26 # Copyright (C) 2005-2006 Fernando Perez <[email protected]>
27 #
28 # Distributed under the terms of the BSD License. The full license is in
29 # the file COPYING, distributed as part of this software.
30 #*****************************************************************************
31
32 # Code begins
33 import sys
34 import threading
35
36 from IPython import get_ipython
37 from IPython.core.ultratb import AutoFormattedTB
38 from logging import error, debug
39
40
41 class BackgroundJobManager(object):
42 """Class to manage a pool of backgrounded threaded jobs.
43
44 Below, we assume that 'jobs' is a BackgroundJobManager instance.
45
46 Usage summary (see the method docstrings for details):
47
48 jobs.new(...) -> start a new job
49
50 jobs() or jobs.status() -> print status summary of all jobs
51
52 jobs[N] -> returns job number N.
53
54 foo = jobs[N].result -> assign to variable foo the result of job N
55
56 jobs[N].traceback() -> print the traceback of dead job N
57
58 jobs.remove(N) -> remove (finished) job N
59
60 jobs.flush() -> remove all finished jobs
61
62 As a convenience feature, BackgroundJobManager instances provide the
63 utility result and traceback methods which retrieve the corresponding
64 information from the jobs list:
65
66 jobs.result(N) <--> jobs[N].result
67 jobs.traceback(N) <--> jobs[N].traceback()
68
69 While this appears minor, it allows you to use tab completion
70 interactively on the job manager instance.
71 """
72
73 def __init__(self):
74 # Lists for job management, accessed via a property to ensure they're
75 # up to date.
76 self._running = []
77 self._completed = []
78 self._dead = []
79 # A dict of all jobs, so users can easily access any of them
80 self.all = {}
81 # For reporting
82 self._comp_report = []
83 self._dead_report = []
84 # Store status codes locally for fast lookups
85 self._s_created = BackgroundJobBase.stat_created_c
86 self._s_running = BackgroundJobBase.stat_running_c
87 self._s_completed = BackgroundJobBase.stat_completed_c
88 self._s_dead = BackgroundJobBase.stat_dead_c
89
90 @property
91 def running(self):
92 self._update_status()
93 return self._running
94
95 @property
96 def dead(self):
97 self._update_status()
98 return self._dead
99
100 @property
101 def completed(self):
102 self._update_status()
103 return self._completed
104
105 def new(self, func_or_exp, *args, **kwargs):
106 """Add a new background job and start it in a separate thread.
107
108 There are two types of jobs which can be created:
109
110 1. Jobs based on expressions which can be passed to an eval() call.
111 The expression must be given as a string. For example:
112
113 job_manager.new('myfunc(x,y,z=1)'[,glob[,loc]])
114
115 The given expression is passed to eval(), along with the optional
116 global/local dicts provided. If no dicts are given, they are
117 extracted automatically from the caller's frame.
118
119 A Python statement is NOT a valid eval() expression. Basically, you
120 can only use as an eval() argument something which can go on the right
121 of an '=' sign and be assigned to a variable.
122
123 For example,"print 'hello'" is not valid, but '2+3' is.
124
125 2. Jobs given a function object, optionally passing additional
126 positional arguments:
127
128 job_manager.new(myfunc, x, y)
129
130 The function is called with the given arguments.
131
132 If you need to pass keyword arguments to your function, you must
133 supply them as a dict named kw:
134
135 job_manager.new(myfunc, x, y, kw=dict(z=1))
136
137 The reason for this asymmetry is that the new() method needs to
138 maintain access to its own keywords, and this prevents name collisions
139 between arguments to new() and arguments to your own functions.
140
141 In both cases, the result is stored in the job.result field of the
142 background job object.
143
144 You can set `daemon` attribute of the thread by giving the keyword
145 argument `daemon`.
146
147 Notes and caveats:
148
149 1. All threads running share the same standard output. Thus, if your
150 background jobs generate output, it will come out on top of whatever
151 you are currently writing. For this reason, background jobs are best
152 used with silent functions which simply return their output.
153
154 2. Threads also all work within the same global namespace, and this
155 system does not lock interactive variables. So if you send job to the
156 background which operates on a mutable object for a long time, and
157 start modifying that same mutable object interactively (or in another
158 backgrounded job), all sorts of bizarre behaviour will occur.
159
160 3. If a background job is spending a lot of time inside a C extension
161 module which does not release the Python Global Interpreter Lock
162 (GIL), this will block the IPython prompt. This is simply because the
163 Python interpreter can only switch between threads at Python
164 bytecodes. While the execution is inside C code, the interpreter must
165 simply wait unless the extension module releases the GIL.
166
167 4. There is no way, due to limitations in the Python threads library,
168 to kill a thread once it has started."""
169
170 if callable(func_or_exp):
171 kw = kwargs.get('kw',{})
172 job = BackgroundJobFunc(func_or_exp,*args,**kw)
173 elif isinstance(func_or_exp, str):
174 if not args:
175 frame = sys._getframe(1)
176 glob, loc = frame.f_globals, frame.f_locals
177 elif len(args)==1:
178 glob = loc = args[0]
179 elif len(args)==2:
180 glob,loc = args
181 else:
182 raise ValueError(
183 'Expression jobs take at most 2 args (globals,locals)')
184 job = BackgroundJobExpr(func_or_exp, glob, loc)
185 else:
186 raise TypeError('invalid args for new job')
187
188 if kwargs.get('daemon', False):
189 job.daemon = True
190 job.num = len(self.all)+1 if self.all else 0
191 self.running.append(job)
192 self.all[job.num] = job
193 debug('Starting job # %s in a separate thread.' % job.num)
194 job.start()
195 return job
196
197 def __getitem__(self, job_key):
198 num = job_key if isinstance(job_key, int) else job_key.num
199 return self.all[num]
200
201 def __call__(self):
202 """An alias to self.status(),
203
204 This allows you to simply call a job manager instance much like the
205 Unix `jobs` shell command."""
206
207 return self.status()
208
209 def _update_status(self):
210 """Update the status of the job lists.
211
212 This method moves finished jobs to one of two lists:
213 - self.completed: jobs which completed successfully
214 - self.dead: jobs which finished but died.
215
216 It also copies those jobs to corresponding _report lists. These lists
217 are used to report jobs completed/dead since the last update, and are
218 then cleared by the reporting function after each call."""
219
220 # Status codes
221 srun, scomp, sdead = self._s_running, self._s_completed, self._s_dead
222 # State lists, use the actual lists b/c the public names are properties
223 # that call this very function on access
224 running, completed, dead = self._running, self._completed, self._dead
225
226 # Now, update all state lists
227 for num, job in enumerate(running):
228 stat = job.stat_code
229 if stat == srun:
230 continue
231 elif stat == scomp:
232 completed.append(job)
233 self._comp_report.append(job)
234 running[num] = False
235 elif stat == sdead:
236 dead.append(job)
237 self._dead_report.append(job)
238 running[num] = False
239 # Remove dead/completed jobs from running list
240 running[:] = filter(None, running)
241
242 def _group_report(self,group,name):
243 """Report summary for a given job group.
244
245 Return True if the group had any elements."""
246
247 if group:
248 print('%s jobs:' % name)
249 for job in group:
250 print('%s : %s' % (job.num,job))
251 print()
252 return True
253
254 def _group_flush(self,group,name):
255 """Flush a given job group
256
257 Return True if the group had any elements."""
258
259 njobs = len(group)
260 if njobs:
261 plural = {1:''}.setdefault(njobs,'s')
262 print('Flushing %s %s job%s.' % (njobs,name,plural))
263 group[:] = []
264 return True
265
266 def _status_new(self):
267 """Print the status of newly finished jobs.
268
269 Return True if any new jobs are reported.
270
271 This call resets its own state every time, so it only reports jobs
272 which have finished since the last time it was called."""
273
274 self._update_status()
275 new_comp = self._group_report(self._comp_report, 'Completed')
276 new_dead = self._group_report(self._dead_report,
277 'Dead, call jobs.traceback() for details')
278 self._comp_report[:] = []
279 self._dead_report[:] = []
280 return new_comp or new_dead
281
282 def status(self,verbose=0):
283 """Print a status of all jobs currently being managed."""
284
285 self._update_status()
286 self._group_report(self.running,'Running')
287 self._group_report(self.completed,'Completed')
288 self._group_report(self.dead,'Dead')
289 # Also flush the report queues
290 self._comp_report[:] = []
291 self._dead_report[:] = []
292
293 def remove(self,num):
294 """Remove a finished (completed or dead) job."""
295
296 try:
297 job = self.all[num]
298 except KeyError:
299 error('Job #%s not found' % num)
300 else:
301 stat_code = job.stat_code
302 if stat_code == self._s_running:
303 error('Job #%s is still running, it can not be removed.' % num)
304 return
305 elif stat_code == self._s_completed:
306 self.completed.remove(job)
307 elif stat_code == self._s_dead:
308 self.dead.remove(job)
309
310 def flush(self):
311 """Flush all finished jobs (completed and dead) from lists.
312
313 Running jobs are never flushed.
314
315 It first calls _status_new(), to update info. If any jobs have
316 completed since the last _status_new() call, the flush operation
317 aborts."""
318
319 # Remove the finished jobs from the master dict
320 alljobs = self.all
321 for job in self.completed+self.dead:
322 del(alljobs[job.num])
323
324 # Now flush these lists completely
325 fl_comp = self._group_flush(self.completed, 'Completed')
326 fl_dead = self._group_flush(self.dead, 'Dead')
327 if not (fl_comp or fl_dead):
328 print('No jobs to flush.')
329
330 def result(self,num):
331 """result(N) -> return the result of job N."""
332 try:
333 return self.all[num].result
334 except KeyError:
335 error('Job #%s not found' % num)
336
337 def _traceback(self, job):
338 num = job if isinstance(job, int) else job.num
339 try:
340 self.all[num].traceback()
341 except KeyError:
342 error('Job #%s not found' % num)
343
344 def traceback(self, job=None):
345 if job is None:
346 self._update_status()
347 for deadjob in self.dead:
348 print("Traceback for: %r" % deadjob)
349 self._traceback(deadjob)
350 print()
351 else:
352 self._traceback(job)
353
354
355 class BackgroundJobBase(threading.Thread):
356 """Base class to build BackgroundJob classes.
357
358 The derived classes must implement:
359
360 - Their own __init__, since the one here raises NotImplementedError. The
361 derived constructor must call self._init() at the end, to provide common
362 initialization.
363
364 - A strform attribute used in calls to __str__.
365
366 - A call() method, which will make the actual execution call and must
367 return a value to be held in the 'result' field of the job object.
368 """
369
370 # Class constants for status, in string and as numerical codes (when
371 # updating jobs lists, we don't want to do string comparisons). This will
372 # be done at every user prompt, so it has to be as fast as possible
373 stat_created = 'Created'; stat_created_c = 0
374 stat_running = 'Running'; stat_running_c = 1
375 stat_completed = 'Completed'; stat_completed_c = 2
376 stat_dead = 'Dead (Exception), call jobs.traceback() for details'
377 stat_dead_c = -1
378
379 def __init__(self):
380 """Must be implemented in subclasses.
381
382 Subclasses must call :meth:`_init` for standard initialisation.
383 """
384 raise NotImplementedError("This class can not be instantiated directly.")
385
386 def _init(self):
387 """Common initialization for all BackgroundJob objects"""
388
389 for attr in ['call','strform']:
390 assert hasattr(self,attr), "Missing attribute <%s>" % attr
391
392 # The num tag can be set by an external job manager
393 self.num = None
394
395 self.status = BackgroundJobBase.stat_created
396 self.stat_code = BackgroundJobBase.stat_created_c
397 self.finished = False
398 self.result = '<BackgroundJob has not completed>'
399
400 # reuse the ipython traceback handler if we can get to it, otherwise
401 # make a new one
402 try:
403 make_tb = get_ipython().InteractiveTB.text
404 except:
405 make_tb = AutoFormattedTB(mode = 'Context',
406 color_scheme='NoColor',
407 tb_offset = 1).text
408 # Note that the actual API for text() requires the three args to be
409 # passed in, so we wrap it in a simple lambda.
410 self._make_tb = lambda : make_tb(None, None, None)
411
412 # Hold a formatted traceback if one is generated.
413 self._tb = None
414
415 threading.Thread.__init__(self)
416
417 def __str__(self):
418 return self.strform
419
420 def __repr__(self):
421 return '<BackgroundJob #%d: %s>' % (self.num, self.strform)
422
423 def traceback(self):
424 print(self._tb)
425
426 def run(self):
427 try:
428 self.status = BackgroundJobBase.stat_running
429 self.stat_code = BackgroundJobBase.stat_running_c
430 self.result = self.call()
431 except:
432 self.status = BackgroundJobBase.stat_dead
433 self.stat_code = BackgroundJobBase.stat_dead_c
434 self.finished = None
435 self.result = ('<BackgroundJob died, call jobs.traceback() for details>')
436 self._tb = self._make_tb()
437 else:
438 self.status = BackgroundJobBase.stat_completed
439 self.stat_code = BackgroundJobBase.stat_completed_c
440 self.finished = True
441
442
443 class BackgroundJobExpr(BackgroundJobBase):
444 """Evaluate an expression as a background job (uses a separate thread)."""
445
446 def __init__(self, expression, glob=None, loc=None):
447 """Create a new job from a string which can be fed to eval().
448
449 global/locals dicts can be provided, which will be passed to the eval
450 call."""
451
452 # fail immediately if the given expression can't be compiled
453 self.code = compile(expression,'<BackgroundJob compilation>','eval')
454
455 glob = {} if glob is None else glob
456 loc = {} if loc is None else loc
457 self.expression = self.strform = expression
458 self.glob = glob
459 self.loc = loc
460 self._init()
461
462 def call(self):
463 return eval(self.code,self.glob,self.loc)
464
465
466 class BackgroundJobFunc(BackgroundJobBase):
467 """Run a function call as a background job (uses a separate thread)."""
468
469 def __init__(self, func, *args, **kwargs):
470 """Create a new job from a callable object.
471
472 Any positional arguments and keyword args given to this constructor
473 after the initial callable are passed directly to it."""
474
475 if not callable(func):
476 raise TypeError(
477 'first argument to BackgroundJobFunc must be callable')
478
479 self.func = func
480 self.args = args
481 self.kwargs = kwargs
482 # The string form will only include the function passed, because
483 # generating string representations of the arguments is a potentially
484 # _very_ expensive operation (e.g. with large arrays).
485 self.strform = str(func)
486 self._init()
487
488 def call(self):
489 return self.func(*self.args, **self.kwargs)
490
[end of IPython/lib/backgroundjobs.py]
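As a quick orientation to the API above, here is a minimal, self-contained sketch of a custom job class that follows the contract spelled out in the BackgroundJobBase docstring (own `__init__` ending in `self._init()`, a `strform` attribute, and a `call()` whose return value lands in `result`). The `SleepJob` name and its payload are purely illustrative:

```python
import time

from IPython.lib.backgroundjobs import BackgroundJobBase


class SleepJob(BackgroundJobBase):
    """Toy subclass: sleeps for a bit, then returns a message."""

    def __init__(self, seconds):
        self.seconds = seconds
        self.strform = f"sleep({seconds})"   # required by _init()
        self._init()                         # common initialisation from the base class

    def call(self):
        time.sleep(self.seconds)
        return f"slept {self.seconds}s"


job = SleepJob(0.1)
job.start()            # BackgroundJobBase subclasses threading.Thread
job.join()
print(job.status)      # 'Completed' (or the 'Dead ...' string if call() raised)
print(job.result)      # 'slept 0.1s'
```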
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
ipython/ipython
|
de0abd8b2d4687706d5323adde85d8264f773a69
|
BackgroundJobManager: wrong job.num values are assigned after a flush() call
Hello, there is a problem with how `job.num` is calculated.
### What I do:
I try to run the following code
```python
from IPython.lib import backgroundjobs as bg
def forever():
while True:
pass
return 'Done'
def once():
return 'Done'
jm = bg.BackgroundJobManager()
jm.new(forever)
jm.new(once)
jm.new(once)
jm.new(forever)
jm.flush()
jm.new(forever)
jm.new(once)
jm.new(once)
jm.new(once)
jm.flush()
```
### What is wrong
I got an exception when calling `jm.flush()` the second time
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-35-a03db7172b42> in <module>
19 jm.new(once)
20 jm.new(once)
---> 21 jm.flush()
~/.virtualenvs/ipython3/lib/python3.7/site-packages/IPython/lib/backgroundjobs.py in flush(self)
320 alljobs = self.all
321 for job in self.completed+self.dead:
--> 322 del(alljobs[job.num])
323
324 # Now flush these lists completely
KeyError: 4
```
### My guess
It is connected to the way job.num is calculated here: https://github.com/ipython/ipython/blob/master/IPython/lib/backgroundjobs.py#L190
```python
# when we call flush first time in the example above len(self.all) becomes 2
job.num = len(self.all)+1 if self.all else 0
self.running.append(job)
# and now we count from 2 despite that we have running job with id=4 already
self.all[job.num] = job
```
Also, there is a second bug in the calculation:
we can never create a job with `id=1`, because the expression yields 0 when `self.all` is empty and 1 + 1 = 2 when there is exactly one job in `self.all`
### How to fix
My suggestion is to create a new instance field `self._current_job_id = 0` and increment it on every new job creation.
That way we get a counter tied to the BackgroundJobManager instance itself, rather than to the current job count.
If this issue is confirmed, I will probably create a fix myself.
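A minimal, runnable sketch of that idea follows. `JobNumbering` is only a toy stand-in for the manager's bookkeeping, not the real class; the actual patch below makes the equivalent two-line change inside `new()`:

```python
class JobNumbering:
    """Toy stand-in showing the proposed per-manager counter."""

    def __init__(self):
        self.all = {}
        self._current_job_id = 0    # proposed field: never reset, never reused

    def new(self, payload):
        num = self._current_job_id
        self._current_job_id += 1
        self.all[num] = payload
        return num

    def flush(self, finished):
        for num in finished:
            del self.all[num]       # always a valid key: numbers are unique


mgr = JobNumbering()
first = [mgr.new(f"job-{i}") for i in range(4)]     # nums 0, 1, 2, 3
mgr.flush(first)                                    # table is now empty
second = [mgr.new(f"job-{i}") for i in range(4)]    # nums 4, 5, 6, 7 -- no reuse
mgr.flush(second)                                   # no KeyError
print(second)                                       # [4, 5, 6, 7]
```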
|
@meeseeksdev tag bug, lib/backgroundjobs
Aww sintell, I was not able to apply the following label(s): `lib/backgroundjobs`, `bug`. Either because they are not existing labels on this repository or because you do not have the permission to apply these. I tried my best to guess by looking at the casing, but was unable to find matching labels.
|
2019-05-27T13:15:27Z
|
<patch>
diff --git a/IPython/lib/backgroundjobs.py b/IPython/lib/backgroundjobs.py
--- a/IPython/lib/backgroundjobs.py
+++ b/IPython/lib/backgroundjobs.py
@@ -86,6 +86,7 @@ def __init__(self):
self._s_running = BackgroundJobBase.stat_running_c
self._s_completed = BackgroundJobBase.stat_completed_c
self._s_dead = BackgroundJobBase.stat_dead_c
+ self._current_job_id = 0
@property
def running(self):
@@ -187,7 +188,8 @@ def new(self, func_or_exp, *args, **kwargs):
if kwargs.get('daemon', False):
job.daemon = True
- job.num = len(self.all)+1 if self.all else 0
+ job.num = self._current_job_id
+ self._current_job_id += 1
self.running.append(job)
self.all[job.num] = job
debug('Starting job # %s in a separate thread.' % job.num)
</patch>
|
[]
|
[]
| |||
apache__airflow-18119
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception within LocalTaskJob._run_mini_scheduler_on_child_tasks breaks Sentry Handler
### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon @ file:///root/.cache/pypoetry/artifacts/7f/f7/23/fc7fd3543aa486275ef0385c29063ff0dc391b0fc95dc5aa6cab2cf4e5/apache_airflow_providers_amazon-2.2.0-py3-none-any.whl
apache-airflow-providers-celery @ file:///root/.cache/pypoetry/artifacts/14/80/39/0d9d57205da1d24189ac9c18eb3477664ed2c2618c1467c9809b9a2fbf/apache_airflow_providers_celery-2.0.0-py3-none-any.whl
apache-airflow-providers-ftp @ file:///root/.cache/pypoetry/artifacts/a5/13/da/bf14abc40193a1ee1b82bbd800e3ac230427d7684b9d40998ac3684bef/apache_airflow_providers_ftp-2.0.1-py3-none-any.whl
apache-airflow-providers-http @ file:///root/.cache/pypoetry/artifacts/fc/d7/d2/73c89ef847bbae1704fa403d7e92dba1feead757aae141613980db40ff/apache_airflow_providers_http-2.0.0-py3-none-any.whl
apache-airflow-providers-imap @ file:///root/.cache/pypoetry/artifacts/af/5d/de/21c10bfc7ac076a415dcc3fc909317547e77e38c005487552cf40ddd97/apache_airflow_providers_imap-2.0.1-py3-none-any.whl
apache-airflow-providers-postgres @ file:///root/.cache/pypoetry/artifacts/50/27/e0/9b0d8f4c0abf59967bb87a04a93d73896d9a4558994185dd8bc43bb67f/apache_airflow_providers_postgres-2.2.0-py3-none-any.whl
apache-airflow-providers-redis @ file:///root/.cache/pypoetry/artifacts/7d/95/03/5d2a65ace88ae9a9ce9134b927b1e9639c8680c13a31e58425deae55d1/apache_airflow_providers_redis-2.0.1-py3-none-any.whl
apache-airflow-providers-sqlite @ file:///root/.cache/pypoetry/artifacts/ec/e6/a3/e0d81fef662ccf79609e7d2c4e4440839a464771fd2a002d252c9a401d/apache_airflow_providers_sqlite-2.0.1-py3-none-any.whl
```
### Deployment
Other Docker-based deployment
### Deployment details
We are using the Sentry integration
### What happened
An exception within LocalTaskJob's mini scheduler was handled incorrectly by the Sentry integration's 'enrich_errors' method. This is because it assumes it is applied to a method of a TaskInstance
```
TypeError: cannot pickle 'dict_keys' object
File "airflow/sentry.py", line 166, in wrapper
return func(task_instance, *args, **kwargs)
File "airflow/jobs/local_task_job.py", line 241, in _run_mini_scheduler_on_child_tasks
partial_dag = task.dag.partial_subset(
File "airflow/models/dag.py", line 1487, in partial_subset
dag.task_dict = {
File "airflow/models/dag.py", line 1488, in <dictcomp>
t.task_id: copy.deepcopy(t, {id(t.dag): dag}) # type: ignore
File "copy.py", line 153, in deepcopy
y = copier(memo)
File "airflow/models/baseoperator.py", line 970, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "copy.py", line 161, in deepcopy
rv = reductor(4)
AttributeError: 'LocalTaskJob' object has no attribute 'task'
File "airflow", line 8, in <module>
sys.exit(main())
File "airflow/__main__.py", line 40, in main
args.func(args)
File "airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "airflow/utils/cli.py", line 91, in wrapper
return f(*args, **kwargs)
File "airflow/cli/commands/task_command.py", line 238, in task_run
_run_task_by_selected_method(args, dag, ti)
File "airflow/cli/commands/task_command.py", line 64, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "airflow/cli/commands/task_command.py", line 121, in _run_task_by_local_task_job
run_job.run()
File "airflow/jobs/base_job.py", line 245, in run
self._execute()
File "airflow/jobs/local_task_job.py", line 128, in _execute
self.handle_task_exit(return_code)
File "airflow/jobs/local_task_job.py", line 166, in handle_task_exit
self._run_mini_scheduler_on_child_tasks()
File "airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "airflow/sentry.py", line 168, in wrapper
self.add_tagging(task_instance)
File "airflow/sentry.py", line 119, in add_tagging
task = task_instance.task
```
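
To make the failure mode concrete, here is a stripped-down, self-contained illustration. No Airflow or Sentry imports are used; the class and attribute names mirror the traceback above, but this is only a sketch of the problem, not Airflow's actual code:

```python
def enrich_errors(func):
    # Simplified stand-in for the Sentry wrapper: it assumes the first positional
    # argument is a TaskInstance and reads `.task` from it while handling an error.
    def wrapper(task_instance, *args, **kwargs):
        try:
            return func(task_instance, *args, **kwargs)
        except Exception:
            task = task_instance.task   # AttributeError when `task_instance` is a LocalTaskJob
            print(f"tagging error for {task}")
            raise
    return wrapper


class TaskInstance:
    task = "my_task"


class LocalTaskJob:
    # Carries a .task_instance, but has no .task attribute of its own.
    def __init__(self, task_instance):
        self.task_instance = task_instance

    @enrich_errors
    def _run_mini_scheduler_on_child_tasks(self):
        raise TypeError("cannot pickle 'dict_keys' object")   # the original error


LocalTaskJob(TaskInstance())._run_mini_scheduler_on_child_tasks()
# -> AttributeError: 'LocalTaskJob' object has no attribute 'task',
#    raised while handling the TypeError, just like in the traceback above.
```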
### What you expected to happen
The error to be handled correctly and passed on to Sentry without raising another exception within the error handling system
### How to reproduce
In this case we were trying to backfill a task for a DAG that, at that point, had a compilation error. This is quite an edge case, yes :-)
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of README.md]
1 <!--
2 Licensed to the Apache Software Foundation (ASF) under one
3 or more contributor license agreements. See the NOTICE file
4 distributed with this work for additional information
5 regarding copyright ownership. The ASF licenses this file
6 to you under the Apache License, Version 2.0 (the
7 "License"); you may not use this file except in compliance
8 with the License. You may obtain a copy of the License at
9
10 http://www.apache.org/licenses/LICENSE-2.0
11
12 Unless required by applicable law or agreed to in writing,
13 software distributed under the License is distributed on an
14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 KIND, either express or implied. See the License for the
16 specific language governing permissions and limitations
17 under the License.
18 -->
19
20 # Apache Airflow
21
22 [](https://badge.fury.io/py/apache-airflow)
23 [](https://github.com/apache/airflow/actions)
24 [](https://codecov.io/github/apache/airflow?branch=main)
25 [](https://www.apache.org/licenses/LICENSE-2.0.txt)
26 [](https://pypi.org/project/apache-airflow/)
27 [](https://hub.docker.com/r/apache/airflow)
28 [](https://hub.docker.com/r/apache/airflow)
29 [](https://pypi.org/project/apache-airflow/)
30 [](https://artifacthub.io/packages/search?repo=apache-airflow)
31 [](https://github.com/psf/black)
32 [](https://twitter.com/ApacheAirflow)
33 [](https://s.apache.org/airflow-slack)
34
35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows.
36
37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
38
39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
40
41 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
43 **Table of contents**
44
45 - [Project Focus](#project-focus)
46 - [Principles](#principles)
47 - [Requirements](#requirements)
48 - [Getting started](#getting-started)
49 - [Installing from PyPI](#installing-from-pypi)
50 - [Official source code](#official-source-code)
51 - [Convenience packages](#convenience-packages)
52 - [User Interface](#user-interface)
53 - [Semantic versioning](#semantic-versioning)
54 - [Version Life Cycle](#version-life-cycle)
55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions)
56 - [Contributing](#contributing)
57 - [Who uses Apache Airflow?](#who-uses-apache-airflow)
58 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow)
59 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
60 - [Airflow merchandise](#airflow-merchandise)
61 - [Links](#links)
62 - [Sponsors](#sponsors)
63
64 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
65
66 ## Project Focus
67
68 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/).
69
70 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work.
71
72 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
73
74 ## Principles
75
76 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
77 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
78 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
79 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
80
81 ## Requirements
82
83 Apache Airflow is tested with:
84
85 | | Main version (dev) | Stable version (2.1.3) |
86 | -------------------- | ------------------------- | ------------------------ |
87 | Python | 3.6, 3.7, 3.8, 3.9 | 3.6, 3.7, 3.8, 3.9 |
88 | Kubernetes | 1.18, 1.19, 1.20 | 1.18, 1.19, 1.20 |
89 | PostgreSQL | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 |
90 | MySQL | 5.7, 8 | 5.7, 8 |
91 | SQLite | 3.15.0+ | 3.15.0+ |
92 | MSSQL(Experimental) | 2017, 2019 | |
93
94 **Note**: MySQL 5.x versions are unable to or have limitations with
95 running multiple schedulers -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html).
96 MariaDB is not tested/recommended.
97
98 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend
99 using the latest stable version of SQLite for local development.
100
101 ## Getting started
102
103 Visit the official Airflow website documentation (latest **stable** release) for help with
104 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html),
105 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking
106 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html).
107
108 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/).
109
110 For more information on Airflow Improvement Proposals (AIPs), visit
111 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals).
112
113 Documentation for dependent projects like provider packages, Docker image, Helm Chart, you'll find it in [the documentation index](https://airflow.apache.org/docs/).
114
115 ## Installing from PyPI
116
117 We publish Apache Airflow as `apache-airflow` package in PyPI. Installing it however might be sometimes tricky
118 because Airflow is a bit of both a library and application. Libraries usually keep their dependencies open, and
119 applications usually pin them, but we should do neither and both simultaneously. We decided to keep
120 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries
121 if needed. This means that `pip install apache-airflow` will not work from time to time or will
122 produce unusable Airflow installation.
123
124 To have repeatable installation, however, we keep a set of "known-to-be-working" constraint
125 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working"
126 constraints files separately per major/minor Python version.
127 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
128 correct Airflow tag/version/branch and Python versions in the URL.
129
130
131 1. Installing just Airflow:
132
133 > Note: Only `pip` installation is currently officially supported.
134
135 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or
136 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as
137 `pip` - especially when it comes to constraint vs. requirements management.
138 Installing via `Poetry` or `pip-tools` is not currently supported.
139
140 If you wish to install Airflow using those tools, you should use the constraint files and convert
141 them to the appropriate format and workflow that your tool requires.
142
143
144 ```bash
145 pip install 'apache-airflow==2.1.3' \
146 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.1.3/constraints-3.7.txt"
147 ```
148
149 2. Installing with extras (i.e., postgres, google)
150
151 ```bash
152 pip install 'apache-airflow[postgres,google]==2.1.3' \
153 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.1.3/constraints-3.7.txt"
154 ```
155
156 For information on installing provider packages, check
157 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html).
158
159 ## Official source code
160
161 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project,
162 and our official source code releases:
163
164 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html)
165 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow)
166 - Are cryptographically signed by the release manager
167 - Are officially voted on by the PMC members during the
168 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval)
169
170 Following the ASF rules, the source packages released must be sufficient for a user to build and test the
171 release provided they have access to the appropriate platform and tools.
172
173 ## Convenience packages
174
175 There are other ways of installing and using Airflow. Those are "convenience" methods - they are
176 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users
177 who do not want to build the software themselves.
178
179 Those are - in the order of most common ways people install Airflow:
180
181 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool
182 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via
183 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can
184 read more about using, customising, and extending the images in the
185 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and
186 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document.
187 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that
188 were used to generate official source packages via git
189
190 All those artifacts are not official releases, but they are prepared using officially released sources.
191 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such
192 following the ASF Policy.
193
194 ## User Interface
195
196 - **DAGs**: Overview of all DAGs in your environment.
197
198 
199
200 - **Tree**: Tree representation of a DAG that spans across time.
201
202 
203
204 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run.
205
206 
207
208 - **Task Duration**: Total time spent on different tasks over time.
209
210 
211
212 - **Gantt**: Duration and overlap of a DAG.
213
214 
215
216 - **Code**: Quick way to view source code of a DAG.
217
218 
219
220 ## Semantic versioning
221
222 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released.
223
224 There are few specific rules that we agreed to that define details of versioning of the different
225 packages:
226
227 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers).
228 Changing limits for versions of Airflow dependencies is not a breaking change on its own.
229 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only.
230 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version.
231 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed
232 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages,
233 they are present in providers as `install_requires` limitations. We aim to keep backwards
234 compatibility of providers with all previously released Airflow 2 versions but
235 there will sometimes be breaking changes that might make some, or all
236 providers, have minimum Airflow version specified. Change of that minimum supported Airflow version
237 is a breaking change for provider because installing the new provider might automatically
238 upgrade Airflow (which might be an undesired side effect of upgrading provider).
239 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR
240 versions for the chart are independent from the Airflow version. We aim to keep backwards
241 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might
242 only work starting from specific Airflow releases. We might however limit the Helm
243 Chart to depend on minimal Airflow version.
244 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow.
245 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of
246 all clients. The clients then can release their own PATCH releases with bugfixes,
247 independently of Airflow PATCH releases.
248
249 ## Version Life Cycle
250
251 Apache Airflow version life cycle:
252
253 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated |
254 |---------|---------------------|-----------|---------------|-----------------|----------------|
255 | 2 | 2.1.3 | Supported | Dec 17, 2020 | Dec 31, 2021 | TBD |
256 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 |
257 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 |
258 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 |
259 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 |
260
261 Limited support versions will be supported with security and critical bug fix only.
262 EOL versions will not get any fixes nor support.
263 We always recommend that all users run the latest available minor release for whatever major version is in use.
264 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date.
265
266 ## Support for Python and Kubernetes versions
267
268 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support.
269 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the
270 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and
271 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/).
272
273 1. We drop support for Python and Kubernetes versions when they reach EOL. We drop support for those
274 EOL versions in main right after EOL date, and it is effectively removed when we release the
275 first new MINOR (Or MAJOR if there is no new MINOR version) of Airflow
276 For example, for Python 3.6 it means that we drop support in main right after 23.12.2021, and the first
277 MAJOR or MINOR version of Airflow released after will not have it.
278
279 2. The "oldest" supported version of Python/Kubernetes is the default one. "Default" is only meaningful
280 in terms of "smoke tests" in CI PRs, which are run using this default version and the default reference
281 image available. Currently `apache/airflow:latest` and `apache/airflow:2.1.3` images
282 are both Python 3.6 images. However, the first MINOR/MAJOR release of Airflow release after 23.12.2021 will
283 become Python 3.7 images.
284
285 3. We support a new version of Python/Kubernetes in main after they are officially released, as soon as we
286 make them work in our CI pipeline (which might not be immediate due to dependencies catching up with
287 new versions of Python mostly) we release new images/support in Airflow based on the working CI setup.
288
289 ### Additional notes on Python version requirements
290
291 * Previous versions [require](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3
292 when using Python 3.
293
294 ## Contributing
295
296 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst).
297
298 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst).
299
300 ## Who uses Apache Airflow?
301
302 More than 400 organizations are using Apache Airflow
303 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md).
304
305 ## Who Maintains Apache Airflow?
306
307 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
308 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
309 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests.
310 If you would like to become a maintainer, please review the Apache Airflow
311 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer).
312
313 ## Can I use the Apache Airflow logo in my presentation?
314
315 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
316
317 ## Airflow merchandise
318
319 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out
320 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5).
321
322 ## Links
323
324 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/)
325 - [Chat](https://s.apache.org/airflow-slack)
326
327 ## Sponsors
328
329 The CI infrastructure for Apache Airflow has been sponsored by:
330
331 <!-- Ordered by most recently "funded" -->
332
333 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a>
334 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a>
335
[end of README.md]
[start of airflow/cli/commands/task_command.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 """Task sub-commands"""
19 import importlib
20 import json
21 import logging
22 import os
23 import textwrap
24 from contextlib import contextmanager, redirect_stderr, redirect_stdout, suppress
25 from typing import List
26
27 from pendulum.parsing.exceptions import ParserError
28 from sqlalchemy.orm.exc import NoResultFound
29
30 from airflow import settings
31 from airflow.cli.simple_table import AirflowConsole
32 from airflow.configuration import conf
33 from airflow.exceptions import AirflowException, DagRunNotFound
34 from airflow.executors.executor_loader import ExecutorLoader
35 from airflow.jobs.local_task_job import LocalTaskJob
36 from airflow.models import DagPickle, TaskInstance
37 from airflow.models.dag import DAG
38 from airflow.models.dagrun import DagRun
39 from airflow.ti_deps.dep_context import DepContext
40 from airflow.ti_deps.dependencies_deps import SCHEDULER_QUEUED_DEPS
41 from airflow.utils import cli as cli_utils
42 from airflow.utils.cli import (
43 get_dag,
44 get_dag_by_file_location,
45 get_dag_by_pickle,
46 get_dags,
47 suppress_logs_and_warning,
48 )
49 from airflow.utils.dates import timezone
50 from airflow.utils.log.logging_mixin import StreamLogWriter
51 from airflow.utils.net import get_hostname
52 from airflow.utils.session import create_session, provide_session
53
54
55 def _get_dag_run(dag, exec_date_or_run_id, create_if_necssary, session):
56 dag_run = dag.get_dagrun(run_id=exec_date_or_run_id, session=session)
57 if dag_run:
58 return dag_run
59
60 execution_date = None
61 with suppress(ParserError, TypeError):
62 execution_date = timezone.parse(exec_date_or_run_id)
63
64 if create_if_necssary and not execution_date:
65 return DagRun(dag_id=dag.dag_id, run_id=exec_date_or_run_id)
66 try:
67 return (
68 session.query(DagRun)
69 .filter(
70 DagRun.dag_id == dag.dag_id,
71 DagRun.execution_date == execution_date,
72 )
73 .one()
74 )
75 except NoResultFound:
76 if create_if_necssary:
77 return DagRun(dag.dag_id, execution_date=execution_date)
78 raise DagRunNotFound(
79 f"DagRun for {dag.dag_id} with run_id or execution_date of {exec_date_or_run_id!r} not found"
80 ) from None
81
82
83 @provide_session
84 def _get_ti(task, exec_date_or_run_id, create_if_necssary=False, session=None):
85 """Get the task instance through DagRun.run_id, if that fails, get the TI the old way"""
86 dag_run = _get_dag_run(task.dag, exec_date_or_run_id, create_if_necssary, session)
87
88 ti = dag_run.get_task_instance(task.task_id)
89 if not ti and create_if_necssary:
90 ti = TaskInstance(task, run_id=None)
91 ti.dag_run = dag_run
92 ti.refresh_from_task(task)
93 return ti
94
95
96 def _run_task_by_selected_method(args, dag: DAG, ti: TaskInstance) -> None:
97 """
98 Runs the task in one of 3 modes
99
100 - using LocalTaskJob
101 - as raw task
102 - by executor
103 """
104 if args.local:
105 _run_task_by_local_task_job(args, ti)
106 elif args.raw:
107 _run_raw_task(args, ti)
108 else:
109 _run_task_by_executor(args, dag, ti)
110
111
112 def _run_task_by_executor(args, dag, ti):
113 """
114 Sends the task to the executor for execution. This can result in the task being started by another host
115 if the executor implementation does
116 """
117 pickle_id = None
118 if args.ship_dag:
119 try:
120 # Running remotely, so pickling the DAG
121 with create_session() as session:
122 pickle = DagPickle(dag)
123 session.add(pickle)
124 pickle_id = pickle.id
125 # TODO: This should be written to a log
126 print(f'Pickled dag {dag} as pickle_id: {pickle_id}')
127 except Exception as e:
128 print('Could not pickle the DAG')
129 print(e)
130 raise e
131 executor = ExecutorLoader.get_default_executor()
132 executor.job_id = "manual"
133 executor.start()
134 print("Sending to executor.")
135 executor.queue_task_instance(
136 ti,
137 mark_success=args.mark_success,
138 pickle_id=pickle_id,
139 ignore_all_deps=args.ignore_all_dependencies,
140 ignore_depends_on_past=args.ignore_depends_on_past,
141 ignore_task_deps=args.ignore_dependencies,
142 ignore_ti_state=args.force,
143 pool=args.pool,
144 )
145 executor.heartbeat()
146 executor.end()
147
148
149 def _run_task_by_local_task_job(args, ti):
150 """Run LocalTaskJob, which monitors the raw task execution process"""
151 run_job = LocalTaskJob(
152 task_instance=ti,
153 mark_success=args.mark_success,
154 pickle_id=args.pickle,
155 ignore_all_deps=args.ignore_all_dependencies,
156 ignore_depends_on_past=args.ignore_depends_on_past,
157 ignore_task_deps=args.ignore_dependencies,
158 ignore_ti_state=args.force,
159 pool=args.pool,
160 )
161 try:
162 run_job.run()
163
164 finally:
165 if args.shut_down_logging:
166 logging.shutdown()
167
168
169 RAW_TASK_UNSUPPORTED_OPTION = [
170 "ignore_all_dependencies",
171 "ignore_depends_on_past",
172 "ignore_dependencies",
173 "force",
174 ]
175
176
177 def _run_raw_task(args, ti: TaskInstance) -> None:
178 """Runs the main task handling code"""
179 ti._run_raw_task(
180 mark_success=args.mark_success,
181 job_id=args.job_id,
182 pool=args.pool,
183 error_file=args.error_file,
184 )
185
186
187 @contextmanager
188 def _capture_task_logs(ti):
189 """Manage logging context for a task run
190
191 - Replace the root logger configuration with the airflow.task configuration
192 so we can capture logs from any custom loggers used in the task.
193
194 - Redirect stdout and stderr to the task instance log, as INFO and WARNING
195 level messages, respectively.
196
197 """
198 modify = not settings.DONOT_MODIFY_HANDLERS
199
200 if modify:
201 root_logger, task_logger = logging.getLogger(), logging.getLogger('airflow.task')
202
203 orig_level = root_logger.level
204 root_logger.setLevel(task_logger.level)
205 orig_handlers = root_logger.handlers.copy()
206 root_logger.handlers[:] = task_logger.handlers
207
208 try:
209 info_writer = StreamLogWriter(ti.log, logging.INFO)
210 warning_writer = StreamLogWriter(ti.log, logging.WARNING)
211
212 with redirect_stdout(info_writer), redirect_stderr(warning_writer):
213 yield
214
215 finally:
216 if modify:
217 # Restore the root logger to its original state.
218 root_logger.setLevel(orig_level)
219 root_logger.handlers[:] = orig_handlers
220
221
222 @cli_utils.action_logging
223 def task_run(args, dag=None):
224 """Runs a single task instance"""
225 # Load custom airflow config
226
227 if args.local and args.raw:
228 raise AirflowException(
229 "Option --raw and --local are mutually exclusive. "
230 "Please remove one option to execute the command."
231 )
232
233 if args.raw:
234 unsupported_options = [o for o in RAW_TASK_UNSUPPORTED_OPTION if getattr(args, o)]
235
236 if unsupported_options:
237 raise AirflowException(
238 "Option --raw does not work with some of the other options on this command. You "
239 "can't use --raw option and the following options: {}. You provided the option {}. "
240 "Delete it to execute the command".format(
241 ", ".join(f"--{o}" for o in RAW_TASK_UNSUPPORTED_OPTION),
242 ", ".join(f"--{o}" for o in unsupported_options),
243 )
244 )
245 if dag and args.pickle:
246 raise AirflowException("You cannot use the --pickle option when using DAG.cli() method.")
247 if args.cfg_path:
248 with open(args.cfg_path) as conf_file:
249 conf_dict = json.load(conf_file)
250
251 if os.path.exists(args.cfg_path):
252 os.remove(args.cfg_path)
253
254 conf.read_dict(conf_dict, source=args.cfg_path)
255 settings.configure_vars()
256
257 settings.MASK_SECRETS_IN_LOGS = True
258
259 # IMPORTANT, have to use the NullPool, otherwise, each "run" command may leave
260 # behind multiple open sleeping connections while heartbeating, which could
261 # easily exceed the database connection limit when
262 # processing hundreds of simultaneous tasks.
263 settings.configure_orm(disable_connection_pool=True)
264
265 if args.pickle:
266 print(f'Loading pickle id: {args.pickle}')
267 dag = get_dag_by_pickle(args.pickle)
268 elif not dag:
269 dag = get_dag(args.subdir, args.dag_id)
270 else:
271 # Use DAG from parameter
272 pass
273 task = dag.get_task(task_id=args.task_id)
274 ti = _get_ti(task, args.execution_date_or_run_id)
275 ti.init_run_context(raw=args.raw)
276
277 hostname = get_hostname()
278
279 print(f"Running {ti} on host {hostname}")
280
281 if args.interactive:
282 _run_task_by_selected_method(args, dag, ti)
283 else:
284 with _capture_task_logs(ti):
285 _run_task_by_selected_method(args, dag, ti)
286
287
288 @cli_utils.action_logging
289 def task_failed_deps(args):
290 """
291 Returns the unmet dependencies for a task instance from the perspective of the
292 scheduler (i.e. why a task instance doesn't get scheduled and then queued by the
293 scheduler, and then run by an executor).
294 >>> airflow tasks failed-deps tutorial sleep 2015-01-01
295 Task instance dependencies not met:
296 Dagrun Running: Task instance's dagrun did not exist: Unknown reason
297 Trigger Rule: Task's trigger rule 'all_success' requires all upstream tasks
298 to have succeeded, but found 1 non-success(es).
299 """
300 dag = get_dag(args.subdir, args.dag_id)
301 task = dag.get_task(task_id=args.task_id)
302 ti = _get_ti(task, args.execution_date_or_run_id)
303
304 dep_context = DepContext(deps=SCHEDULER_QUEUED_DEPS)
305 failed_deps = list(ti.get_failed_dep_statuses(dep_context=dep_context))
306 # TODO, Do we want to print or log this
307 if failed_deps:
308 print("Task instance dependencies not met:")
309 for dep in failed_deps:
310 print(f"{dep.dep_name}: {dep.reason}")
311 else:
312 print("Task instance dependencies are all met.")
313
314
315 @cli_utils.action_logging
316 @suppress_logs_and_warning
317 def task_state(args):
318 """
319 Returns the state of a TaskInstance at the command line.
320 >>> airflow tasks state tutorial sleep 2015-01-01
321 success
322 """
323 dag = get_dag(args.subdir, args.dag_id)
324 task = dag.get_task(task_id=args.task_id)
325 ti = _get_ti(task, args.execution_date_or_run_id)
326 print(ti.current_state())
327
328
329 @cli_utils.action_logging
330 @suppress_logs_and_warning
331 def task_list(args, dag=None):
332 """Lists the tasks within a DAG at the command line"""
333 dag = dag or get_dag(args.subdir, args.dag_id)
334 if args.tree:
335 dag.tree_view()
336 else:
337 tasks = sorted(t.task_id for t in dag.tasks)
338 print("\n".join(tasks))
339
340
341 SUPPORTED_DEBUGGER_MODULES: List[str] = [
342 "pudb",
343 "web_pdb",
344 "ipdb",
345 "pdb",
346 ]
347
348
349 def _guess_debugger():
350 """
351 Trying to guess the debugger used by the user. When it doesn't find any user-installed debugger,
352 returns ``pdb``.
353
354 List of supported debuggers:
355
356 * `pudb <https://github.com/inducer/pudb>`__
357 * `web_pdb <https://github.com/romanvm/python-web-pdb>`__
358 * `ipdb <https://github.com/gotcha/ipdb>`__
359 * `pdb <https://docs.python.org/3/library/pdb.html>`__
360 """
361 for mod in SUPPORTED_DEBUGGER_MODULES:
362 try:
363 return importlib.import_module(mod)
364 except ImportError:
365 continue
366 return importlib.import_module("pdb")
367
368
369 @cli_utils.action_logging
370 @suppress_logs_and_warning
371 @provide_session
372 def task_states_for_dag_run(args, session=None):
373 """Get the status of all task instances in a DagRun"""
374 dag_run = (
375 session.query(DagRun)
376 .filter(DagRun.run_id == args.execution_date_or_run_id, DagRun.dag_id == args.dag_id)
377 .one_or_none()
378 )
379 if not dag_run:
380 try:
381 execution_date = timezone.parse(args.execution_date_or_run_id)
382 dag_run = (
383 session.query(DagRun)
384 .filter(DagRun.execution_date == execution_date, DagRun.dag_id == args.dag_id)
385 .one_or_none()
386 )
387 except (ParserError, TypeError) as err:
388 raise AirflowException(f"Error parsing the supplied execution_date. Error: {str(err)}")
389
390 if dag_run is None:
391 raise DagRunNotFound(
392 f"DagRun for {args.dag_id} with run_id or execution_date of {args.execution_date_or_run_id!r} "
393 "not found"
394 )
395
396 AirflowConsole().print_as(
397 data=dag_run.task_instances,
398 output=args.output,
399 mapper=lambda ti: {
400 "dag_id": ti.dag_id,
401 "execution_date": dag_run.execution_date.isoformat(),
402 "task_id": ti.task_id,
403 "state": ti.state,
404 "start_date": ti.start_date.isoformat() if ti.start_date else "",
405 "end_date": ti.end_date.isoformat() if ti.end_date else "",
406 },
407 )
408
409
410 @cli_utils.action_logging
411 def task_test(args, dag=None):
412 """Tests task for a given dag_id"""
413 # We want to log output from operators etc to show up here. Normally
414 # airflow.task would redirect to a file, but here we want it to propagate
415 # up to the normal airflow handler.
416
417 settings.MASK_SECRETS_IN_LOGS = True
418
419 handlers = logging.getLogger('airflow.task').handlers
420 already_has_stream_handler = False
421 for handler in handlers:
422 already_has_stream_handler = isinstance(handler, logging.StreamHandler)
423 if already_has_stream_handler:
424 break
425 if not already_has_stream_handler:
426 logging.getLogger('airflow.task').propagate = True
427
428 env_vars = {'AIRFLOW_TEST_MODE': 'True'}
429 if args.env_vars:
430 env_vars.update(args.env_vars)
431 os.environ.update(env_vars)
432
433 dag = dag or get_dag(args.subdir, args.dag_id)
434
435 task = dag.get_task(task_id=args.task_id)
436 # Add CLI provided task_params to task.params
437 if args.task_params:
438 passed_in_params = json.loads(args.task_params)
439 task.params.update(passed_in_params)
440 ti = _get_ti(task, args.execution_date_or_run_id, create_if_necssary=True)
441
442 try:
443 if args.dry_run:
444 ti.dry_run()
445 else:
446 ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
447 except Exception:
448 if args.post_mortem:
449 debugger = _guess_debugger()
450 debugger.post_mortem()
451 else:
452 raise
453 finally:
454 if not already_has_stream_handler:
455 # Make sure to reset back to normal. When run for CLI this doesn't
456 # matter, but it does for test suite
457 logging.getLogger('airflow.task').propagate = False
458
459
460 @cli_utils.action_logging
461 @suppress_logs_and_warning
462 def task_render(args):
463 """Renders and displays templated fields for a given task"""
464 dag = get_dag(args.subdir, args.dag_id)
465 task = dag.get_task(task_id=args.task_id)
466 ti = _get_ti(task, args.execution_date_or_run_id, create_if_necssary=True)
467 ti.render_templates()
468 for attr in task.__class__.template_fields:
469 print(
470 textwrap.dedent(
471 f""" # ----------------------------------------------------------
472 # property: {attr}
473 # ----------------------------------------------------------
474 {getattr(task, attr)}
475 """
476 )
477 )
478
479
480 @cli_utils.action_logging
481 def task_clear(args):
482 """Clears all task instances or only those matched by regex for a DAG(s)"""
483 logging.basicConfig(level=settings.LOGGING_LEVEL, format=settings.SIMPLE_LOG_FORMAT)
484
485 if args.dag_id and not args.subdir and not args.dag_regex and not args.task_regex:
486 dags = [get_dag_by_file_location(args.dag_id)]
487 else:
488 # todo clear command only accepts a single dag_id. no reason for get_dags with 's' except regex?
489 dags = get_dags(args.subdir, args.dag_id, use_regex=args.dag_regex)
490
491 if args.task_regex:
492 for idx, dag in enumerate(dags):
493 dags[idx] = dag.partial_subset(
494 task_ids_or_regex=args.task_regex,
495 include_downstream=args.downstream,
496 include_upstream=args.upstream,
497 )
498
499 DAG.clear_dags(
500 dags,
501 start_date=args.start_date,
502 end_date=args.end_date,
503 only_failed=args.only_failed,
504 only_running=args.only_running,
505 confirm_prompt=not args.yes,
506 include_subdags=not args.exclude_subdags,
507 include_parentdag=not args.exclude_parentdag,
508 )
509
[end of airflow/cli/commands/task_command.py]
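The `_capture_task_logs` context manager above temporarily swaps the root logger's handlers and redirects stdout/stderr into the task's log via StreamLogWriter. A self-contained sketch of the same redirection pattern, using only the standard library (`StreamToLogger` is a hypothetical stand-in for airflow's StreamLogWriter, not the real class):

```python
import contextlib
import logging


class StreamToLogger:
    """File-like object that forwards write() calls to a logger."""

    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        if message.strip():                      # skip the bare newlines print() emits
            self.logger.log(self.level, message.rstrip())

    def flush(self):
        pass                                     # nothing to flush; required by the file API


logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("demo.task")

with contextlib.redirect_stdout(StreamToLogger(log, logging.INFO)), \
     contextlib.redirect_stderr(StreamToLogger(log, logging.WARNING)):
    print("captured as INFO instead of going to the real stdout")
```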
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
afd4ba666149b27a4aab7e15c8d76ed1fd4f134a
|
Exception within LocalTaskJob._run_mini_scheduler_on_child_tasks breaks Sentry Handler
### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon @ file:///root/.cache/pypoetry/artifacts/7f/f7/23/fc7fd3543aa486275ef0385c29063ff0dc391b0fc95dc5aa6cab2cf4e5/apache_airflow_providers_amazon-2.2.0-py3-none-any.whl
apache-airflow-providers-celery @ file:///root/.cache/pypoetry/artifacts/14/80/39/0d9d57205da1d24189ac9c18eb3477664ed2c2618c1467c9809b9a2fbf/apache_airflow_providers_celery-2.0.0-py3-none-any.whl
apache-airflow-providers-ftp @ file:///root/.cache/pypoetry/artifacts/a5/13/da/bf14abc40193a1ee1b82bbd800e3ac230427d7684b9d40998ac3684bef/apache_airflow_providers_ftp-2.0.1-py3-none-any.whl
apache-airflow-providers-http @ file:///root/.cache/pypoetry/artifacts/fc/d7/d2/73c89ef847bbae1704fa403d7e92dba1feead757aae141613980db40ff/apache_airflow_providers_http-2.0.0-py3-none-any.whl
apache-airflow-providers-imap @ file:///root/.cache/pypoetry/artifacts/af/5d/de/21c10bfc7ac076a415dcc3fc909317547e77e38c005487552cf40ddd97/apache_airflow_providers_imap-2.0.1-py3-none-any.whl
apache-airflow-providers-postgres @ file:///root/.cache/pypoetry/artifacts/50/27/e0/9b0d8f4c0abf59967bb87a04a93d73896d9a4558994185dd8bc43bb67f/apache_airflow_providers_postgres-2.2.0-py3-none-any.whl
apache-airflow-providers-redis @ file:///root/.cache/pypoetry/artifacts/7d/95/03/5d2a65ace88ae9a9ce9134b927b1e9639c8680c13a31e58425deae55d1/apache_airflow_providers_redis-2.0.1-py3-none-any.whl
apache-airflow-providers-sqlite @ file:///root/.cache/pypoetry/artifacts/ec/e6/a3/e0d81fef662ccf79609e7d2c4e4440839a464771fd2a002d252c9a401d/apache_airflow_providers_sqlite-2.0.1-py3-none-any.whl
```
### Deployment
Other Docker-based deployment
### Deployment details
We are using the Sentry integration
### What happened
An exception within LocalTaskJob's mini scheduler was handled incorrectly by the Sentry integration's 'enrich_errors' method. This is because it assumes it is applied to a method of a TaskInstance
```
TypeError: cannot pickle 'dict_keys' object
File "airflow/sentry.py", line 166, in wrapper
return func(task_instance, *args, **kwargs)
File "airflow/jobs/local_task_job.py", line 241, in _run_mini_scheduler_on_child_tasks
partial_dag = task.dag.partial_subset(
File "airflow/models/dag.py", line 1487, in partial_subset
dag.task_dict = {
File "airflow/models/dag.py", line 1488, in <dictcomp>
t.task_id: copy.deepcopy(t, {id(t.dag): dag}) # type: ignore
File "copy.py", line 153, in deepcopy
y = copier(memo)
File "airflow/models/baseoperator.py", line 970, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "copy.py", line 161, in deepcopy
rv = reductor(4)
AttributeError: 'LocalTaskJob' object has no attribute 'task'
File "airflow", line 8, in <module>
sys.exit(main())
File "airflow/__main__.py", line 40, in main
args.func(args)
File "airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "airflow/utils/cli.py", line 91, in wrapper
return f(*args, **kwargs)
File "airflow/cli/commands/task_command.py", line 238, in task_run
_run_task_by_selected_method(args, dag, ti)
File "airflow/cli/commands/task_command.py", line 64, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "airflow/cli/commands/task_command.py", line 121, in _run_task_by_local_task_job
run_job.run()
File "airflow/jobs/base_job.py", line 245, in run
self._execute()
File "airflow/jobs/local_task_job.py", line 128, in _execute
self.handle_task_exit(return_code)
File "airflow/jobs/local_task_job.py", line 166, in handle_task_exit
self._run_mini_scheduler_on_child_tasks()
File "airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "airflow/sentry.py", line 168, in wrapper
self.add_tagging(task_instance)
File "airflow/sentry.py", line 119, in add_tagging
task = task_instance.task
```
### What you expected to happen
The error to be handled correctly and passed on to Sentry without raising another exception within the error handling system
### How to reproduce
In this case we were trying to backfill a task for a DAG that, at that point, had a compilation error. This is quite an edge case, yes :-)
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
2021-09-09T13:42:04Z
|
<patch>
diff --git a/airflow/sentry.py b/airflow/sentry.py
--- a/airflow/sentry.py
+++ b/airflow/sentry.py
@@ -144,11 +144,14 @@ def add_breadcrumbs(self, task_instance, session=None):
sentry_sdk.add_breadcrumb(category="completed_tasks", data=data, level="info")
def enrich_errors(self, func):
- """Wrap TaskInstance._run_raw_task to support task specific tags and breadcrumbs."""
+ """
+ Wrap TaskInstance._run_raw_task and LocalTaskJob._run_mini_scheduler_on_child_tasks
+ to support task specific tags and breadcrumbs.
+ """
session_args_idx = find_session_idx(func)
@wraps(func)
- def wrapper(task_instance, *args, **kwargs):
+ def wrapper(_self, *args, **kwargs):
# Wrapping the _run_raw_task function with push_scope to contain
# tags and breadcrumbs to a specific Task Instance
@@ -159,8 +162,14 @@ def wrapper(task_instance, *args, **kwargs):
with sentry_sdk.push_scope():
try:
- return func(task_instance, *args, **kwargs)
+ return func(_self, *args, **kwargs)
except Exception as e:
+ # Is a LocalTaskJob get the task instance
+ if hasattr(_self, 'task_instance'):
+ task_instance = _self.task_instance
+ else:
+ task_instance = _self
+
self.add_tagging(task_instance)
self.add_breadcrumbs(task_instance, session=session)
sentry_sdk.capture_exception(e)
</patch>
|
[]
|
[]
| ||||
pypa__pip-10846
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`--use-deprecated=html5lib` does not parse links, even though they're present
### Description
When using pip 22.0 with `--use-deprecated=html5lib` and JFrog as the package index, pip throws the error: `ERROR: No matching distribution found for requests`
Tested with the "requests" package on Windows 10 using pip 22.0 (fails) and pip 21.3.1 (works)
### Expected behavior
`--use-deprecated=html5lib` should allow JFrog indexes to work.
### pip version
22.0
### Python version
3.10
### OS
Windows
### How to Reproduce
Install a package from a JFrog index using pip 22.0
### Output
```sh-session
C:\>python -m pip install -vvv requests --use-deprecated=html5lib
Using pip 22.0 from <corporate_local_path>\lib\site-packages\pip (python 3.10)
Non-user install by explicit request
Created temporary directory: <corporate_user_path>\AppData\Local\Temp\pip-ephem-wheel-cache-4a5e6ucc
Created temporary directory: <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Initialized build tracking at <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Created build tracker: <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Entered build tracker: <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Created temporary directory: <corporate_user_path>\AppData\Local\Temp\pip-install-_cnfjhxu
Looking in indexes: http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple
1 location(s) to search for versions of requests:
* http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Fetching project page and analyzing links: http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Getting page http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Found index url http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple
Looking up http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/ in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTP connection (1): <corporate_domain>:80
http://<corporate_domain>:80 "GET /artifactory/api/pypi/pypi-release/simple/requests/ HTTP/1.1" 200 None
Updating cache with response from http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Skipping link: not a file: http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Given no hashes to check 0 links for project 'requests': discarding no candidates
ERROR: Could not find a version that satisfies the requirement requests (from versions: none)
ERROR: No matching distribution found for requests
Exception information:
Traceback (most recent call last):
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
raise RequirementsConflicted(criterion)
pip._vendor.resolvelib.resolvers.RequirementsConflicted: Requirements conflict: SpecifierRequirement('requests')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<corporate_local_path>\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 94, in resolve
result = self._result = resolver.resolve(
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 350, in resolve
raise ResolutionImpossible(e.criterion.information)
pip._vendor.resolvelib.resolvers.ResolutionImpossible: [RequirementInformation(requirement=SpecifierRequirement('requests'), parent=None)]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<corporate_local_path>\lib\site-packages\pip\_internal\cli\base_command.py", line 165, in exc_logging_wrapper
status = run_func(*args)
File "<corporate_local_path>\lib\site-packages\pip\_internal\cli\req_command.py", line 205, in wrapper
return func(self, options, args)
File "<corporate_local_path>\lib\site-packages\pip\_internal\commands\install.py", line 339, in run
requirement_set = resolver.resolve(
File "<corporate_local_path>\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 103, in resolve
raise error from e
pip._internal.exceptions.DistributionNotFound: No matching distribution found for requests
Removed build tracker: '<corporate_user_path>\\AppData\\Local\\Temp\\pip-req-tracker-p0zhtye3'
```
### Code of Conduct
- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).
</issue>
<code>
[start of README.rst]
1 pip - The Python Package Installer
2 ==================================
3
4 .. image:: https://img.shields.io/pypi/v/pip.svg
5 :target: https://pypi.org/project/pip/
6
7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest
8 :target: https://pip.pypa.io/en/latest
9
10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes.
11
12 Please take a look at our documentation for how to install and use pip:
13
14 * `Installation`_
15 * `Usage`_
16
17 We release updates regularly, with a new version every 3 months. Find more details in our documentation:
18
19 * `Release notes`_
20 * `Release process`_
21
22 In pip 20.3, we've `made a big improvement to the heart of pip`_; `learn more`_. We want your input, so `sign up for our user experience research studies`_ to help us do it right.
23
24 **Note**: pip 21.0, in January 2021, removed Python 2 support, per pip's `Python 2 support policy`_. Please migrate to Python 3.
25
26 If you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms:
27
28 * `Issue tracking`_
29 * `Discourse channel`_
30 * `User IRC`_
31
32 If you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms:
33
34 * `GitHub page`_
35 * `Development documentation`_
36 * `Development mailing list`_
37 * `Development IRC`_
38
39 Code of Conduct
40 ---------------
41
42 Everyone interacting in the pip project's codebases, issue trackers, chat
43 rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_.
44
45 .. _package installer: https://packaging.python.org/guides/tool-recommendations/
46 .. _Python Package Index: https://pypi.org
47 .. _Installation: https://pip.pypa.io/en/stable/installation/
48 .. _Usage: https://pip.pypa.io/en/stable/
49 .. _Release notes: https://pip.pypa.io/en/stable/news.html
50 .. _Release process: https://pip.pypa.io/en/latest/development/release-process/
51 .. _GitHub page: https://github.com/pypa/pip
52 .. _Development documentation: https://pip.pypa.io/en/latest/development
53 .. _made a big improvement to the heart of pip: https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html
54 .. _learn more: https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020
55 .. _sign up for our user experience research studies: https://pyfound.blogspot.com/2020/03/new-pip-resolver-to-roll-out-this-year.html
56 .. _Python 2 support policy: https://pip.pypa.io/en/latest/development/release-process/#python-2-support
57 .. _Issue tracking: https://github.com/pypa/pip/issues
58 .. _Discourse channel: https://discuss.python.org/c/packaging
59 .. _Development mailing list: https://mail.python.org/mailman3/lists/distutils-sig.python.org/
60 .. _User IRC: https://kiwiirc.com/nextclient/#ircs://irc.libera.chat:+6697/pypa
61 .. _Development IRC: https://kiwiirc.com/nextclient/#ircs://irc.libera.chat:+6697/pypa-dev
62 .. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md
63
[end of README.rst]
[start of src/pip/_internal/commands/install.py]
1 import errno
2 import operator
3 import os
4 import shutil
5 import site
6 from optparse import SUPPRESS_HELP, Values
7 from typing import Iterable, List, Optional
8
9 from pip._vendor.packaging.utils import canonicalize_name
10
11 from pip._internal.cache import WheelCache
12 from pip._internal.cli import cmdoptions
13 from pip._internal.cli.cmdoptions import make_target_python
14 from pip._internal.cli.req_command import (
15 RequirementCommand,
16 warn_if_run_as_root,
17 with_cleanup,
18 )
19 from pip._internal.cli.status_codes import ERROR, SUCCESS
20 from pip._internal.exceptions import CommandError, InstallationError
21 from pip._internal.locations import get_scheme
22 from pip._internal.metadata import get_environment
23 from pip._internal.models.format_control import FormatControl
24 from pip._internal.operations.check import ConflictDetails, check_install_conflicts
25 from pip._internal.req import install_given_reqs
26 from pip._internal.req.req_install import InstallRequirement
27 from pip._internal.req.req_tracker import get_requirement_tracker
28 from pip._internal.utils.compat import WINDOWS
29 from pip._internal.utils.distutils_args import parse_distutils_args
30 from pip._internal.utils.filesystem import test_writable_dir
31 from pip._internal.utils.logging import getLogger
32 from pip._internal.utils.misc import (
33 ensure_dir,
34 get_pip_version,
35 protect_pip_from_modification_on_windows,
36 write_output,
37 )
38 from pip._internal.utils.temp_dir import TempDirectory
39 from pip._internal.utils.virtualenv import (
40 running_under_virtualenv,
41 virtualenv_no_global,
42 )
43 from pip._internal.wheel_builder import (
44 BinaryAllowedPredicate,
45 build,
46 should_build_for_install_command,
47 )
48
49 logger = getLogger(__name__)
50
51
52 def get_check_binary_allowed(format_control: FormatControl) -> BinaryAllowedPredicate:
53 def check_binary_allowed(req: InstallRequirement) -> bool:
54 canonical_name = canonicalize_name(req.name or "")
55 allowed_formats = format_control.get_allowed_formats(canonical_name)
56 return "binary" in allowed_formats
57
58 return check_binary_allowed
59
60
61 class InstallCommand(RequirementCommand):
62 """
63 Install packages from:
64
65 - PyPI (and other indexes) using requirement specifiers.
66 - VCS project urls.
67 - Local project directories.
68 - Local or remote source archives.
69
70 pip also supports installing from "requirements files", which provide
71 an easy way to specify a whole environment to be installed.
72 """
73
74 usage = """
75 %prog [options] <requirement specifier> [package-index-options] ...
76 %prog [options] -r <requirements file> [package-index-options] ...
77 %prog [options] [-e] <vcs project url> ...
78 %prog [options] [-e] <local project path> ...
79 %prog [options] <archive url/path> ..."""
80
81 def add_options(self) -> None:
82 self.cmd_opts.add_option(cmdoptions.requirements())
83 self.cmd_opts.add_option(cmdoptions.constraints())
84 self.cmd_opts.add_option(cmdoptions.no_deps())
85 self.cmd_opts.add_option(cmdoptions.pre())
86
87 self.cmd_opts.add_option(cmdoptions.editable())
88 self.cmd_opts.add_option(
89 "-t",
90 "--target",
91 dest="target_dir",
92 metavar="dir",
93 default=None,
94 help=(
95 "Install packages into <dir>. "
96 "By default this will not replace existing files/folders in "
97 "<dir>. Use --upgrade to replace existing packages in <dir> "
98 "with new versions."
99 ),
100 )
101 cmdoptions.add_target_python_options(self.cmd_opts)
102
103 self.cmd_opts.add_option(
104 "--user",
105 dest="use_user_site",
106 action="store_true",
107 help=(
108 "Install to the Python user install directory for your "
109 "platform. Typically ~/.local/, or %APPDATA%\\Python on "
110 "Windows. (See the Python documentation for site.USER_BASE "
111 "for full details.)"
112 ),
113 )
114 self.cmd_opts.add_option(
115 "--no-user",
116 dest="use_user_site",
117 action="store_false",
118 help=SUPPRESS_HELP,
119 )
120 self.cmd_opts.add_option(
121 "--root",
122 dest="root_path",
123 metavar="dir",
124 default=None,
125 help="Install everything relative to this alternate root directory.",
126 )
127 self.cmd_opts.add_option(
128 "--prefix",
129 dest="prefix_path",
130 metavar="dir",
131 default=None,
132 help=(
133 "Installation prefix where lib, bin and other top-level "
134 "folders are placed"
135 ),
136 )
137
138 self.cmd_opts.add_option(cmdoptions.src())
139
140 self.cmd_opts.add_option(
141 "-U",
142 "--upgrade",
143 dest="upgrade",
144 action="store_true",
145 help=(
146 "Upgrade all specified packages to the newest available "
147 "version. The handling of dependencies depends on the "
148 "upgrade-strategy used."
149 ),
150 )
151
152 self.cmd_opts.add_option(
153 "--upgrade-strategy",
154 dest="upgrade_strategy",
155 default="only-if-needed",
156 choices=["only-if-needed", "eager"],
157 help=(
158 "Determines how dependency upgrading should be handled "
159 "[default: %default]. "
160 '"eager" - dependencies are upgraded regardless of '
161 "whether the currently installed version satisfies the "
162 "requirements of the upgraded package(s). "
163 '"only-if-needed" - are upgraded only when they do not '
164 "satisfy the requirements of the upgraded package(s)."
165 ),
166 )
167
168 self.cmd_opts.add_option(
169 "--force-reinstall",
170 dest="force_reinstall",
171 action="store_true",
172 help="Reinstall all packages even if they are already up-to-date.",
173 )
174
175 self.cmd_opts.add_option(
176 "-I",
177 "--ignore-installed",
178 dest="ignore_installed",
179 action="store_true",
180 help=(
181 "Ignore the installed packages, overwriting them. "
182 "This can break your system if the existing package "
183 "is of a different version or was installed "
184 "with a different package manager!"
185 ),
186 )
187
188 self.cmd_opts.add_option(cmdoptions.ignore_requires_python())
189 self.cmd_opts.add_option(cmdoptions.no_build_isolation())
190 self.cmd_opts.add_option(cmdoptions.use_pep517())
191 self.cmd_opts.add_option(cmdoptions.no_use_pep517())
192
193 self.cmd_opts.add_option(cmdoptions.install_options())
194 self.cmd_opts.add_option(cmdoptions.global_options())
195
196 self.cmd_opts.add_option(
197 "--compile",
198 action="store_true",
199 dest="compile",
200 default=True,
201 help="Compile Python source files to bytecode",
202 )
203
204 self.cmd_opts.add_option(
205 "--no-compile",
206 action="store_false",
207 dest="compile",
208 help="Do not compile Python source files to bytecode",
209 )
210
211 self.cmd_opts.add_option(
212 "--no-warn-script-location",
213 action="store_false",
214 dest="warn_script_location",
215 default=True,
216 help="Do not warn when installing scripts outside PATH",
217 )
218 self.cmd_opts.add_option(
219 "--no-warn-conflicts",
220 action="store_false",
221 dest="warn_about_conflicts",
222 default=True,
223 help="Do not warn about broken dependencies",
224 )
225
226 self.cmd_opts.add_option(cmdoptions.no_binary())
227 self.cmd_opts.add_option(cmdoptions.only_binary())
228 self.cmd_opts.add_option(cmdoptions.prefer_binary())
229 self.cmd_opts.add_option(cmdoptions.require_hashes())
230 self.cmd_opts.add_option(cmdoptions.progress_bar())
231
232 index_opts = cmdoptions.make_option_group(
233 cmdoptions.index_group,
234 self.parser,
235 )
236
237 self.parser.insert_option_group(0, index_opts)
238 self.parser.insert_option_group(0, self.cmd_opts)
239
240 @with_cleanup
241 def run(self, options: Values, args: List[str]) -> int:
242 if options.use_user_site and options.target_dir is not None:
243 raise CommandError("Can not combine '--user' and '--target'")
244
245 cmdoptions.check_install_build_global(options)
246 upgrade_strategy = "to-satisfy-only"
247 if options.upgrade:
248 upgrade_strategy = options.upgrade_strategy
249
250 cmdoptions.check_dist_restriction(options, check_target=True)
251
252 install_options = options.install_options or []
253
254 logger.verbose("Using %s", get_pip_version())
255 options.use_user_site = decide_user_install(
256 options.use_user_site,
257 prefix_path=options.prefix_path,
258 target_dir=options.target_dir,
259 root_path=options.root_path,
260 isolated_mode=options.isolated_mode,
261 )
262
263 target_temp_dir: Optional[TempDirectory] = None
264 target_temp_dir_path: Optional[str] = None
265 if options.target_dir:
266 options.ignore_installed = True
267 options.target_dir = os.path.abspath(options.target_dir)
268 if (
269 # fmt: off
270 os.path.exists(options.target_dir) and
271 not os.path.isdir(options.target_dir)
272 # fmt: on
273 ):
274 raise CommandError(
275 "Target path exists but is not a directory, will not continue."
276 )
277
278 # Create a target directory for using with the target option
279 target_temp_dir = TempDirectory(kind="target")
280 target_temp_dir_path = target_temp_dir.path
281 self.enter_context(target_temp_dir)
282
283 global_options = options.global_options or []
284
285 session = self.get_default_session(options)
286
287 target_python = make_target_python(options)
288 finder = self._build_package_finder(
289 options=options,
290 session=session,
291 target_python=target_python,
292 ignore_requires_python=options.ignore_requires_python,
293 )
294 wheel_cache = WheelCache(options.cache_dir, options.format_control)
295
296 req_tracker = self.enter_context(get_requirement_tracker())
297
298 directory = TempDirectory(
299 delete=not options.no_clean,
300 kind="install",
301 globally_managed=True,
302 )
303
304 try:
305 reqs = self.get_requirements(args, options, finder, session)
306
307 # Only when installing is it permitted to use PEP 660.
308 # In other circumstances (pip wheel, pip download) we generate
309 # regular (i.e. non editable) metadata and wheels.
310 for req in reqs:
311 req.permit_editable_wheels = True
312
313 reject_location_related_install_options(reqs, options.install_options)
314
315 preparer = self.make_requirement_preparer(
316 temp_build_dir=directory,
317 options=options,
318 req_tracker=req_tracker,
319 session=session,
320 finder=finder,
321 use_user_site=options.use_user_site,
322 verbosity=self.verbosity,
323 )
324 resolver = self.make_resolver(
325 preparer=preparer,
326 finder=finder,
327 options=options,
328 wheel_cache=wheel_cache,
329 use_user_site=options.use_user_site,
330 ignore_installed=options.ignore_installed,
331 ignore_requires_python=options.ignore_requires_python,
332 force_reinstall=options.force_reinstall,
333 upgrade_strategy=upgrade_strategy,
334 use_pep517=options.use_pep517,
335 )
336
337 self.trace_basic_info(finder)
338
339 requirement_set = resolver.resolve(
340 reqs, check_supported_wheels=not options.target_dir
341 )
342
343 try:
344 pip_req = requirement_set.get_requirement("pip")
345 except KeyError:
346 modifying_pip = False
347 else:
348 # If we're not replacing an already installed pip,
349 # we're not modifying it.
350 modifying_pip = pip_req.satisfied_by is None
351 protect_pip_from_modification_on_windows(modifying_pip=modifying_pip)
352
353 check_binary_allowed = get_check_binary_allowed(finder.format_control)
354
355 reqs_to_build = [
356 r
357 for r in requirement_set.requirements.values()
358 if should_build_for_install_command(r, check_binary_allowed)
359 ]
360
361 _, build_failures = build(
362 reqs_to_build,
363 wheel_cache=wheel_cache,
364 verify=True,
365 build_options=[],
366 global_options=[],
367 )
368
369 # If we're using PEP 517, we cannot do a legacy setup.py install
370 # so we fail here.
371 pep517_build_failure_names: List[str] = [
372 r.name for r in build_failures if r.use_pep517 # type: ignore
373 ]
374 if pep517_build_failure_names:
375 raise InstallationError(
376 "Could not build wheels for {}, which is required to "
377 "install pyproject.toml-based projects".format(
378 ", ".join(pep517_build_failure_names)
379 )
380 )
381
382 # For now, we just warn about failures building legacy
383 # requirements, as we'll fall through to a setup.py install for
384 # those.
385 for r in build_failures:
386 if not r.use_pep517:
387 r.legacy_install_reason = 8368
388
389 to_install = resolver.get_installation_order(requirement_set)
390
391 # Check for conflicts in the package set we're installing.
392 conflicts: Optional[ConflictDetails] = None
393 should_warn_about_conflicts = (
394 not options.ignore_dependencies and options.warn_about_conflicts
395 )
396 if should_warn_about_conflicts:
397 conflicts = self._determine_conflicts(to_install)
398
399 # Don't warn about script install locations if
400 # --target or --prefix has been specified
401 warn_script_location = options.warn_script_location
402 if options.target_dir or options.prefix_path:
403 warn_script_location = False
404
405 installed = install_given_reqs(
406 to_install,
407 install_options,
408 global_options,
409 root=options.root_path,
410 home=target_temp_dir_path,
411 prefix=options.prefix_path,
412 warn_script_location=warn_script_location,
413 use_user_site=options.use_user_site,
414 pycompile=options.compile,
415 )
416
417 lib_locations = get_lib_location_guesses(
418 user=options.use_user_site,
419 home=target_temp_dir_path,
420 root=options.root_path,
421 prefix=options.prefix_path,
422 isolated=options.isolated_mode,
423 )
424 env = get_environment(lib_locations)
425
426 installed.sort(key=operator.attrgetter("name"))
427 items = []
428 for result in installed:
429 item = result.name
430 try:
431 installed_dist = env.get_distribution(item)
432 if installed_dist is not None:
433 item = f"{item}-{installed_dist.version}"
434 except Exception:
435 pass
436 items.append(item)
437
438 if conflicts is not None:
439 self._warn_about_conflicts(
440 conflicts,
441 resolver_variant=self.determine_resolver_variant(options),
442 )
443
444 installed_desc = " ".join(items)
445 if installed_desc:
446 write_output(
447 "Successfully installed %s",
448 installed_desc,
449 )
450 except OSError as error:
451 show_traceback = self.verbosity >= 1
452
453 message = create_os_error_message(
454 error,
455 show_traceback,
456 options.use_user_site,
457 )
458 logger.error(message, exc_info=show_traceback) # noqa
459
460 return ERROR
461
462 if options.target_dir:
463 assert target_temp_dir
464 self._handle_target_dir(
465 options.target_dir, target_temp_dir, options.upgrade
466 )
467
468 warn_if_run_as_root()
469 return SUCCESS
470
471 def _handle_target_dir(
472 self, target_dir: str, target_temp_dir: TempDirectory, upgrade: bool
473 ) -> None:
474 ensure_dir(target_dir)
475
476 # Checking both purelib and platlib directories for installed
477 # packages to be moved to target directory
478 lib_dir_list = []
479
480 # Checking both purelib and platlib directories for installed
481 # packages to be moved to target directory
482 scheme = get_scheme("", home=target_temp_dir.path)
483 purelib_dir = scheme.purelib
484 platlib_dir = scheme.platlib
485 data_dir = scheme.data
486
487 if os.path.exists(purelib_dir):
488 lib_dir_list.append(purelib_dir)
489 if os.path.exists(platlib_dir) and platlib_dir != purelib_dir:
490 lib_dir_list.append(platlib_dir)
491 if os.path.exists(data_dir):
492 lib_dir_list.append(data_dir)
493
494 for lib_dir in lib_dir_list:
495 for item in os.listdir(lib_dir):
496 if lib_dir == data_dir:
497 ddir = os.path.join(data_dir, item)
498 if any(s.startswith(ddir) for s in lib_dir_list[:-1]):
499 continue
500 target_item_dir = os.path.join(target_dir, item)
501 if os.path.exists(target_item_dir):
502 if not upgrade:
503 logger.warning(
504 "Target directory %s already exists. Specify "
505 "--upgrade to force replacement.",
506 target_item_dir,
507 )
508 continue
509 if os.path.islink(target_item_dir):
510 logger.warning(
511 "Target directory %s already exists and is "
512 "a link. pip will not automatically replace "
513 "links, please remove if replacement is "
514 "desired.",
515 target_item_dir,
516 )
517 continue
518 if os.path.isdir(target_item_dir):
519 shutil.rmtree(target_item_dir)
520 else:
521 os.remove(target_item_dir)
522
523 shutil.move(os.path.join(lib_dir, item), target_item_dir)
524
525 def _determine_conflicts(
526 self, to_install: List[InstallRequirement]
527 ) -> Optional[ConflictDetails]:
528 try:
529 return check_install_conflicts(to_install)
530 except Exception:
531 logger.exception(
532 "Error while checking for conflicts. Please file an issue on "
533 "pip's issue tracker: https://github.com/pypa/pip/issues/new"
534 )
535 return None
536
537 def _warn_about_conflicts(
538 self, conflict_details: ConflictDetails, resolver_variant: str
539 ) -> None:
540 package_set, (missing, conflicting) = conflict_details
541 if not missing and not conflicting:
542 return
543
544 parts: List[str] = []
545 if resolver_variant == "legacy":
546 parts.append(
547 "pip's legacy dependency resolver does not consider dependency "
548 "conflicts when selecting packages. This behaviour is the "
549 "source of the following dependency conflicts."
550 )
551 else:
552 assert resolver_variant == "2020-resolver"
553 parts.append(
554 "pip's dependency resolver does not currently take into account "
555 "all the packages that are installed. This behaviour is the "
556 "source of the following dependency conflicts."
557 )
558
559 # NOTE: There is some duplication here, with commands/check.py
560 for project_name in missing:
561 version = package_set[project_name][0]
562 for dependency in missing[project_name]:
563 message = (
564 "{name} {version} requires {requirement}, "
565 "which is not installed."
566 ).format(
567 name=project_name,
568 version=version,
569 requirement=dependency[1],
570 )
571 parts.append(message)
572
573 for project_name in conflicting:
574 version = package_set[project_name][0]
575 for dep_name, dep_version, req in conflicting[project_name]:
576 message = (
577 "{name} {version} requires {requirement}, but {you} have "
578 "{dep_name} {dep_version} which is incompatible."
579 ).format(
580 name=project_name,
581 version=version,
582 requirement=req,
583 dep_name=dep_name,
584 dep_version=dep_version,
585 you=("you" if resolver_variant == "2020-resolver" else "you'll"),
586 )
587 parts.append(message)
588
589 logger.critical("\n".join(parts))
590
591
592 def get_lib_location_guesses(
593 user: bool = False,
594 home: Optional[str] = None,
595 root: Optional[str] = None,
596 isolated: bool = False,
597 prefix: Optional[str] = None,
598 ) -> List[str]:
599 scheme = get_scheme(
600 "",
601 user=user,
602 home=home,
603 root=root,
604 isolated=isolated,
605 prefix=prefix,
606 )
607 return [scheme.purelib, scheme.platlib]
608
609
610 def site_packages_writable(root: Optional[str], isolated: bool) -> bool:
611 return all(
612 test_writable_dir(d)
613 for d in set(get_lib_location_guesses(root=root, isolated=isolated))
614 )
615
616
617 def decide_user_install(
618 use_user_site: Optional[bool],
619 prefix_path: Optional[str] = None,
620 target_dir: Optional[str] = None,
621 root_path: Optional[str] = None,
622 isolated_mode: bool = False,
623 ) -> bool:
624 """Determine whether to do a user install based on the input options.
625
626 If use_user_site is False, no additional checks are done.
627 If use_user_site is True, it is checked for compatibility with other
628 options.
629 If use_user_site is None, the default behaviour depends on the environment,
630 which is provided by the other arguments.
631 """
632 # In some cases (config from tox), use_user_site can be set to an integer
633 # rather than a bool, which 'use_user_site is False' wouldn't catch.
634 if (use_user_site is not None) and (not use_user_site):
635 logger.debug("Non-user install by explicit request")
636 return False
637
638 if use_user_site:
639 if prefix_path:
640 raise CommandError(
641 "Can not combine '--user' and '--prefix' as they imply "
642 "different installation locations"
643 )
644 if virtualenv_no_global():
645 raise InstallationError(
646 "Can not perform a '--user' install. User site-packages "
647 "are not visible in this virtualenv."
648 )
649 logger.debug("User install by explicit request")
650 return True
651
652 # If we are here, user installs have not been explicitly requested/avoided
653 assert use_user_site is None
654
655 # user install incompatible with --prefix/--target
656 if prefix_path or target_dir:
657 logger.debug("Non-user install due to --prefix or --target option")
658 return False
659
660 # If user installs are not enabled, choose a non-user install
661 if not site.ENABLE_USER_SITE:
662 logger.debug("Non-user install because user site-packages disabled")
663 return False
664
665 # If we have permission for a non-user install, do that,
666 # otherwise do a user install.
667 if site_packages_writable(root=root_path, isolated=isolated_mode):
668 logger.debug("Non-user install because site-packages writeable")
669 return False
670
671 logger.info(
672 "Defaulting to user installation because normal site-packages "
673 "is not writeable"
674 )
675 return True
676
677
678 def reject_location_related_install_options(
679 requirements: List[InstallRequirement], options: Optional[List[str]]
680 ) -> None:
681 """If any location-changing --install-option arguments were passed for
682 requirements or on the command-line, then show a deprecation warning.
683 """
684
685 def format_options(option_names: Iterable[str]) -> List[str]:
686 return ["--{}".format(name.replace("_", "-")) for name in option_names]
687
688 offenders = []
689
690 for requirement in requirements:
691 install_options = requirement.install_options
692 location_options = parse_distutils_args(install_options)
693 if location_options:
694 offenders.append(
695 "{!r} from {}".format(
696 format_options(location_options.keys()), requirement
697 )
698 )
699
700 if options:
701 location_options = parse_distutils_args(options)
702 if location_options:
703 offenders.append(
704 "{!r} from command line".format(format_options(location_options.keys()))
705 )
706
707 if not offenders:
708 return
709
710 raise CommandError(
711 "Location-changing options found in --install-option: {}."
712 " This is unsupported, use pip-level options like --user,"
713 " --prefix, --root, and --target instead.".format("; ".join(offenders))
714 )
715
716
717 def create_os_error_message(
718 error: OSError, show_traceback: bool, using_user_site: bool
719 ) -> str:
720 """Format an error message for an OSError
721
722 It may occur anytime during the execution of the install command.
723 """
724 parts = []
725
726 # Mention the error if we are not going to show a traceback
727 parts.append("Could not install packages due to an OSError")
728 if not show_traceback:
729 parts.append(": ")
730 parts.append(str(error))
731 else:
732 parts.append(".")
733
734     # Split the error indication from a helper message (if any)
735 parts[-1] += "\n"
736
737 # Suggest useful actions to the user:
738 # (1) using user site-packages or (2) verifying the permissions
739 if error.errno == errno.EACCES:
740 user_option_part = "Consider using the `--user` option"
741 permissions_part = "Check the permissions"
742
743 if not running_under_virtualenv() and not using_user_site:
744 parts.extend(
745 [
746 user_option_part,
747 " or ",
748 permissions_part.lower(),
749 ]
750 )
751 else:
752 parts.append(permissions_part)
753 parts.append(".\n")
754
755 # Suggest the user to enable Long Paths if path length is
756 # more than 260
757 if (
758 WINDOWS
759 and error.errno == errno.ENOENT
760 and error.filename
761 and len(error.filename) > 260
762 ):
763 parts.append(
764 "HINT: This error might have occurred since "
765 "this system does not have Windows Long Path "
766 "support enabled. You can find information on "
767 "how to enable this at "
768 "https://pip.pypa.io/warnings/enable-long-paths\n"
769 )
770
771 return "".join(parts).strip() + "\n"
772
[end of src/pip/_internal/commands/install.py]
[start of src/pip/_vendor/resolvelib/resolvers.py]
1 import collections
2 import operator
3
4 from .providers import AbstractResolver
5 from .structs import DirectedGraph, IteratorMapping, build_iter_view
6
7 RequirementInformation = collections.namedtuple(
8 "RequirementInformation", ["requirement", "parent"]
9 )
10
11
12 class ResolverException(Exception):
13 """A base class for all exceptions raised by this module.
14
15     Exceptions derived from this class should all be handled in this module. Any
16     bubbling past the resolver should be treated as a bug.
17 """
18
19
20 class RequirementsConflicted(ResolverException):
21 def __init__(self, criterion):
22 super(RequirementsConflicted, self).__init__(criterion)
23 self.criterion = criterion
24
25 def __str__(self):
26 return "Requirements conflict: {}".format(
27 ", ".join(repr(r) for r in self.criterion.iter_requirement()),
28 )
29
30
31 class InconsistentCandidate(ResolverException):
32 def __init__(self, candidate, criterion):
33 super(InconsistentCandidate, self).__init__(candidate, criterion)
34 self.candidate = candidate
35 self.criterion = criterion
36
37 def __str__(self):
38 return "Provided candidate {!r} does not satisfy {}".format(
39 self.candidate,
40 ", ".join(repr(r) for r in self.criterion.iter_requirement()),
41 )
42
43
44 class Criterion(object):
45 """Representation of possible resolution results of a package.
46
47 This holds three attributes:
48
49 * `information` is a collection of `RequirementInformation` pairs.
50 Each pair is a requirement contributing to this criterion, and the
51 candidate that provides the requirement.
52 * `incompatibilities` is a collection of all known not-to-work candidates
53 to exclude from consideration.
54     * `candidates` is a collection containing all possible candidates deduced
55 from the union of contributing requirements and known incompatibilities.
56 It should never be empty, except when the criterion is an attribute of a
57 raised `RequirementsConflicted` (in which case it is always empty).
58
59 .. note::
60 This class is intended to be externally immutable. **Do not** mutate
61 any of its attribute containers.
62 """
63
64 def __init__(self, candidates, information, incompatibilities):
65 self.candidates = candidates
66 self.information = information
67 self.incompatibilities = incompatibilities
68
69 def __repr__(self):
70 requirements = ", ".join(
71 "({!r}, via={!r})".format(req, parent)
72 for req, parent in self.information
73 )
74 return "Criterion({})".format(requirements)
75
76 def iter_requirement(self):
77 return (i.requirement for i in self.information)
78
79 def iter_parent(self):
80 return (i.parent for i in self.information)
81
82
83 class ResolutionError(ResolverException):
84 pass
85
86
87 class ResolutionImpossible(ResolutionError):
88 def __init__(self, causes):
89 super(ResolutionImpossible, self).__init__(causes)
90 # causes is a list of RequirementInformation objects
91 self.causes = causes
92
93
94 class ResolutionTooDeep(ResolutionError):
95 def __init__(self, round_count):
96 super(ResolutionTooDeep, self).__init__(round_count)
97 self.round_count = round_count
98
99
100 # Resolution state in a round.
101 State = collections.namedtuple("State", "mapping criteria backtrack_causes")
102
103
104 class Resolution(object):
105 """Stateful resolution object.
106
107 This is designed as a one-off object that holds information to kick start
108 the resolution process, and holds the results afterwards.
109 """
110
111 def __init__(self, provider, reporter):
112 self._p = provider
113 self._r = reporter
114 self._states = []
115
116 @property
117 def state(self):
118 try:
119 return self._states[-1]
120 except IndexError:
121 raise AttributeError("state")
122
123 def _push_new_state(self):
124 """Push a new state into history.
125
126 This new state will be used to hold resolution results of the next
127 coming round.
128 """
129 base = self._states[-1]
130 state = State(
131 mapping=base.mapping.copy(),
132 criteria=base.criteria.copy(),
133 backtrack_causes=base.backtrack_causes[:],
134 )
135 self._states.append(state)
136
137 def _add_to_criteria(self, criteria, requirement, parent):
138 self._r.adding_requirement(requirement=requirement, parent=parent)
139
140 identifier = self._p.identify(requirement_or_candidate=requirement)
141 criterion = criteria.get(identifier)
142 if criterion:
143 incompatibilities = list(criterion.incompatibilities)
144 else:
145 incompatibilities = []
146
147 matches = self._p.find_matches(
148 identifier=identifier,
149 requirements=IteratorMapping(
150 criteria,
151 operator.methodcaller("iter_requirement"),
152 {identifier: [requirement]},
153 ),
154 incompatibilities=IteratorMapping(
155 criteria,
156 operator.attrgetter("incompatibilities"),
157 {identifier: incompatibilities},
158 ),
159 )
160
161 if criterion:
162 information = list(criterion.information)
163 information.append(RequirementInformation(requirement, parent))
164 else:
165 information = [RequirementInformation(requirement, parent)]
166
167 criterion = Criterion(
168 candidates=build_iter_view(matches),
169 information=information,
170 incompatibilities=incompatibilities,
171 )
172 if not criterion.candidates:
173 raise RequirementsConflicted(criterion)
174 criteria[identifier] = criterion
175
176 def _get_preference(self, name):
177 return self._p.get_preference(
178 identifier=name,
179 resolutions=self.state.mapping,
180 candidates=IteratorMapping(
181 self.state.criteria,
182 operator.attrgetter("candidates"),
183 ),
184 information=IteratorMapping(
185 self.state.criteria,
186 operator.attrgetter("information"),
187 ),
188 backtrack_causes=self.state.backtrack_causes,
189 )
190
191 def _is_current_pin_satisfying(self, name, criterion):
192 try:
193 current_pin = self.state.mapping[name]
194 except KeyError:
195 return False
196 return all(
197 self._p.is_satisfied_by(requirement=r, candidate=current_pin)
198 for r in criterion.iter_requirement()
199 )
200
201 def _get_updated_criteria(self, candidate):
202 criteria = self.state.criteria.copy()
203 for requirement in self._p.get_dependencies(candidate=candidate):
204 self._add_to_criteria(criteria, requirement, parent=candidate)
205 return criteria
206
207 def _attempt_to_pin_criterion(self, name):
208 criterion = self.state.criteria[name]
209
210 causes = []
211 for candidate in criterion.candidates:
212 try:
213 criteria = self._get_updated_criteria(candidate)
214 except RequirementsConflicted as e:
215 causes.append(e.criterion)
216 continue
217
218 # Check the newly-pinned candidate actually works. This should
219 # always pass under normal circumstances, but in the case of a
220 # faulty provider, we will raise an error to notify the implementer
221 # to fix find_matches() and/or is_satisfied_by().
222 satisfied = all(
223 self._p.is_satisfied_by(requirement=r, candidate=candidate)
224 for r in criterion.iter_requirement()
225 )
226 if not satisfied:
227 raise InconsistentCandidate(candidate, criterion)
228
229 self._r.pinning(candidate=candidate)
230 self.state.criteria.update(criteria)
231
232 # Put newly-pinned candidate at the end. This is essential because
233 # backtracking looks at this mapping to get the last pin.
234 self.state.mapping.pop(name, None)
235 self.state.mapping[name] = candidate
236
237 return []
238
239 # All candidates tried, nothing works. This criterion is a dead
240 # end, signal for backtracking.
241 return causes
242
243 def _backtrack(self):
244 """Perform backtracking.
245
246 When we enter here, the stack is like this::
247
248 [ state Z ]
249 [ state Y ]
250 [ state X ]
251 .... earlier states are irrelevant.
252
253 1. No pins worked for Z, so it does not have a pin.
254 2. We want to reset state Y to unpinned, and pin another candidate.
255 3. State X holds what state Y was before the pin, but does not
256 have the incompatibility information gathered in state Y.
257
258 Each iteration of the loop will:
259
260 1. Discard Z.
261 2. Discard Y but remember its incompatibility information gathered
262 previously, and the failure we're dealing with right now.
263 3. Push a new state Y' based on X, and apply the incompatibility
264 information from Y to Y'.
265 4a. If this causes Y' to conflict, we need to backtrack again. Make Y'
266 the new Z and go back to step 2.
267 4b. If the incompatibilities apply cleanly, end backtracking.
268 """
269 while len(self._states) >= 3:
270 # Remove the state that triggered backtracking.
271 del self._states[-1]
272
273 # Retrieve the last candidate pin and known incompatibilities.
274 broken_state = self._states.pop()
275 name, candidate = broken_state.mapping.popitem()
276 incompatibilities_from_broken = [
277 (k, list(v.incompatibilities))
278 for k, v in broken_state.criteria.items()
279 ]
280
281 # Also mark the newly known incompatibility.
282 incompatibilities_from_broken.append((name, [candidate]))
283
284 self._r.backtracking(candidate=candidate)
285
286 # Create a new state from the last known-to-work one, and apply
287 # the previously gathered incompatibility information.
288 def _patch_criteria():
289 for k, incompatibilities in incompatibilities_from_broken:
290 if not incompatibilities:
291 continue
292 try:
293 criterion = self.state.criteria[k]
294 except KeyError:
295 continue
296 matches = self._p.find_matches(
297 identifier=k,
298 requirements=IteratorMapping(
299 self.state.criteria,
300 operator.methodcaller("iter_requirement"),
301 ),
302 incompatibilities=IteratorMapping(
303 self.state.criteria,
304 operator.attrgetter("incompatibilities"),
305 {k: incompatibilities},
306 ),
307 )
308 candidates = build_iter_view(matches)
309 if not candidates:
310 return False
311 incompatibilities.extend(criterion.incompatibilities)
312 self.state.criteria[k] = Criterion(
313 candidates=candidates,
314 information=list(criterion.information),
315 incompatibilities=incompatibilities,
316 )
317 return True
318
319 self._push_new_state()
320 success = _patch_criteria()
321
322 # It works! Let's work on this new state.
323 if success:
324 return True
325
326 # State does not work after applying known incompatibilities.
327 # Try the still previous state.
328
329 # No way to backtrack anymore.
330 return False
331
332 def resolve(self, requirements, max_rounds):
333 if self._states:
334 raise RuntimeError("already resolved")
335
336 self._r.starting()
337
338 # Initialize the root state.
339 self._states = [
340 State(
341 mapping=collections.OrderedDict(),
342 criteria={},
343 backtrack_causes=[],
344 )
345 ]
346 for r in requirements:
347 try:
348 self._add_to_criteria(self.state.criteria, r, parent=None)
349 except RequirementsConflicted as e:
350 raise ResolutionImpossible(e.criterion.information)
351
352 # The root state is saved as a sentinel so the first ever pin can have
353 # something to backtrack to if it fails. The root state is basically
354 # pinning the virtual "root" package in the graph.
355 self._push_new_state()
356
357 for round_index in range(max_rounds):
358 self._r.starting_round(index=round_index)
359
360 unsatisfied_names = [
361 key
362 for key, criterion in self.state.criteria.items()
363 if not self._is_current_pin_satisfying(key, criterion)
364 ]
365
366 # All criteria are accounted for. Nothing more to pin, we are done!
367 if not unsatisfied_names:
368 self._r.ending(state=self.state)
369 return self.state
370
371 # Choose the most preferred unpinned criterion to try.
372 name = min(unsatisfied_names, key=self._get_preference)
373 failure_causes = self._attempt_to_pin_criterion(name)
374
375 if failure_causes:
376 causes = [i for c in failure_causes for i in c.information]
377 # Backtrack if pinning fails. The backtrack process puts us in
378 # an unpinned state, so we can work on it in the next round.
379 self._r.resolving_conflicts(causes=causes)
380 success = self._backtrack()
381 self.state.backtrack_causes[:] = causes
382
383 # Dead ends everywhere. Give up.
384 if not success:
385 raise ResolutionImpossible(self.state.backtrack_causes)
386 else:
387 # Pinning was successful. Push a new state to do another pin.
388 self._push_new_state()
389
390 self._r.ending_round(index=round_index, state=self.state)
391
392 raise ResolutionTooDeep(max_rounds)
393
394
395 def _has_route_to_root(criteria, key, all_keys, connected):
396 if key in connected:
397 return True
398 if key not in criteria:
399 return False
400 for p in criteria[key].iter_parent():
401 try:
402 pkey = all_keys[id(p)]
403 except KeyError:
404 continue
405 if pkey in connected:
406 connected.add(key)
407 return True
408 if _has_route_to_root(criteria, pkey, all_keys, connected):
409 connected.add(key)
410 return True
411 return False
412
413
414 Result = collections.namedtuple("Result", "mapping graph criteria")
415
416
417 def _build_result(state):
418 mapping = state.mapping
419 all_keys = {id(v): k for k, v in mapping.items()}
420 all_keys[id(None)] = None
421
422 graph = DirectedGraph()
423 graph.add(None) # Sentinel as root dependencies' parent.
424
425 connected = {None}
426 for key, criterion in state.criteria.items():
427 if not _has_route_to_root(state.criteria, key, all_keys, connected):
428 continue
429 if key not in graph:
430 graph.add(key)
431 for p in criterion.iter_parent():
432 try:
433 pkey = all_keys[id(p)]
434 except KeyError:
435 continue
436 if pkey not in graph:
437 graph.add(pkey)
438 graph.connect(pkey, key)
439
440 return Result(
441 mapping={k: v for k, v in mapping.items() if k in connected},
442 graph=graph,
443 criteria=state.criteria,
444 )
445
446
447 class Resolver(AbstractResolver):
448 """The thing that performs the actual resolution work."""
449
450 base_exception = ResolverException
451
452 def resolve(self, requirements, max_rounds=100):
453 """Take a collection of constraints, spit out the resolution result.
454
455 The return value is a representation to the final resolution result. It
456 is a tuple subclass with three public members:
457
458 * `mapping`: A dict of resolved candidates. Each key is an identifier
459 of a requirement (as returned by the provider's `identify` method),
460 and the value is the resolved candidate.
461 * `graph`: A `DirectedGraph` instance representing the dependency tree.
462 The vertices are keys of `mapping`, and each edge represents *why*
463 a particular package is included. A special vertex `None` is
464 included to represent parents of user-supplied requirements.
465 * `criteria`: A dict of "criteria" that hold detailed information on
466 how edges in the graph are derived. Each key is an identifier of a
467 requirement, and the value is a `Criterion` instance.
468
469 The following exceptions may be raised if a resolution cannot be found:
470
471 * `ResolutionImpossible`: A resolution cannot be found for the given
472 combination of requirements. The `causes` attribute of the
473 exception is a list of (requirement, parent), giving the
474 requirements that could not be satisfied.
475 * `ResolutionTooDeep`: The dependency tree is too deeply nested and
476 the resolver gave up. This is usually caused by a circular
477 dependency, but you can try to resolve this by increasing the
478 `max_rounds` argument.
479 """
480 resolution = Resolution(self.provider, self.reporter)
481 state = resolution.resolve(requirements, max_rounds=max_rounds)
482 return _build_result(state)
483
[end of src/pip/_vendor/resolvelib/resolvers.py]
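For reference, a minimal toy provider (hypothetical, not pip's real provider) that drives the `Resolver.resolve` API described in the docstring above and shows what the returned `mapping` contains:

```python
from pip._vendor.resolvelib import BaseReporter, Resolver
from pip._vendor.resolvelib.providers import AbstractProvider

# Toy "index": each candidate name maps to the names it depends on.
DEPENDENCIES = {"app": ["lib"], "lib": []}

class ToyProvider(AbstractProvider):
    """Names double as requirements and candidates; each name has one candidate."""

    def identify(self, requirement_or_candidate):
        return requirement_or_candidate

    def get_preference(self, identifier, resolutions, candidates, information, backtrack_causes):
        return identifier  # no meaningful preference ordering needed here

    def find_matches(self, identifier, requirements, incompatibilities):
        if identifier in set(incompatibilities[identifier]):
            return []
        return [identifier]

    def is_satisfied_by(self, requirement, candidate):
        return requirement == candidate

    def get_dependencies(self, candidate):
        return DEPENDENCIES.get(candidate, [])

result = Resolver(ToyProvider(), BaseReporter()).resolve(["app"])
print(sorted(result.mapping))  # ['app', 'lib'] -- identifier -> pinned candidate
print(len(result.graph))       # 3 vertices: the None root sentinel, 'app', 'lib'
```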
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pypa/pip
|
9dec8a5e59f523de0e24cf7b058e5847d652cb6b
|
`--use-deprecated=html5lib` does not parse links, even though they're present
### Description
When using pip 22.0 with `--use-deprecated=html5lib` and JFrog as the package index, pip throws the error: `ERROR: No matching distribution found for requests`
Tested with the "requests" package on Windows 10 using pip 22.0 (fails) and pip 21.3.1 (works)
### Expected behavior
` --use-deprecated=html5lib` should allow JFrog indexes to work.
### pip version
22.0
### Python version
3.10
### OS
Windows
### How to Reproduce
Install package from JFrog index using pip 22.0
### Output
```sh-session
C:\>python -m pip install -vvv requests --use-deprecated=html5lib
Using pip 22.0 from <corporate_local_path>\lib\site-packages\pip (python 3.10)
Non-user install by explicit request
Created temporary directory: <corporate_user_path>\AppData\Local\Temp\pip-ephem-wheel-cache-4a5e6ucc
Created temporary directory: <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Initialized build tracking at <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Created build tracker: <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Entered build tracker: <corporate_user_path>\AppData\Local\Temp\pip-req-tracker-p0zhtye3
Created temporary directory: <corporate_user_path>\AppData\Local\Temp\pip-install-_cnfjhxu
Looking in indexes: http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple
1 location(s) to search for versions of requests:
* http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Fetching project page and analyzing links: http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Getting page http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Found index url http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple
Looking up http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/ in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTP connection (1): <corporate_domain>:80
http://<corporate_domain>:80 "GET /artifactory/api/pypi/pypi-release/simple/requests/ HTTP/1.1" 200 None
Updating cache with response from http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Skipping link: not a file: http://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/
Given no hashes to check 0 links for project 'requests': discarding no candidates
ERROR: Could not find a version that satisfies the requirement requests (from versions: none)
ERROR: No matching distribution found for requests
Exception information:
Traceback (most recent call last):
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
raise RequirementsConflicted(criterion)
pip._vendor.resolvelib.resolvers.RequirementsConflicted: Requirements conflict: SpecifierRequirement('requests')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<corporate_local_path>\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 94, in resolve
result = self._result = resolver.resolve(
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "<corporate_local_path>\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 350, in resolve
raise ResolutionImpossible(e.criterion.information)
pip._vendor.resolvelib.resolvers.ResolutionImpossible: [RequirementInformation(requirement=SpecifierRequirement('requests'), parent=None)]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<corporate_local_path>\lib\site-packages\pip\_internal\cli\base_command.py", line 165, in exc_logging_wrapper
status = run_func(*args)
File "<corporate_local_path>\lib\site-packages\pip\_internal\cli\req_command.py", line 205, in wrapper
return func(self, options, args)
File "<corporate_local_path>\lib\site-packages\pip\_internal\commands\install.py", line 339, in run
requirement_set = resolver.resolve(
File "<corporate_local_path>\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 103, in resolve
raise error from e
pip._internal.exceptions.DistributionNotFound: No matching distribution found for requests
Removed build tracker: '<corporate_user_path>\\AppData\\Local\\Temp\\pip-req-tracker-p0zhtye3'
```
### Code of Conduct
- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).
|
When I run the same command under Pip 21.3.1 I see it runs a GET on `https://<corporate_domain>/artifactory/api/pypi/pypi-release/simple/requests/ ` and is able to parse the following response: https://gist.github.com/notatallshaw/caef03cdb0592c13fab463a9fb5223a3 with many "Found link" results.
Can you share the raw HTML document returned by Artifactory? Feel free to redact the URLs as appropriate.
> Can you share the raw HTML document returned by Artifactory? Feel free to redact the URLs as appropriate.
Already done in previous comment: https://github.com/pypa/pip/issues/10845#issuecomment-1025166105
I'm able to reproduce this, with just pip's parsing logic:
```
from pathlib import Path
from pip._internal.index.collector import HTMLPage, parse_links
content = Path("/tmp/page.html").read_bytes()
page = HTMLPage(content, "utf-8", "https://private.domain.example.com/index")
try:
print("new", len(list(parse_links(page, use_deprecated_html5lib=True))))
except TypeError:
print("old", len(list(parse_links(page))))
```
21.3.1
```
❯ python /tmp/foo.py
old 208
```
22.0
```
❯ python /tmp/foo.py
new 0
```
|
2022-01-30T16:24:26Z
|
<patch>
diff --git a/src/pip/_internal/index/collector.py b/src/pip/_internal/index/collector.py
--- a/src/pip/_internal/index/collector.py
+++ b/src/pip/_internal/index/collector.py
@@ -343,7 +343,8 @@ def parse_links(page: "HTMLPage", use_deprecated_html5lib: bool) -> Iterable[Lin
Parse an HTML document, and yield its anchor elements as Link objects.
"""
if use_deprecated_html5lib:
- return _parse_links_html5lib(page)
+ yield from _parse_links_html5lib(page)
+ return
parser = HTMLLinkParser()
encoding = page.encoding or "utf-8"
</patch>
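The regression is a Python generator subtlety rather than anything html5lib-specific: because the rest of `parse_links` (not shown in the hunk) yields links, the whole function is a generator function, so a bare `return <iterable>` inside it silently discards the html5lib results instead of producing them. A standalone sketch (toy functions, not pip's code) of the difference the one-line change makes:

```python
def broken(use_alt):
    if use_alt:
        return [1, 2, 3]      # in a generator function this only stops iteration;
                              # the list rides along on StopIteration and is never seen
    yield from [4, 5, 6]

def fixed(use_alt):
    if use_alt:
        yield from [1, 2, 3]  # actually emit the items, then stop
        return
    yield from [4, 5, 6]

print(list(broken(True)))  # [] -- matching "0 links for project 'requests'" above
print(list(fixed(True)))   # [1, 2, 3]
```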
|
[]
|
[]
| |||
docker__compose-3449
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unexpected result when using build args with default values
# Scenario:
## Files
`Dockerfile`:
```
FROM ubuntu:14.04
ARG FOO=1
RUN echo "-${FOO}-"
CMD /bin/bash
```
`docker-compose.yml`:
```
version: '2'
services:
test:
build:
context: .
args:
- FOO
```
## Execution:
```
$ ./docker-compose-1.6.2 --verbose config
compose.config.config.find: Using configuration files: ./docker-compose.yml
networks: {}
services:
test:
build:
args:
FOO: None
context: /home/riccardi/git/ses-docker/test-default-build-arg
version: '2.0'
volumes: {}
```
```
$ ./docker-compose-1.6.2 --verbose build
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.auth.auth.load_config: File doesn't exist
compose.cli.command.get_client: docker-compose version 1.6.2, build 4d72027
docker-py version: 1.7.2
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
compose.cli.command.get_client: Docker base_url: http+docker://localunixsocket
compose.cli.command.get_client: Docker version: KernelVersion=4.2.0-35-generic, Os=linux, BuildTime=2016-03-10T15:54:52.312835708+00:00, ApiVersion=1.22, Version=1.10.3, GitCommit=20f81dd, Arch=amd64, GoVersion=go1.5.3
compose.service.build: Building test
compose.cli.verbose_proxy.proxy_callable: docker build <- (pull=False, stream=True, nocache=False, tag=u'testdefaultbuildarg_test', buildargs={u'FOO': 'None'}, rm=True, forcerm=False, path='/home/riccardi/git/ses-docker/test-default-build-arg', dockerfile=None)
docker.api.build._set_auth_headers: Looking for auth config
docker.api.build._set_auth_headers: No auth config in memory - loading from filesystem
docker.auth.auth.load_config: File doesn't exist
docker.api.build._set_auth_headers: No auth config found
compose.cli.verbose_proxy.proxy_callable: docker build -> <generator object _stream_helper at 0x7f56bafb3a50>
Step 1 : FROM ubuntu:14.04
---> b549a9959a66
Step 2 : ARG FOO=1
---> Using cache
---> 4774113d6ec5
Step 3 : RUN echo "-${FOO}-"
---> Running in dabd31837074
-None-
---> f8a99349af3b
Removing intermediate container dabd31837074
Step 4 : CMD /bin/bash
---> Running in 487f5e789c38
---> 6c484f426fb5
Removing intermediate container 487f5e789c38
Successfully built 6c484f426fb5
compose.cli.verbose_proxy.proxy_callable: docker close <- ()
compose.cli.verbose_proxy.proxy_callable: docker close -> None
```
( same result with 1.7.1-rc1 which includes PR #2938 )
# Issue
## Expected result:
prints `-1-`.
## Actual result:
prints `-None-`.
## Details:
Compose has no value for the `FOO` build arg in its environment, so it could either send an empty string to `docker build`, or better, not send this build arg to `docker build` at all.
The second option would be great: it would make it possible to use the build arg's default value as defined in the `Dockerfile`. (For now the workaround is to duplicate the default values from the `Dockerfile` into `.env`, which only works with >=1.7.0.)
The first option would still be better than the current behavior.
Current behavior: a value missing from Compose's environment is represented in Python as `None`, which is then cast to the string `"None"`; that is almost never what you want.
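A minimal standalone sketch of the casting step and the two alternatives (illustrative only; this mimics the stringification, not Compose's actual code):

```
# Illustrative only: FOO is declared under build args but unset in the environment.
build_args = {"FOO": None}

# What happens today: None is stringified.
print({k: str(v) for k, v in build_args.items()})                           # {'FOO': 'None'}

# Option 1: send an empty string instead.
print({k: str(v) if v is not None else "" for k, v in build_args.items()})  # {'FOO': ''}

# Option 2 (preferred): drop unset args so the Dockerfile default applies.
print({k: str(v) for k, v in build_args.items() if v is not None})          # {}
```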
</issue>
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 
4
5 Compose is a tool for defining and running multi-container Docker applications.
6 With Compose, you use a Compose file to configure your application's services.
7 Then, using a single command, you create and start all the services
8 from your configuration. To learn more about all the features of Compose
9 see [the list of features](https://github.com/docker/compose/blob/release/docs/overview.md#features).
10
11 Compose is great for development, testing, and staging environments, as well as
12 CI workflows. You can learn more about each case in
13 [Common Use Cases](https://github.com/docker/compose/blob/release/docs/overview.md#common-use-cases).
14
15 Using Compose is basically a three-step process.
16
17 1. Define your app's environment with a `Dockerfile` so it can be
18 reproduced anywhere.
19 2. Define the services that make up your app in `docker-compose.yml` so
20 they can be run together in an isolated environment:
21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
22
23 A `docker-compose.yml` looks like this:
24
25 version: '2'
26
27 services:
28 web:
29 build: .
30 ports:
31 - "5000:5000"
32 volumes:
33 - .:/code
34 redis:
35 image: redis
36
37 For more information about the Compose file, see the
38 [Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md)
39
40 Compose has commands for managing the whole lifecycle of your application:
41
42 * Start, stop and rebuild services
43 * View the status of running services
44 * Stream the log output of running services
45 * Run a one-off command on a service
46
47 Installation and documentation
48 ------------------------------
49
50 - Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
51 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
52 - Code repository for Compose is on [Github](https://github.com/docker/compose)
53 - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new)
54
55 Contributing
56 ------------
57
58 [](http://jenkins.dockerproject.org/job/Compose%20Master/)
59
60 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
61
62 Releasing
63 ---------
64
65 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md).
66
[end of README.md]
[start of compose/cli/main.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import contextlib
6 import functools
7 import json
8 import logging
9 import re
10 import sys
11 from inspect import getdoc
12 from operator import attrgetter
13
14 from . import errors
15 from . import signals
16 from .. import __version__
17 from ..config import config
18 from ..config import ConfigurationError
19 from ..config import parse_environment
20 from ..config.environment import Environment
21 from ..config.serialize import serialize_config
22 from ..const import DEFAULT_TIMEOUT
23 from ..const import IS_WINDOWS_PLATFORM
24 from ..progress_stream import StreamOutputError
25 from ..project import NoSuchService
26 from ..project import OneOffFilter
27 from ..project import ProjectError
28 from ..service import BuildAction
29 from ..service import BuildError
30 from ..service import ConvergenceStrategy
31 from ..service import ImageType
32 from ..service import NeedsBuildError
33 from .command import get_config_path_from_options
34 from .command import project_from_options
35 from .docopt_command import DocoptDispatcher
36 from .docopt_command import get_handler
37 from .docopt_command import NoSuchCommand
38 from .errors import UserError
39 from .formatter import ConsoleWarningFormatter
40 from .formatter import Formatter
41 from .log_printer import build_log_presenters
42 from .log_printer import LogPrinter
43 from .utils import get_version_info
44 from .utils import yesno
45
46
47 if not IS_WINDOWS_PLATFORM:
48 from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation
49
50 log = logging.getLogger(__name__)
51 console_handler = logging.StreamHandler(sys.stderr)
52
53
54 def main():
55 command = dispatch()
56
57 try:
58 command()
59 except (KeyboardInterrupt, signals.ShutdownException):
60 log.error("Aborting.")
61 sys.exit(1)
62 except (UserError, NoSuchService, ConfigurationError, ProjectError) as e:
63 log.error(e.msg)
64 sys.exit(1)
65 except BuildError as e:
66 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
67 sys.exit(1)
68 except StreamOutputError as e:
69 log.error(e)
70 sys.exit(1)
71 except NeedsBuildError as e:
72 log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name)
73 sys.exit(1)
74 except errors.ConnectionError:
75 sys.exit(1)
76
77
78 def dispatch():
79 setup_logging()
80 dispatcher = DocoptDispatcher(
81 TopLevelCommand,
82 {'options_first': True, 'version': get_version_info('compose')})
83
84 try:
85 options, handler, command_options = dispatcher.parse(sys.argv[1:])
86 except NoSuchCommand as e:
87 commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
88 log.error("No such command: %s\n\n%s", e.command, commands)
89 sys.exit(1)
90
91 setup_console_handler(console_handler, options.get('--verbose'))
92 return functools.partial(perform_command, options, handler, command_options)
93
94
95 def perform_command(options, handler, command_options):
96 if options['COMMAND'] in ('help', 'version'):
97 # Skip looking up the compose file.
98 handler(command_options)
99 return
100
101 if options['COMMAND'] == 'config':
102 command = TopLevelCommand(None)
103 handler(command, options, command_options)
104 return
105
106 project = project_from_options('.', options)
107 command = TopLevelCommand(project)
108 with errors.handle_connection_errors(project.client):
109 handler(command, command_options)
110
111
112 def setup_logging():
113 root_logger = logging.getLogger()
114 root_logger.addHandler(console_handler)
115 root_logger.setLevel(logging.DEBUG)
116
117 # Disable requests logging
118 logging.getLogger("requests").propagate = False
119
120
121 def setup_console_handler(handler, verbose):
122 if handler.stream.isatty():
123 format_class = ConsoleWarningFormatter
124 else:
125 format_class = logging.Formatter
126
127 if verbose:
128 handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s'))
129 handler.setLevel(logging.DEBUG)
130 else:
131 handler.setFormatter(format_class())
132 handler.setLevel(logging.INFO)
133
134
135 # stolen from docopt master
136 def parse_doc_section(name, source):
137 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
138 re.IGNORECASE | re.MULTILINE)
139 return [s.strip() for s in pattern.findall(source)]
140
141
142 class TopLevelCommand(object):
143 """Define and run multi-container applications with Docker.
144
145 Usage:
146 docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
147 docker-compose -h|--help
148
149 Options:
150 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
151 -p, --project-name NAME Specify an alternate project name (default: directory name)
152 --verbose Show more output
153 -v, --version Print version and exit
154 -H, --host HOST Daemon socket to connect to
155
156 --tls Use TLS; implied by --tlsverify
157 --tlscacert CA_PATH Trust certs signed only by this CA
158 --tlscert CLIENT_CERT_PATH Path to TLS certificate file
159 --tlskey TLS_KEY_PATH Path to TLS key file
160 --tlsverify Use TLS and verify the remote
161 --skip-hostname-check Don't check the daemon's hostname against the name specified
162 in the client certificate (for example if your docker host
163 is an IP address)
164
165 Commands:
166 build Build or rebuild services
167 config Validate and view the compose file
168 create Create services
169 down Stop and remove containers, networks, images, and volumes
170 events Receive real time events from containers
171 exec Execute a command in a running container
172 help Get help on a command
173 kill Kill containers
174 logs View output from containers
175 pause Pause services
176 port Print the public port for a port binding
177 ps List containers
178 pull Pulls service images
179 restart Restart services
180 rm Remove stopped containers
181 run Run a one-off command
182 scale Set number of containers for a service
183 start Start services
184 stop Stop services
185 unpause Unpause services
186 up Create and start containers
187 version Show the Docker-Compose version information
188 """
189
190 def __init__(self, project, project_dir='.'):
191 self.project = project
192 self.project_dir = '.'
193
194 def build(self, options):
195 """
196 Build or rebuild services.
197
198 Services are built once and then tagged as `project_service`,
199 e.g. `composetest_db`. If you change a service's `Dockerfile` or the
200 contents of its build directory, you can run `docker-compose build` to rebuild it.
201
202 Usage: build [options] [SERVICE...]
203
204 Options:
205 --force-rm Always remove intermediate containers.
206 --no-cache Do not use cache when building the image.
207 --pull Always attempt to pull a newer version of the image.
208 """
209 self.project.build(
210 service_names=options['SERVICE'],
211 no_cache=bool(options.get('--no-cache', False)),
212 pull=bool(options.get('--pull', False)),
213 force_rm=bool(options.get('--force-rm', False)))
214
215 def config(self, config_options, options):
216 """
217 Validate and view the compose file.
218
219 Usage: config [options]
220
221 Options:
222 -q, --quiet Only validate the configuration, don't print
223 anything.
224 --services Print the service names, one per line.
225
226 """
227 environment = Environment.from_env_file(self.project_dir)
228 config_path = get_config_path_from_options(
229 self.project_dir, config_options, environment
230 )
231 compose_config = config.load(
232 config.find(self.project_dir, config_path, environment)
233 )
234
235 if options['--quiet']:
236 return
237
238 if options['--services']:
239 print('\n'.join(service['name'] for service in compose_config.services))
240 return
241
242 print(serialize_config(compose_config))
243
244 def create(self, options):
245 """
246 Creates containers for a service.
247
248 Usage: create [options] [SERVICE...]
249
250 Options:
251 --force-recreate Recreate containers even if their configuration and
252 image haven't changed. Incompatible with --no-recreate.
253 --no-recreate If containers already exist, don't recreate them.
254 Incompatible with --force-recreate.
255 --no-build Don't build an image, even if it's missing.
256 --build Build images before creating containers.
257 """
258 service_names = options['SERVICE']
259
260 self.project.create(
261 service_names=service_names,
262 strategy=convergence_strategy_from_opts(options),
263 do_build=build_action_from_opts(options),
264 )
265
266 def down(self, options):
267 """
268 Stops containers and removes containers, networks, volumes, and images
269 created by `up`.
270
271 By default, the only things removed are:
272
273 - Containers for services defined in the Compose file
274 - Networks defined in the `networks` section of the Compose file
275 - The default network, if one is used
276
277 Networks and volumes defined as `external` are never removed.
278
279 Usage: down [options]
280
281 Options:
282 --rmi type Remove images. Type must be one of:
283 'all': Remove all images used by any service.
284 'local': Remove only images that don't have a custom tag
285 set by the `image` field.
286 -v, --volumes Remove named volumes declared in the `volumes` section
287 of the Compose file and anonymous volumes
288 attached to containers.
289 --remove-orphans Remove containers for services not defined in the
290 Compose file
291 """
292 image_type = image_type_from_opt('--rmi', options['--rmi'])
293 self.project.down(image_type, options['--volumes'], options['--remove-orphans'])
294
295 def events(self, options):
296 """
297 Receive real time events from containers.
298
299 Usage: events [options] [SERVICE...]
300
301 Options:
302 --json Output events as a stream of json objects
303 """
304 def format_event(event):
305 attributes = ["%s=%s" % item for item in event['attributes'].items()]
306 return ("{time} {type} {action} {id} ({attrs})").format(
307 attrs=", ".join(sorted(attributes)),
308 **event)
309
310 def json_format_event(event):
311 event['time'] = event['time'].isoformat()
312 event.pop('container')
313 return json.dumps(event)
314
315 for event in self.project.events():
316 formatter = json_format_event if options['--json'] else format_event
317 print(formatter(event))
318 sys.stdout.flush()
319
320 def exec_command(self, options):
321 """
322 Execute a command in a running container
323
324 Usage: exec [options] SERVICE COMMAND [ARGS...]
325
326 Options:
327 -d Detached mode: Run command in the background.
328 --privileged Give extended privileges to the process.
329 --user USER Run the command as this user.
330 -T Disable pseudo-tty allocation. By default `docker-compose exec`
331 allocates a TTY.
332 --index=index index of the container if there are multiple
333 instances of a service [default: 1]
334 """
335 index = int(options.get('--index'))
336 service = self.project.get_service(options['SERVICE'])
337 detach = options['-d']
338
339 if IS_WINDOWS_PLATFORM and not detach:
340 raise UserError(
341 "Interactive mode is not yet supported on Windows.\n"
342 "Please pass the -d flag when using `docker-compose exec`."
343 )
344 try:
345 container = service.get_container(number=index)
346 except ValueError as e:
347 raise UserError(str(e))
348 command = [options['COMMAND']] + options['ARGS']
349 tty = not options["-T"]
350
351 create_exec_options = {
352 "privileged": options["--privileged"],
353 "user": options["--user"],
354 "tty": tty,
355 "stdin": tty,
356 }
357
358 exec_id = container.create_exec(command, **create_exec_options)
359
360 if detach:
361 container.start_exec(exec_id, tty=tty)
362 return
363
364 signals.set_signal_handler_to_shutdown()
365 try:
366 operation = ExecOperation(
367 self.project.client,
368 exec_id,
369 interactive=tty,
370 )
371 pty = PseudoTerminal(self.project.client, operation)
372 pty.start()
373 except signals.ShutdownException:
374 log.info("received shutdown exception: closing")
375 exit_code = self.project.client.exec_inspect(exec_id).get("ExitCode")
376 sys.exit(exit_code)
377
378 @classmethod
379 def help(cls, options):
380 """
381 Get help on a command.
382
383 Usage: help [COMMAND]
384 """
385 if options['COMMAND']:
386 subject = get_handler(cls, options['COMMAND'])
387 else:
388 subject = cls
389
390 print(getdoc(subject))
391
392 def kill(self, options):
393 """
394 Force stop service containers.
395
396 Usage: kill [options] [SERVICE...]
397
398 Options:
399 -s SIGNAL SIGNAL to send to the container.
400 Default signal is SIGKILL.
401 """
402 signal = options.get('-s', 'SIGKILL')
403
404 self.project.kill(service_names=options['SERVICE'], signal=signal)
405
406 def logs(self, options):
407 """
408 View output from containers.
409
410 Usage: logs [options] [SERVICE...]
411
412 Options:
413 --no-color Produce monochrome output.
414 -f, --follow Follow log output.
415 -t, --timestamps Show timestamps.
416 --tail="all" Number of lines to show from the end of the logs
417 for each container.
418 """
419 containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
420
421 tail = options['--tail']
422 if tail is not None:
423 if tail.isdigit():
424 tail = int(tail)
425 elif tail != 'all':
426 raise UserError("tail flag must be all or a number")
427 log_args = {
428 'follow': options['--follow'],
429 'tail': tail,
430 'timestamps': options['--timestamps']
431 }
432 print("Attaching to", list_containers(containers))
433 log_printer_from_project(
434 self.project,
435 containers,
436 options['--no-color'],
437 log_args,
438 event_stream=self.project.events(service_names=options['SERVICE'])).run()
439
440 def pause(self, options):
441 """
442 Pause services.
443
444 Usage: pause [SERVICE...]
445 """
446 containers = self.project.pause(service_names=options['SERVICE'])
447 exit_if(not containers, 'No containers to pause', 1)
448
449 def port(self, options):
450 """
451 Print the public port for a port binding.
452
453 Usage: port [options] SERVICE PRIVATE_PORT
454
455 Options:
456 --protocol=proto tcp or udp [default: tcp]
457 --index=index index of the container if there are multiple
458 instances of a service [default: 1]
459 """
460 index = int(options.get('--index'))
461 service = self.project.get_service(options['SERVICE'])
462 try:
463 container = service.get_container(number=index)
464 except ValueError as e:
465 raise UserError(str(e))
466 print(container.get_local_port(
467 options['PRIVATE_PORT'],
468 protocol=options.get('--protocol') or 'tcp') or '')
469
470 def ps(self, options):
471 """
472 List containers.
473
474 Usage: ps [options] [SERVICE...]
475
476 Options:
477 -q Only display IDs
478 """
479 containers = sorted(
480 self.project.containers(service_names=options['SERVICE'], stopped=True) +
481 self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
482 key=attrgetter('name'))
483
484 if options['-q']:
485 for container in containers:
486 print(container.id)
487 else:
488 headers = [
489 'Name',
490 'Command',
491 'State',
492 'Ports',
493 ]
494 rows = []
495 for container in containers:
496 command = container.human_readable_command
497 if len(command) > 30:
498 command = '%s ...' % command[:26]
499 rows.append([
500 container.name,
501 command,
502 container.human_readable_state,
503 container.human_readable_ports,
504 ])
505 print(Formatter().table(headers, rows))
506
507 def pull(self, options):
508 """
509 Pulls images for services.
510
511 Usage: pull [options] [SERVICE...]
512
513 Options:
514 --ignore-pull-failures Pull what it can and ignores images with pull failures.
515 """
516 self.project.pull(
517 service_names=options['SERVICE'],
518 ignore_pull_failures=options.get('--ignore-pull-failures')
519 )
520
521 def rm(self, options):
522 """
523 Removes stopped service containers.
524
525 By default, anonymous volumes attached to containers will not be removed. You
526 can override this with `-v`. To list all volumes, use `docker volume ls`.
527
528 Any data which is not in a volume will be lost.
529
530 Usage: rm [options] [SERVICE...]
531
532 Options:
533 -f, --force Don't ask to confirm removal
534 -v Remove any anonymous volumes attached to containers
535 -a, --all Obsolete. Also remove one-off containers created by
536 docker-compose run
537 """
538 if options.get('--all'):
539 log.warn(
540 '--all flag is obsolete. This is now the default behavior '
541 'of `docker-compose rm`'
542 )
543 one_off = OneOffFilter.include
544
545 all_containers = self.project.containers(
546 service_names=options['SERVICE'], stopped=True, one_off=one_off
547 )
548 stopped_containers = [c for c in all_containers if not c.is_running]
549
550 if len(stopped_containers) > 0:
551 print("Going to remove", list_containers(stopped_containers))
552 if options.get('--force') \
553 or yesno("Are you sure? [yN] ", default=False):
554 self.project.remove_stopped(
555 service_names=options['SERVICE'],
556 v=options.get('-v', False),
557 one_off=one_off
558 )
559 else:
560 print("No stopped containers")
561
562 def run(self, options):
563 """
564 Run a one-off command on a service.
565
566 For example:
567
568 $ docker-compose run web python manage.py shell
569
570 By default, linked services will be started, unless they are already
571 running. If you do not want to start linked services, use
572 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
573
574 Usage: run [options] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
575
576 Options:
577 -d Detached mode: Run container in the background, print
578 new container name.
579 --name NAME Assign a name to the container
580 --entrypoint CMD Override the entrypoint of the image.
581 -e KEY=VAL Set an environment variable (can be used multiple times)
582 -u, --user="" Run as specified username or uid
583 --no-deps Don't start linked services.
584 --rm Remove container after run. Ignored in detached mode.
585 -p, --publish=[] Publish a container's port(s) to the host
586 --service-ports Run command with the service's ports enabled and mapped
587 to the host.
588 -T Disable pseudo-tty allocation. By default `docker-compose run`
589 allocates a TTY.
590 -w, --workdir="" Working directory inside the container
591 """
592 service = self.project.get_service(options['SERVICE'])
593 detach = options['-d']
594
595 if IS_WINDOWS_PLATFORM and not detach:
596 raise UserError(
597 "Interactive mode is not yet supported on Windows.\n"
598 "Please pass the -d flag when using `docker-compose run`."
599 )
600
601 if options['--publish'] and options['--service-ports']:
602 raise UserError(
603 'Service port mapping and manual port mapping '
604                 'can not be used together'
605 )
606
607 if options['COMMAND']:
608 command = [options['COMMAND']] + options['ARGS']
609 else:
610 command = service.options.get('command')
611
612 container_options = build_container_options(options, detach, command)
613 run_one_off_container(container_options, self.project, service, options)
614
615 def scale(self, options):
616 """
617 Set number of containers to run for a service.
618
619 Numbers are specified in the form `service=num` as arguments.
620 For example:
621
622 $ docker-compose scale web=2 worker=3
623
624 Usage: scale [options] [SERVICE=NUM...]
625
626 Options:
627 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
628 (default: 10)
629 """
630 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
631
632 for s in options['SERVICE=NUM']:
633 if '=' not in s:
634 raise UserError('Arguments to scale should be in the form service=num')
635 service_name, num = s.split('=', 1)
636 try:
637 num = int(num)
638 except ValueError:
639 raise UserError('Number of containers for service "%s" is not a '
640 'number' % service_name)
641 self.project.get_service(service_name).scale(num, timeout=timeout)
642
643 def start(self, options):
644 """
645 Start existing containers.
646
647 Usage: start [SERVICE...]
648 """
649 containers = self.project.start(service_names=options['SERVICE'])
650 exit_if(not containers, 'No containers to start', 1)
651
652 def stop(self, options):
653 """
654 Stop running containers without removing them.
655
656 They can be started again with `docker-compose start`.
657
658 Usage: stop [options] [SERVICE...]
659
660 Options:
661 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
662 (default: 10)
663 """
664 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
665 self.project.stop(service_names=options['SERVICE'], timeout=timeout)
666
667 def restart(self, options):
668 """
669 Restart running containers.
670
671 Usage: restart [options] [SERVICE...]
672
673 Options:
674 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
675 (default: 10)
676 """
677 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
678 containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout)
679 exit_if(not containers, 'No containers to restart', 1)
680
681 def unpause(self, options):
682 """
683 Unpause services.
684
685 Usage: unpause [SERVICE...]
686 """
687 containers = self.project.unpause(service_names=options['SERVICE'])
688 exit_if(not containers, 'No containers to unpause', 1)
689
690 def up(self, options):
691 """
692 Builds, (re)creates, starts, and attaches to containers for a service.
693
694 Unless they are already running, this command also starts any linked services.
695
696 The `docker-compose up` command aggregates the output of each container. When
697 the command exits, all containers are stopped. Running `docker-compose up -d`
698 starts the containers in the background and leaves them running.
699
700 If there are existing containers for a service, and the service's configuration
701 or image was changed after the container's creation, `docker-compose up` picks
702 up the changes by stopping and recreating the containers (preserving mounted
703 volumes). To prevent Compose from picking up changes, use the `--no-recreate`
704 flag.
705
706 If you want to force Compose to stop and recreate all containers, use the
707 `--force-recreate` flag.
708
709 Usage: up [options] [SERVICE...]
710
711 Options:
712 -d Detached mode: Run containers in the background,
713 print new container names.
714 Incompatible with --abort-on-container-exit.
715 --no-color Produce monochrome output.
716 --no-deps Don't start linked services.
717 --force-recreate Recreate containers even if their configuration
718 and image haven't changed.
719 Incompatible with --no-recreate.
720 --no-recreate If containers already exist, don't recreate them.
721 Incompatible with --force-recreate.
722 --no-build Don't build an image, even if it's missing.
723 --build Build images before starting containers.
724 --abort-on-container-exit Stops all containers if any container was stopped.
725 Incompatible with -d.
726 -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown
727 when attached or when containers are already
728 running. (default: 10)
729 --remove-orphans Remove containers for services not
730 defined in the Compose file
731 """
732 start_deps = not options['--no-deps']
733 cascade_stop = options['--abort-on-container-exit']
734 service_names = options['SERVICE']
735 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
736 remove_orphans = options['--remove-orphans']
737 detached = options.get('-d')
738
739 if detached and cascade_stop:
740 raise UserError("--abort-on-container-exit and -d cannot be combined.")
741
742 with up_shutdown_context(self.project, service_names, timeout, detached):
743 to_attach = self.project.up(
744 service_names=service_names,
745 start_deps=start_deps,
746 strategy=convergence_strategy_from_opts(options),
747 do_build=build_action_from_opts(options),
748 timeout=timeout,
749 detached=detached,
750 remove_orphans=remove_orphans)
751
752 if detached:
753 return
754
755 log_printer = log_printer_from_project(
756 self.project,
757 filter_containers_to_service_names(to_attach, service_names),
758 options['--no-color'],
759 {'follow': True},
760 cascade_stop,
761 event_stream=self.project.events(service_names=service_names))
762 print("Attaching to", list_containers(log_printer.containers))
763 log_printer.run()
764
765 if cascade_stop:
766 print("Aborting on container exit...")
767 self.project.stop(service_names=service_names, timeout=timeout)
768
769 @classmethod
770 def version(cls, options):
771 """
772         Show version information
773
774 Usage: version [--short]
775
776 Options:
777 --short Shows only Compose's version number.
778 """
779 if options['--short']:
780 print(__version__)
781 else:
782 print(get_version_info('full'))
783
784
785 def convergence_strategy_from_opts(options):
786 no_recreate = options['--no-recreate']
787 force_recreate = options['--force-recreate']
788 if force_recreate and no_recreate:
789 raise UserError("--force-recreate and --no-recreate cannot be combined.")
790
791 if force_recreate:
792 return ConvergenceStrategy.always
793
794 if no_recreate:
795 return ConvergenceStrategy.never
796
797 return ConvergenceStrategy.changed
798
799
800 def image_type_from_opt(flag, value):
801 if not value:
802 return ImageType.none
803 try:
804 return ImageType[value]
805 except KeyError:
806 raise UserError("%s flag must be one of: all, local" % flag)
807
808
809 def build_action_from_opts(options):
810 if options['--build'] and options['--no-build']:
811 raise UserError("--build and --no-build can not be combined.")
812
813 if options['--build']:
814 return BuildAction.force
815
816 if options['--no-build']:
817 return BuildAction.skip
818
819 return BuildAction.none
820
821
822 def build_container_options(options, detach, command):
823 container_options = {
824 'command': command,
825 'tty': not (detach or options['-T'] or not sys.stdin.isatty()),
826 'stdin_open': not detach,
827 'detach': detach,
828 }
829
830 if options['-e']:
831 container_options['environment'] = parse_environment(options['-e'])
832
833 if options['--entrypoint']:
834 container_options['entrypoint'] = options.get('--entrypoint')
835
836 if options['--rm']:
837 container_options['restart'] = None
838
839 if options['--user']:
840 container_options['user'] = options.get('--user')
841
842 if not options['--service-ports']:
843 container_options['ports'] = []
844
845 if options['--publish']:
846 container_options['ports'] = options.get('--publish')
847
848 if options['--name']:
849 container_options['name'] = options['--name']
850
851 if options['--workdir']:
852 container_options['working_dir'] = options['--workdir']
853
854 return container_options
855
856
857 def run_one_off_container(container_options, project, service, options):
858 if not options['--no-deps']:
859 deps = service.get_dependency_names()
860 if deps:
861 project.up(
862 service_names=deps,
863 start_deps=True,
864 strategy=ConvergenceStrategy.never)
865
866 project.initialize()
867
868 container = service.create_container(
869 quiet=True,
870 one_off=True,
871 **container_options)
872
873 if options['-d']:
874 service.start_container(container)
875 print(container.name)
876 return
877
878 def remove_container(force=False):
879 if options['--rm']:
880 project.client.remove_container(container.id, force=True)
881
882 signals.set_signal_handler_to_shutdown()
883 try:
884 try:
885 operation = RunOperation(
886 project.client,
887 container.id,
888 interactive=not options['-T'],
889 logs=False,
890 )
891 pty = PseudoTerminal(project.client, operation)
892 sockets = pty.sockets()
893 service.start_container(container)
894 pty.start(sockets)
895 exit_code = container.wait()
896 except signals.ShutdownException:
897 project.client.stop(container.id)
898 exit_code = 1
899 except signals.ShutdownException:
900 project.client.kill(container.id)
901 remove_container(force=True)
902 sys.exit(2)
903
904 remove_container()
905 sys.exit(exit_code)
906
907
908 def log_printer_from_project(
909 project,
910 containers,
911 monochrome,
912 log_args,
913 cascade_stop=False,
914 event_stream=None,
915 ):
916 return LogPrinter(
917 containers,
918 build_log_presenters(project.service_names, monochrome),
919 event_stream or project.events(),
920 cascade_stop=cascade_stop,
921 log_args=log_args)
922
923
924 def filter_containers_to_service_names(containers, service_names):
925 if not service_names:
926 return containers
927
928 return [
929 container
930 for container in containers if container.service in service_names
931 ]
932
933
934 @contextlib.contextmanager
935 def up_shutdown_context(project, service_names, timeout, detached):
936 if detached:
937 yield
938 return
939
940 signals.set_signal_handler_to_shutdown()
941 try:
942 try:
943 yield
944 except signals.ShutdownException:
945 print("Gracefully stopping... (press Ctrl+C again to force)")
946 project.stop(service_names=service_names, timeout=timeout)
947 except signals.ShutdownException:
948 project.kill(service_names=service_names)
949 sys.exit(2)
950
951
952 def list_containers(containers):
953 return ", ".join(c.name for c in containers)
954
955
956 def exit_if(condition, message, exit_code):
957 if condition:
958 log.error(message)
959 raise SystemExit(exit_code)
960
[end of compose/cli/main.py]
[start of compose/cli/utils.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import unicode_literals
4
5 import os
6 import platform
7 import ssl
8 import subprocess
9 import sys
10
11 import docker
12
13 import compose
14
15 # WindowsError is not defined on non-win32 platforms. Avoid runtime errors by
16 # defining it as OSError (its parent class) if missing.
17 try:
18 WindowsError
19 except NameError:
20 WindowsError = OSError
21
22
23 def yesno(prompt, default=None):
24 """
25 Prompt the user for a yes or no.
26
27 Can optionally specify a default value, which will only be
28 used if they enter a blank line.
29
30 Unrecognised input (anything other than "y", "n", "yes",
31 "no" or "") will return None.
32 """
33 answer = input(prompt).strip().lower()
34
35 if answer == "y" or answer == "yes":
36 return True
37 elif answer == "n" or answer == "no":
38 return False
39 elif answer == "":
40 return default
41 else:
42 return None
43
44
45 def input(prompt):
46 """
47 Version of input (raw_input in Python 2) which forces a flush of sys.stdout
48 to avoid problems where the prompt fails to appear due to line buffering
49 """
50 sys.stdout.write(prompt)
51 sys.stdout.flush()
52 return sys.stdin.readline().rstrip(b'\n')
53
54
55 def call_silently(*args, **kwargs):
56 """
57 Like subprocess.call(), but redirects stdout and stderr to /dev/null.
58 """
59 with open(os.devnull, 'w') as shutup:
60 try:
61 return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs)
62 except WindowsError:
63 # On Windows, subprocess.call() can still raise exceptions. Normalize
64 # to POSIXy behaviour by returning a nonzero exit code.
65 return 1
66
67
68 def is_mac():
69 return platform.system() == 'Darwin'
70
71
72 def is_ubuntu():
73 return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu'
74
75
76 def get_version_info(scope):
77 versioninfo = 'docker-compose version {}, build {}'.format(
78 compose.__version__,
79 get_build_version())
80
81 if scope == 'compose':
82 return versioninfo
83 if scope == 'full':
84 return (
85 "{}\n"
86 "docker-py version: {}\n"
87 "{} version: {}\n"
88 "OpenSSL version: {}"
89 ).format(
90 versioninfo,
91 docker.version,
92 platform.python_implementation(),
93 platform.python_version(),
94 ssl.OPENSSL_VERSION)
95
96 raise ValueError("{} is not a valid version scope".format(scope))
97
98
99 def get_build_version():
100 filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA')
101 if not os.path.exists(filename):
102 return 'unknown'
103
104 with open(filename) as fh:
105 return fh.read().strip()
106
[end of compose/cli/utils.py]
[start of contrib/migration/migrate-compose-file-v1-to-v2.py]
1 #!/usr/bin/env python
2 """
3 Migrate a Compose file from the V1 format in Compose 1.5 to the V2 format
4 supported by Compose 1.6+
5 """
6 from __future__ import absolute_import
7 from __future__ import unicode_literals
8
9 import argparse
10 import logging
11 import sys
12
13 import ruamel.yaml
14
15 from compose.config.types import VolumeSpec
16
17
18 log = logging.getLogger('migrate')
19
20
21 def migrate(content):
22 data = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)
23
24 service_names = data.keys()
25
26 for name, service in data.items():
27 warn_for_links(name, service)
28 warn_for_external_links(name, service)
29 rewrite_net(service, service_names)
30 rewrite_build(service)
31 rewrite_logging(service)
32 rewrite_volumes_from(service, service_names)
33
34 services = {name: data.pop(name) for name in data.keys()}
35
36 data['version'] = "2"
37 data['services'] = services
38 create_volumes_section(data)
39
40 return data
41
42
43 def warn_for_links(name, service):
44 links = service.get('links')
45 if links:
46 example_service = links[0].partition(':')[0]
47 log.warn(
48 "Service {name} has links, which no longer create environment "
49 "variables such as {example_service_upper}_PORT. "
50 "If you are using those in your application code, you should "
51 "instead connect directly to the hostname, e.g. "
52 "'{example_service}'."
53 .format(name=name, example_service=example_service,
54 example_service_upper=example_service.upper()))
55
56
57 def warn_for_external_links(name, service):
58 external_links = service.get('external_links')
59 if external_links:
60 log.warn(
61 "Service {name} has external_links: {ext}, which now work "
62 "slightly differently. In particular, two containers must be "
63 "connected to at least one network in common in order to "
64 "communicate, even if explicitly linked together.\n\n"
65 "Either connect the external container to your app's default "
66 "network, or connect both the external container and your "
67 "service's containers to a pre-existing network. See "
68 "https://docs.docker.com/compose/networking/ "
69 "for more on how to do this."
70 .format(name=name, ext=external_links))
71
72
73 def rewrite_net(service, service_names):
74 if 'net' in service:
75 network_mode = service.pop('net')
76
77 # "container:<service name>" is now "service:<service name>"
78 if network_mode.startswith('container:'):
79 name = network_mode.partition(':')[2]
80 if name in service_names:
81 network_mode = 'service:{}'.format(name)
82
83 service['network_mode'] = network_mode
84
85
86 def rewrite_build(service):
87 if 'dockerfile' in service:
88 service['build'] = {
89 'context': service.pop('build'),
90 'dockerfile': service.pop('dockerfile'),
91 }
92
93
94 def rewrite_logging(service):
95 if 'log_driver' in service:
96 service['logging'] = {'driver': service.pop('log_driver')}
97 if 'log_opt' in service:
98 service['logging']['options'] = service.pop('log_opt')
99
100
101 def rewrite_volumes_from(service, service_names):
102 for idx, volume_from in enumerate(service.get('volumes_from', [])):
103 if volume_from.split(':', 1)[0] not in service_names:
104 service['volumes_from'][idx] = 'container:%s' % volume_from
105
106
107 def create_volumes_section(data):
108 named_volumes = get_named_volumes(data['services'])
109 if named_volumes:
110 log.warn(
111 "Named volumes ({names}) must be explicitly declared. Creating a "
112 "'volumes' section with declarations.\n\n"
113 "For backwards-compatibility, they've been declared as external. "
114 "If you don't mind the volume names being prefixed with the "
115 "project name, you can remove the 'external' option from each one."
116 .format(names=', '.join(list(named_volumes))))
117
118 data['volumes'] = named_volumes
119
120
121 def get_named_volumes(services):
122 volume_specs = [
123 VolumeSpec.parse(volume)
124 for service in services.values()
125 for volume in service.get('volumes', [])
126 ]
127 names = {
128 spec.external
129 for spec in volume_specs
130 if spec.is_named_volume
131 }
132 return {name: {'external': True} for name in names}
133
134
135 def write(stream, new_format, indent, width):
136 ruamel.yaml.dump(
137 new_format,
138 stream,
139 Dumper=ruamel.yaml.RoundTripDumper,
140 indent=indent,
141 width=width)
142
143
144 def parse_opts(args):
145 parser = argparse.ArgumentParser()
146 parser.add_argument("filename", help="Compose file filename.")
147 parser.add_argument("-i", "--in-place", action='store_true')
148 parser.add_argument(
149 "--indent", type=int, default=2,
150 help="Number of spaces used to indent the output yaml.")
151 parser.add_argument(
152 "--width", type=int, default=80,
153 help="Number of spaces used as the output width.")
154 return parser.parse_args()
155
156
157 def main(args):
158 logging.basicConfig(format='\033[33m%(levelname)s:\033[37m %(message)s\033[0m\n')
159
160 opts = parse_opts(args)
161
162 with open(opts.filename, 'r') as fh:
163 new_format = migrate(fh.read())
164
165 if opts.in_place:
166 output = open(opts.filename, 'w')
167 else:
168 output = sys.stdout
169 write(output, new_format, opts.indent, opts.width)
170
171
172 if __name__ == "__main__":
173 main(sys.argv)
174
[end of contrib/migration/migrate-compose-file-v1-to-v2.py]
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import unicode_literals
5
6 import codecs
7 import os
8 import re
9 import sys
10
11 from setuptools import find_packages
12 from setuptools import setup
13
14
15 def read(*parts):
16 path = os.path.join(os.path.dirname(__file__), *parts)
17 with codecs.open(path, encoding='utf-8') as fobj:
18 return fobj.read()
19
20
21 def find_version(*file_paths):
22 version_file = read(*file_paths)
23 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
24 version_file, re.M)
25 if version_match:
26 return version_match.group(1)
27 raise RuntimeError("Unable to find version string.")
28
29
30 install_requires = [
31 'cached-property >= 1.2.0, < 2',
32 'docopt >= 0.6.1, < 0.7',
33 'PyYAML >= 3.10, < 4',
34 'requests >= 2.6.1, < 2.8',
35 'texttable >= 0.8.1, < 0.9',
36 'websocket-client >= 0.32.0, < 1.0',
37 'docker-py >= 1.8.1, < 2',
38 'dockerpty >= 0.4.1, < 0.5',
39 'six >= 1.3.0, < 2',
40 'jsonschema >= 2.5.1, < 3',
41 ]
42
43
44 tests_require = [
45 'pytest',
46 ]
47
48
49 if sys.version_info[:2] < (3, 4):
50 tests_require.append('mock >= 1.0.1')
51 install_requires.append('enum34 >= 1.0.4, < 2')
52
53
54 setup(
55 name='docker-compose',
56 version=find_version("compose", "__init__.py"),
57 description='Multi-container orchestration for Docker',
58 url='https://www.docker.com/',
59 author='Docker, Inc.',
60 license='Apache License 2.0',
61 packages=find_packages(exclude=['tests.*', 'tests']),
62 include_package_data=True,
63 test_suite='nose.collector',
64 install_requires=install_requires,
65 tests_require=tests_require,
66 entry_points="""
67 [console_scripts]
68 docker-compose=compose.cli.main:main
69 """,
70 classifiers=[
71 'Development Status :: 5 - Production/Stable',
72 'Environment :: Console',
73 'Intended Audience :: Developers',
74 'License :: OSI Approved :: Apache Software License',
75 'Programming Language :: Python :: 2',
76 'Programming Language :: Python :: 2.7',
77 'Programming Language :: Python :: 3',
78 'Programming Language :: Python :: 3.4',
79 ],
80 )
81
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
docker/compose
|
86b5ed1a84b5046089abcfded8922d6dc7e318a2
|
Unexpected result when using build args with default values
# Scenario:
## Files
`Dockerfile`:
```
FROM ubuntu:14.04
ARG FOO=1
RUN echo "-${FOO}-"
CMD /bin/bash
```
`docker-compose.yml`:
```
version: '2'
services:
test:
build:
context: .
args:
- FOO
```
## Execution:
```
$ ./docker-compose-1.6.2 --verbose config
compose.config.config.find: Using configuration files: ./docker-compose.yml
networks: {}
services:
test:
build:
args:
FOO: None
context: /home/riccardi/git/ses-docker/test-default-build-arg
version: '2.0'
volumes: {}
```
```
$ ./docker-compose-1.6.2 --verbose build
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.auth.auth.load_config: File doesn't exist
compose.cli.command.get_client: docker-compose version 1.6.2, build 4d72027
docker-py version: 1.7.2
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
compose.cli.command.get_client: Docker base_url: http+docker://localunixsocket
compose.cli.command.get_client: Docker version: KernelVersion=4.2.0-35-generic, Os=linux, BuildTime=2016-03-10T15:54:52.312835708+00:00, ApiVersion=1.22, Version=1.10.3, GitCommit=20f81dd, Arch=amd64, GoVersion=go1.5.3
compose.service.build: Building test
compose.cli.verbose_proxy.proxy_callable: docker build <- (pull=False, stream=True, nocache=False, tag=u'testdefaultbuildarg_test', buildargs={u'FOO': 'None'}, rm=True, forcerm=False, path='/home/riccardi/git/ses-docker/test-default-build-arg', dockerfile=None)
docker.api.build._set_auth_headers: Looking for auth config
docker.api.build._set_auth_headers: No auth config in memory - loading from filesystem
docker.auth.auth.load_config: File doesn't exist
docker.api.build._set_auth_headers: No auth config found
compose.cli.verbose_proxy.proxy_callable: docker build -> <generator object _stream_helper at 0x7f56bafb3a50>
Step 1 : FROM ubuntu:14.04
---> b549a9959a66
Step 2 : ARG FOO=1
---> Using cache
---> 4774113d6ec5
Step 3 : RUN echo "-${FOO}-"
---> Running in dabd31837074
-None-
---> f8a99349af3b
Removing intermediate container dabd31837074
Step 4 : CMD /bin/bash
---> Running in 487f5e789c38
---> 6c484f426fb5
Removing intermediate container 487f5e789c38
Successfully built 6c484f426fb5
compose.cli.verbose_proxy.proxy_callable: docker close <- ()
compose.cli.verbose_proxy.proxy_callable: docker close -> None
```
(Same result with 1.7.1-rc1, which includes PR #2938.)
# Issue
## Expected result:
prints `-1-`.
## Actual result:
prints `-None-`.
## Details:
Compose has no value for the `FOO` build arg in its environment, so it could either send an empty string to `docker build`, or better, not send this build arg to `docker build` at all.
The second option would be great: it would make it possible to use the build arg's default value as defined in the `Dockerfile`. (For now the workaround is to duplicate the default values from the `Dockerfile` into `.env`, which only works with >=1.7.0.)
The first option would still be better than the current behavior.
Current behavior: a value missing from Compose's environment is represented in Python as `None`, which is then cast to the string `"None"`; that is almost never what you want.
|
I guess if the value is unset we should omit it from the args sent to `build`.
@dnephin exactly, that would be my preferred solution. This way, default values defined in the `Dockerfile` could apply, avoiding duplication of default values.
This could be seen as a new feature though. The intermediate step, if needed, would be to just send an empty string instead of the `"None"` string.
> This could be seen as a new feature though. The intermediate step, if needed, would be to just send an empty string instead of the `None` string.
Yep, that would also be helpful for building images behind HTTP proxies.
I have the following `docker-compose.yml` file:
```
version: "2"
services:
hello:
build:
context: .
args:
- http_proxy
command: sh
```
and the following `Dockerfile`:
```
FROM alpine
RUN set -x && env
```
When I run `docker-compose build hello` _without_ the environment variable `http_proxy`, I get `None`:
```
$ docker-compose build --no-cache hello
Building hello
Step 1 : FROM alpine
---> 13e1761bf172
Step 2 : RUN set -x && env
---> Running in 0ec8d98bf685
HOSTNAME=27c9668b3d5e
SHLVL=1
HOME=/root
http_proxy=None
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
+ env
---> 212f8c97b934
Removing intermediate container 0ec8d98bf685
Successfully built 212f8c97b934
$
```
What I would expect is, in fact, the same behaviour as when `http_proxy` is defined to be empty:
```
$ env http_proxy= docker-compose build --no-cache hello
Building hello
Step 1 : FROM alpine
---> 13e1761bf172
Step 2 : RUN set -x && env
---> Running in 6bfec420e409
HOSTNAME=27c9668b3d5e
SHLVL=1
HOME=/root
http_proxy=
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
+ env
---> c5bf6c0e0190
Removing intermediate container 6bfec420e409
Successfully built c5bf6c0e0190
(docker-compose)carlos@elouard:~/x$
```
If that were the case, my `docker-compose.yml` and `Dockerfile` would work both at `$WORK`, where I have an HTTP proxy in front of me, and at home, where I do not have such constraint.
|
2016-05-11T21:26:24Z
|
<patch>
diff --git a/compose/utils.py b/compose/utils.py
--- a/compose/utils.py
+++ b/compose/utils.py
@@ -95,4 +95,4 @@ def microseconds_from_time_nano(time_nano):
def build_string_dict(source_dict):
- return dict((k, str(v)) for k, v in source_dict.items())
+ return dict((k, str(v if v is not None else '')) for k, v in source_dict.items())
</patch>
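A quick standalone check of the changed helper (re-declared here for illustration rather than imported from compose):

```
def build_string_dict(source_dict):
    # Patched behaviour: unset (None) values become empty strings instead of the string "None".
    return dict((k, str(v if v is not None else '')) for k, v in source_dict.items())

print(build_string_dict({"FOO": None, "BAR": 1}))
# {'FOO': '', 'BAR': '1'}
```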
|
[]
|
[]
| |||
pandas-dev__pandas-6883
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: SQL legacy mode to_sql 'index' kwarg behaviour
A leftover from #6735. In that PR, multi-index support was added to the new sqlalchemy-based `to_sql` and `read_table` functions. However, I did not change anything in the legacy `to_sql` functions.
This has the following consequences for the `index` handling in legacy mode (https://github.com/pydata/pandas/blob/18bd0d64bf1fcdc7e86e743332dab29e9a155909/pandas/io/sql.py#L808):
- no multi-index support: depending on the `con` type (DBAPI connection or sqlalchemy connection), writing a multi-index dataframe will either work or raise an error.
- in 0.13.1 and earlier there was actually no support for writing the index at all (it was simply not written), so for `write_frame` (as `to_sql` did not yet exist) this is an **API change** in legacy mode, because writing the index now defaults to True.
We could also opt to remove this from legacy mode entirely (leaving it as it was). However, that is also somewhat complicated: it is not easy to detect whether the user explicitly passed the `index` keyword in legacy mode (in order to warn that it is ignored), since it defaults to True. It seems to me that we should either support it fully (with multi-index, as for the sqlalchemy-based path) or not at all.
More generally: how do we see the 'legacy' mode? Keep it only for backwards compatibility? Or is it useful to have something that does not depend on sqlalchemy (and so also enhance it, rather than restrict it to bug fixes)?
@hayd @mangecoeur @danielballan
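For reference, a minimal sketch of the first consequence listed above (the exact legacy-mode outcome is an assumption here, not a verified result):

```
import sqlite3

import pandas as pd
from sqlalchemy import create_engine

df = pd.DataFrame(
    {"value": [1, 2]},
    index=pd.MultiIndex.from_tuples([("a", 1), ("b", 2)], names=["k1", "k2"]),
)

# sqlalchemy engine: index levels are written as ordinary columns.
engine = create_engine("sqlite:///:memory:")
df.to_sql("t_sqlalchemy", engine, index=True)

# raw DBAPI connection (legacy code path): the same call may fail or handle the index differently.
con = sqlite3.connect(":memory:")
df.to_sql("t_legacy", con, index=True)
```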
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.6.1 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](http://code.google.com/p/numexpr/)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, for writing .xlsx files
118 - xlrd >= 0.9.0
119 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
120 - Alternative Excel writer.
121 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
122 - Needed for `pandas.io.gbq`
123 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
124 - One of the following combinations of libraries is needed to use the
125 top-level [`pandas.read_html`][read-html-docs] function:
126 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
127 recent version of [html5lib][html5lib] is okay.)
128 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
129 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
130 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
131 for reasons as to why you should probably **not** take this approach.
132
133 #### Notes about HTML parsing libraries
134 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
135 either [lxml][lxml] or [html5lib][html5lib] or both.
136 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
137 installed.
138 - You are strongly encouraged to read [HTML reading
139 gotchas][html-gotchas]. It explains issues surrounding the
140 installation and usage of the above three libraries.
141 - You may need to install an older version of
142 [BeautifulSoup4][BeautifulSoup4]:
143 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
144 32-bit Ubuntu/Debian
145 - Additionally, if you're using [Anaconda][Anaconda] you should
146 definitely read [the gotchas about HTML parsing
147 libraries][html-gotchas]
148 - If you're on a system with `apt-get` you can do
149
150 ```sh
151 sudo apt-get build-dep python-lxml
152 ```
153
154 to get the necessary dependencies for installation of [lxml][lxml].
155 This will prevent further headaches down the line.
156
157 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
158 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
159 [lxml]: http://lxml.de
160 [Anaconda]: https://store.continuum.io/cshop/anaconda
161 [NumPy]: http://numpy.scipy.org/
162 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
163 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
164
165 ## Installation from sources
166 To install pandas from source you need Cython in addition to the normal
167 dependencies above. Cython can be installed from pypi:
168
169 ```sh
170 pip install cython
171 ```
172
173 In the `pandas` directory (same one where you found this file after
174 cloning the git repo), execute:
175
176 ```sh
177 python setup.py install
178 ```
179
180 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
181
182 ```sh
183 python setup.py develop
184 ```
185
186 Alternatively, you can use `pip` if you want all the dependencies pulled
187 in automatically (the `-e` option is for installing it in [development
188 mode](http://www.pip-installer.org/en/latest/usage.html)):
189
190 ```sh
191 pip install -e .
192 ```
193
194 On Windows, you will need to install MinGW and execute:
195
196 ```sh
197 python setup.py build --compiler=mingw32
198 python setup.py install
199 ```
200
201 See http://pandas.pydata.org/ for more information.
202
203 ## License
204 BSD
205
206 ## Documentation
207 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
208
209 The Sphinx documentation should provide a good starting point for learning how
210 to use the library. Expect the docs to continue to expand as time goes on.
211
212 ## Background
213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
214 has been under active development since then.
215
216 ## Discussion and Development
217 Since pandas development is related to a number of other scientific
218 Python projects, questions are welcome on the scipy-user mailing
219 list. Specialized discussions or design issues should take place on
220 the pystatsmodels mailing list / Google group, where
221 ``scikits.statsmodels`` and other libraries will also be discussed:
222
223 http://groups.google.com/group/pystatsmodels
224
[end of README.md]
[start of pandas/io/sql.py]
1 """
2 Collection of query wrappers / abstractions to both facilitate data
3 retrieval and to reduce dependency on DB-specific API.
4 """
5 from __future__ import print_function, division
6 from datetime import datetime, date, timedelta
7
8 import warnings
9 import itertools
10 import numpy as np
11
12 import pandas.core.common as com
13 from pandas.compat import lzip, map, zip, raise_with_traceback, string_types
14 from pandas.core.api import DataFrame, Series
15 from pandas.core.base import PandasObject
16 from pandas.tseries.tools import to_datetime
17
18
19 class SQLAlchemyRequired(ImportError):
20 pass
21
22
23 class DatabaseError(IOError):
24 pass
25
26
27 #------------------------------------------------------------------------------
28 # Helper functions
29
30 def _convert_params(sql, params):
31 """convert sql and params args to DBAPI2.0 compliant format"""
32 args = [sql]
33 if params is not None:
34 if hasattr(params, 'keys'): # test if params is a mapping
35 args += [params]
36 else:
37 args += [list(params)]
38 return args
39
40
41 def _safe_col_name(col_name):
42 #TODO: probably want to forbid database reserved names, such as "database"
43 return col_name.strip().replace(' ', '_')
44
45
46 def _handle_date_column(col, format=None):
47 if isinstance(format, dict):
48 return to_datetime(col, **format)
49 else:
50 if format in ['D', 's', 'ms', 'us', 'ns']:
51 return to_datetime(col, coerce=True, unit=format)
52 elif issubclass(col.dtype.type, np.floating) or issubclass(col.dtype.type, np.integer):
53 # parse dates as timestamp
54 format = 's' if format is None else format
55 return to_datetime(col, coerce=True, unit=format)
56 else:
57 return to_datetime(col, coerce=True, format=format)
58
59
60 def _parse_date_columns(data_frame, parse_dates):
61 """ Force non-datetime columns to be read as such.
62 Supports both string formatted and integer timestamp columns
63 """
64 # handle non-list entries for parse_dates gracefully
65 if parse_dates is True or parse_dates is None or parse_dates is False:
66 parse_dates = []
67
68 if not hasattr(parse_dates, '__iter__'):
69 parse_dates = [parse_dates]
70
71 for col_name in parse_dates:
72 df_col = data_frame[col_name]
73 try:
74 fmt = parse_dates[col_name]
75 except TypeError:
76 fmt = None
77 data_frame[col_name] = _handle_date_column(df_col, format=fmt)
78
79 return data_frame
80
81
82 def execute(sql, con, cur=None, params=None, flavor='sqlite'):
83 """
84 Execute the given SQL query using the provided connection object.
85
86 Parameters
87 ----------
88 sql : string
89 Query to be executed
90 con : SQLAlchemy engine or DBAPI2 connection (legacy mode)
91 Using SQLAlchemy makes it possible to use any DB supported by that
92 library.
93 If a DBAPI2 object, a supported SQL flavor must also be provided
94 cur : deprecated, cursor is obtained from connection
95 params : list or tuple, optional
96 List of parameters to pass to execute method.
97 flavor : string "sqlite", "mysql"
98 Specifies the flavor of SQL to use.
99 Ignored when using SQLAlchemy engine. Required when using DBAPI2 connection.
100 Returns
101 -------
102 Results Iterable
103 """
104 pandas_sql = pandasSQL_builder(con, flavor=flavor)
105 args = _convert_params(sql, params)
106 return pandas_sql.execute(*args)
107
108
109 def tquery(sql, con, cur=None, params=None, flavor='sqlite'):
110 """
111 Returns list of tuples corresponding to each row in given sql
112 query.
113
114 If only one column selected, then plain list is returned.
115
116 Parameters
117 ----------
118 sql: string
119 SQL query to be executed
120 con: SQLAlchemy engine or DBAPI2 connection (legacy mode)
121 Using SQLAlchemy makes it possible to use any DB supported by that
122 library.
123 If a DBAPI2 object is given, a supported SQL flavor must also be provided
124 cur: deprecated, cursor is obtained from connection
125 params: list or tuple, optional
126 List of parameters to pass to execute method.
127 flavor : string "sqlite", "mysql"
128 Specifies the flavor of SQL to use.
129 Ignored when using SQLAlchemy engine. Required when using DBAPI2
130 connection.
131 Returns
132 -------
133 Results Iterable
134 """
135 warnings.warn(
136 "tquery is depreciated, and will be removed in future versions",
137 DeprecationWarning)
138
139 pandas_sql = pandasSQL_builder(con, flavor=flavor)
140 args = _convert_params(sql, params)
141 return pandas_sql.tquery(*args)
142
143
144 def uquery(sql, con, cur=None, params=None, engine=None, flavor='sqlite'):
145 """
146 Does the same thing as tquery, but instead of returning results, it
147 returns the number of rows affected. Good for update queries.
148
149 Parameters
150 ----------
151 sql: string
152 SQL query to be executed
153 con: SQLAlchemy engine or DBAPI2 connection (legacy mode)
154 Using SQLAlchemy makes it possible to use any DB supported by that
155 library.
156 If a DBAPI2 object is given, a supported SQL flavor must also be provided
157 cur: deprecated, cursor is obtained from connection
158 params: list or tuple, optional
159 List of parameters to pass to execute method.
160 flavor : string "sqlite", "mysql"
161 Specifies the flavor of SQL to use.
162 Ignored when using SQLAlchemy engine. Required when using DBAPI2
163 connection.
164 Returns
165 -------
166 Number of affected rows
167 """
168 warnings.warn(
169 "uquery is depreciated, and will be removed in future versions",
170 DeprecationWarning)
171 pandas_sql = pandasSQL_builder(con, flavor=flavor)
172 args = _convert_params(sql, params)
173 return pandas_sql.uquery(*args)
174
175
176 #------------------------------------------------------------------------------
177 # Read and write to DataFrames
178
179
180 def read_sql(sql, con, index_col=None, flavor='sqlite', coerce_float=True,
181 params=None, parse_dates=None):
182 """
183 Returns a DataFrame corresponding to the result set of the query
184 string.
185
186 Optionally provide an `index_col` parameter to use one of the
187 columns as the index, otherwise default integer index will be used.
188
189 Parameters
190 ----------
191 sql : string
192 SQL query to be executed
193 con : SQLAlchemy engine or DBAPI2 connection (legacy mode)
194 Using SQLAlchemy makes it possible to use any DB supported by that
195 library.
196 If a DBAPI2 object is given, a supported SQL flavor must also be provided
197 index_col : string, optional
198 column name to use for the returned DataFrame object.
199 flavor : string, {'sqlite', 'mysql'}
200 The flavor of SQL to use. Ignored when using
201 SQLAlchemy engine. Required when using DBAPI2 connection.
202 coerce_float : boolean, default True
203 Attempt to convert values to non-string, non-numeric objects (like
204 decimal.Decimal) to floating point, useful for SQL result sets
205 cur : deprecated, cursor is obtained from connection
206 params : list, tuple or dict, optional
207 List of parameters to pass to execute method.
208 parse_dates : list or dict
209 - List of column names to parse as dates
210 - Dict of ``{column_name: format string}`` where format string is
211 strftime compatible in case of parsing string times or is one of
212 (D, s, ns, ms, us) in case of parsing integer timestamps
213 - Dict of ``{column_name: arg dict}``, where the arg dict corresponds
214 to the keyword arguments of :func:`pandas.to_datetime`
215 Especially useful with databases without native Datetime support,
216 such as SQLite
217
218 Returns
219 -------
220 DataFrame
221
222 See also
223 --------
224 read_table
225
226 """
227 pandas_sql = pandasSQL_builder(con, flavor=flavor)
228 return pandas_sql.read_sql(sql,
229 index_col=index_col,
230 params=params,
231 coerce_float=coerce_float,
232 parse_dates=parse_dates)
233
234
235 def to_sql(frame, name, con, flavor='sqlite', if_exists='fail', index=True,
236 index_label=None):
237 """
238 Write records stored in a DataFrame to a SQL database.
239
240 Parameters
241 ----------
242 frame : DataFrame
243 name : string
244 Name of SQL table
245 con : SQLAlchemy engine or DBAPI2 connection (legacy mode)
246 Using SQLAlchemy makes it possible to use any DB supported by that
247 library.
248 If a DBAPI2 object is given, a supported SQL flavor must also be provided
249 flavor : {'sqlite', 'mysql'}, default 'sqlite'
250 The flavor of SQL to use. Ignored when using SQLAlchemy engine.
251 Required when using DBAPI2 connection.
252 if_exists : {'fail', 'replace', 'append'}, default 'fail'
253 - fail: If table exists, do nothing.
254 - replace: If table exists, drop it, recreate it, and insert data.
255 - append: If table exists, insert data. Create if does not exist.
256 index : boolean, default True
257 Write DataFrame index as a column
258 index_label : string or sequence, default None
259 Column label for index column(s). If None is given (default) and
260 `index` is True, then the index names are used.
261 A sequence should be given if the DataFrame uses MultiIndex.
262
263 """
264 pandas_sql = pandasSQL_builder(con, flavor=flavor)
265
266 if isinstance(frame, Series):
267 frame = frame.to_frame()
268 elif not isinstance(frame, DataFrame):
269 raise NotImplementedError
270
271 pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
272 index_label=index_label)
273
274
275 def has_table(table_name, con, meta=None, flavor='sqlite'):
276 """
277 Check if DataBase has named table.
278
279 Parameters
280 ----------
281 table_name: string
282 Name of SQL table
283 con: SQLAlchemy engine or DBAPI2 connection (legacy mode)
284 Using SQLAlchemy makes it possible to use any DB supported by that
285 library.
286 If a DBAPI2 object is given, a supported SQL flavor name must also be provided
287 flavor: {'sqlite', 'mysql'}, default 'sqlite'
288 The flavor of SQL to use. Ignored when using SQLAlchemy engine.
289 Required when using DBAPI2 connection.
290
291 Returns
292 -------
293 boolean
294 """
295 pandas_sql = pandasSQL_builder(con, flavor=flavor)
296 return pandas_sql.has_table(table_name)
297
298
299 def read_table(table_name, con, meta=None, index_col=None, coerce_float=True,
300 parse_dates=None, columns=None):
301 """Given a table name and SQLAlchemy engine, return a DataFrame.
302
303 Type conversions will be done automatically.
304
305 Parameters
306 ----------
307 table_name : string
308 Name of SQL table in database
309 con : SQLAlchemy engine
310 Legacy mode not supported
311 meta : SQLAlchemy meta, optional
312 If omitted MetaData is reflected from engine
313 index_col : string or sequence of strings, optional
314 Column(s) to set as index.
315 coerce_float : boolean, default True
316 Attempt to convert values to non-string, non-numeric objects (like
317 decimal.Decimal) to floating point. Can result in loss of precision.
318 parse_dates : list or dict
319 - List of column names to parse as dates
320 - Dict of ``{column_name: format string}`` where format string is
321 strftime compatible in case of parsing string times or is one of
322 (D, s, ns, ms, us) in case of parsing integer timestamps
323 - Dict of ``{column_name: arg dict}``, where the arg dict corresponds
324 to the keyword arguments of :func:`pandas.to_datetime`
325 Especially useful with databases without native Datetime support,
326 such as SQLite
327 columns : list, optional
328 List of column names to select from sql table
329
330 Returns
331 -------
332 DataFrame
333
334 See also
335 --------
336 read_sql
337
338 """
339 pandas_sql = PandasSQLAlchemy(con, meta=meta)
340 table = pandas_sql.read_table(table_name,
341 index_col=index_col,
342 coerce_float=coerce_float,
343 parse_dates=parse_dates,
344 columns=columns)
345
346 if table is not None:
347 return table
348 else:
349 raise ValueError("Table %s not found" % table_name, con)
350
351
352 def pandasSQL_builder(con, flavor=None, meta=None):
353 """
354 Convenience function to return the correct PandasSQL subclass based on the
355 provided parameters
356 """
357 try:
358 import sqlalchemy
359
360 if isinstance(con, sqlalchemy.engine.Engine):
361 return PandasSQLAlchemy(con, meta=meta)
362 else:
363 warnings.warn(
364 """Not an SQLAlchemy engine,
365 attempting to use as legacy DBAPI connection""")
366 if flavor is None:
367 raise ValueError(
368 """PandasSQL must be created with an SQLAlchemy engine
369 or a DBAPI2 connection and SQL flavour""")
370 else:
371 return PandasSQLLegacy(con, flavor)
372
373 except ImportError:
374 warnings.warn("SQLAlchemy not installed, using legacy mode")
375 if flavor is None:
376 raise SQLAlchemyRequired
377 else:
378 return PandasSQLLegacy(con, flavor)
379
380
381 class PandasSQLTable(PandasObject):
382 """
383 For mapping Pandas tables to SQL tables.
384 Uses the fact that the table is reflected by SQLAlchemy to
385 do better type conversions.
386 Also holds various flags needed to avoid having to
387 pass them between functions all the time.
388 """
389 # TODO: support for multiIndex
390 def __init__(self, name, pandas_sql_engine, frame=None, index=True,
391 if_exists='fail', prefix='pandas', index_label=None):
392 self.name = name
393 self.pd_sql = pandas_sql_engine
394 self.prefix = prefix
395 self.frame = frame
396 self.index = self._index_name(index, index_label)
397
398 if frame is not None:
399 # We want to write a frame
400 if self.pd_sql.has_table(self.name):
401 if if_exists == 'fail':
402 raise ValueError("Table '%s' already exists." % name)
403 elif if_exists == 'replace':
404 self.pd_sql.drop_table(self.name)
405 self.table = self._create_table_statement()
406 self.create()
407 elif if_exists == 'append':
408 self.table = self.pd_sql.get_table(self.name)
409 if self.table is None:
410 self.table = self._create_table_statement()
411 else:
412 self.table = self._create_table_statement()
413 self.create()
414 else:
415 # no data provided, read-only mode
416 self.table = self.pd_sql.get_table(self.name)
417
418 if self.table is None:
419 raise ValueError("Could not init table '%s'" % name)
420
421 def exists(self):
422 return self.pd_sql.has_table(self.name)
423
424 def sql_schema(self):
425 return str(self.table.compile())
426
427 def create(self):
428 self.table.create()
429
430 def insert_statement(self):
431 return self.table.insert()
432
433 def maybe_asscalar(self, i):
434 try:
435 return np.asscalar(i)
436 except AttributeError:
437 return i
438
439 def insert(self):
440 ins = self.insert_statement()
441 data_list = []
442
443 if self.index is not None:
444 temp = self.frame.copy()
445 temp.index.names = self.index
446 try:
447 temp.reset_index(inplace=True)
448 except ValueError as err:
449 raise ValueError(
450 "duplicate name in index/columns: {0}".format(err))
451 else:
452 temp = self.frame
453
454 keys = temp.columns
455
456 for t in temp.itertuples():
457 data = dict((k, self.maybe_asscalar(v))
458 for k, v in zip(keys, t[1:]))
459 data_list.append(data)
460
461 self.pd_sql.execute(ins, data_list)
462
463 def read(self, coerce_float=True, parse_dates=None, columns=None):
464
465 if columns is not None and len(columns) > 0:
466 from sqlalchemy import select
467 cols = [self.table.c[n] for n in columns]
468 if self.index is not None:
469 [cols.insert(0, self.table.c[idx]) for idx in self.index[::-1]]
470 sql_select = select(cols)
471 else:
472 sql_select = self.table.select()
473
474 result = self.pd_sql.execute(sql_select)
475 data = result.fetchall()
476 column_names = result.keys()
477
478 self.frame = DataFrame.from_records(
479 data, columns=column_names, coerce_float=coerce_float)
480
481 self._harmonize_columns(parse_dates=parse_dates)
482
483 if self.index is not None:
484 self.frame.set_index(self.index, inplace=True)
485
486 return self.frame
487
488 def _index_name(self, index, index_label):
489 # for writing: index=True to include index in sql table
490 if index is True:
491 nlevels = self.frame.index.nlevels
492 # if index_label is specified, set this as index name(s)
493 if index_label is not None:
494 if not isinstance(index_label, list):
495 index_label = [index_label]
496 if len(index_label) != nlevels:
497 raise ValueError(
498 "Length of 'index_label' should match number of "
499 "levels, which is {0}".format(nlevels))
500 else:
501 return index_label
502 # return the used column labels for the index columns
503 if nlevels == 1 and 'index' not in self.frame.columns and self.frame.index.name is None:
504 return ['index']
505 else:
506 return [l if l is not None else "level_{0}".format(i)
507 for i, l in enumerate(self.frame.index.names)]
508
509 # for reading: index=(list of) string to specify column to set as index
510 elif isinstance(index, string_types):
511 return [index]
512 elif isinstance(index, list):
513 return index
514 else:
515 return None
516
517 def _create_table_statement(self):
518 from sqlalchemy import Table, Column
519
520 safe_columns = map(_safe_col_name, self.frame.dtypes.index)
521 column_types = map(self._sqlalchemy_type, self.frame.dtypes)
522
523 columns = [Column(name, typ)
524 for name, typ in zip(safe_columns, column_types)]
525
526 if self.index is not None:
527 for i, idx_label in enumerate(self.index[::-1]):
528 idx_type = self._sqlalchemy_type(
529 self.frame.index.get_level_values(i))
530 columns.insert(0, Column(idx_label, idx_type, index=True))
531
532 return Table(self.name, self.pd_sql.meta, *columns)
533
534 def _harmonize_columns(self, parse_dates=None):
535 """ Make a data_frame's column type align with an sql_table
536 column types
537 Need to work around limited NA value support.
538 Floats are always fine, ints must always
539 be floats if there are Null values.
540 Booleans are hard because converting bool column with None replaces
541 all Nones with false. Therefore only convert bool if there are no
542 NA values.
543 Datetimes should already be converted
544 to np.datetime if supported, but here we also force conversion
545 if required
546 """
547 # handle non-list entries for parse_dates gracefully
548 if parse_dates is True or parse_dates is None or parse_dates is False:
549 parse_dates = []
550
551 if not hasattr(parse_dates, '__iter__'):
552 parse_dates = [parse_dates]
553
554 for sql_col in self.table.columns:
555 col_name = sql_col.name
556 try:
557 df_col = self.frame[col_name]
558 # the type the dataframe column should have
559 col_type = self._numpy_type(sql_col.type)
560
561 if col_type is datetime or col_type is date:
562 if not issubclass(df_col.dtype.type, np.datetime64):
563 self.frame[col_name] = _handle_date_column(df_col)
564
565 elif col_type is float:
566 # floats support NA, can always convert!
567 self.frame[col_name].astype(col_type, copy=False)
568
569 elif len(df_col) == df_col.count():
570 # No NA values, can convert ints and bools
571 if col_type is int or col_type is bool:
572 self.frame[col_name].astype(col_type, copy=False)
573
574 # Handle date parsing
575 if col_name in parse_dates:
576 try:
577 fmt = parse_dates[col_name]
578 except TypeError:
579 fmt = None
580 self.frame[col_name] = _handle_date_column(
581 df_col, format=fmt)
582
583 except KeyError:
584 pass # this column not in results
585
586 def _sqlalchemy_type(self, arr_or_dtype):
587 from sqlalchemy.types import Integer, Float, Text, Boolean, DateTime, Date, Interval
588
589 if arr_or_dtype is date:
590 return Date
591 if com.is_datetime64_dtype(arr_or_dtype):
592 try:
593 tz = arr_or_dtype.tzinfo
594 return DateTime(timezone=True)
595 except:
596 return DateTime
597 if com.is_timedelta64_dtype(arr_or_dtype):
598 return Interval
599 elif com.is_float_dtype(arr_or_dtype):
600 return Float
601 elif com.is_integer_dtype(arr_or_dtype):
602 # TODO: Refine integer size.
603 return Integer
604 elif com.is_bool(arr_or_dtype):
605 return Boolean
606 return Text
607
608 def _numpy_type(self, sqltype):
609 from sqlalchemy.types import Integer, Float, Boolean, DateTime, Date
610
611 if isinstance(sqltype, Float):
612 return float
613 if isinstance(sqltype, Integer):
614 # TODO: Refine integer size.
615 return int
616 if isinstance(sqltype, DateTime):
617 # Caution: np.datetime64 is also a subclass of np.number.
618 return datetime
619 if isinstance(sqltype, Date):
620 return date
621 if isinstance(sqltype, Boolean):
622 return bool
623 return object
624
625
626 class PandasSQL(PandasObject):
627 """
628 Subclasses should define read_sql and to_sql
629 """
630
631 def read_sql(self, *args, **kwargs):
632 raise ValueError(
633 "PandasSQL must be created with an SQLAlchemy engine or connection+sql flavor")
634
635 def to_sql(self, *args, **kwargs):
636 raise ValueError(
637 "PandasSQL must be created with an SQLAlchemy engine or connection+sql flavor")
638
639
640 class PandasSQLAlchemy(PandasSQL):
641 """
642 This class enables conversion between DataFrame and SQL databases
643 using SQLAlchemy to handle database abstraction
644 """
645
646 def __init__(self, engine, meta=None):
647 self.engine = engine
648 if not meta:
649 from sqlalchemy.schema import MetaData
650 meta = MetaData(self.engine)
651 meta.reflect(self.engine)
652
653 self.meta = meta
654
655 def execute(self, *args, **kwargs):
656 """Simple passthrough to SQLAlchemy engine"""
657 return self.engine.execute(*args, **kwargs)
658
659 def tquery(self, *args, **kwargs):
660 result = self.execute(*args, **kwargs)
661 return result.fetchall()
662
663 def uquery(self, *args, **kwargs):
664 result = self.execute(*args, **kwargs)
665 return result.rowcount
666
667 def read_sql(self, sql, index_col=None, coerce_float=True,
668 parse_dates=None, params=None):
669 args = _convert_params(sql, params)
670
671 result = self.execute(*args)
672 data = result.fetchall()
673 columns = result.keys()
674
675 data_frame = DataFrame.from_records(
676 data, columns=columns, coerce_float=coerce_float)
677
678 _parse_date_columns(data_frame, parse_dates)
679
680 if index_col is not None:
681 data_frame.set_index(index_col, inplace=True)
682
683 return data_frame
684
685 def to_sql(self, frame, name, if_exists='fail', index=True,
686 index_label=None):
687 table = PandasSQLTable(
688 name, self, frame=frame, index=index, if_exists=if_exists,
689 index_label=index_label)
690 table.insert()
691
692 @property
693 def tables(self):
694 return self.meta.tables
695
696 def has_table(self, name):
697 if self.meta.tables.get(name) is not None:
698 return True
699 else:
700 return False
701
702 def get_table(self, table_name):
703 return self.meta.tables.get(table_name)
704
705 def read_table(self, table_name, index_col=None, coerce_float=True,
706 parse_dates=None, columns=None):
707
708 table = PandasSQLTable(table_name, self, index=index_col)
709 return table.read(coerce_float=coerce_float,
710 parse_dates=parse_dates, columns=columns)
711
712 def drop_table(self, table_name):
713 if self.engine.has_table(table_name):
714 self.get_table(table_name).drop()
715 self.meta.clear()
716 self.meta.reflect()
717
718 def _create_sql_schema(self, frame, table_name):
719 table = PandasSQLTable(table_name, self, frame=frame)
720 return str(table.compile())
721
722
723 # ---- SQL without SQLAlchemy ---
724 # Flavor-specific SQL strings and handler class for access to DBs without
725 # SQLAlchemy installed
726 # SQL type conversions for each DB
727 _SQL_TYPES = {
728 'text': {
729 'mysql': 'VARCHAR (63)',
730 'sqlite': 'TEXT',
731 },
732 'float': {
733 'mysql': 'FLOAT',
734 'sqlite': 'REAL',
735 },
736 'int': {
737 'mysql': 'BIGINT',
738 'sqlite': 'INTEGER',
739 },
740 'datetime': {
741 'mysql': 'DATETIME',
742 'sqlite': 'TIMESTAMP',
743 },
744 'date': {
745 'mysql': 'DATE',
746 'sqlite': 'TIMESTAMP',
747 },
748 'bool': {
749 'mysql': 'BOOLEAN',
750 'sqlite': 'INTEGER',
751 }
752 }
753
754 # SQL enquote and wildcard symbols
755 _SQL_SYMB = {
756 'mysql': {
757 'br_l': '`',
758 'br_r': '`',
759 'wld': '%s'
760 },
761 'sqlite': {
762 'br_l': '[',
763 'br_r': ']',
764 'wld': '?'
765 }
766 }
767
768
769 class PandasSQLTableLegacy(PandasSQLTable):
770 """Patch the PandasSQLTable for legacy support.
771 Instead of a table variable just use the Create Table
772 statement"""
773 def sql_schema(self):
774 return str(self.table)
775
776 def create(self):
777 self.pd_sql.execute(self.table)
778
779 def insert_statement(self):
780 # Replace spaces in DataFrame column names with _.
781 safe_names = [_safe_col_name(n) for n in self.frame.dtypes.index]
782 flv = self.pd_sql.flavor
783 br_l = _SQL_SYMB[flv]['br_l'] # left val quote char
784 br_r = _SQL_SYMB[flv]['br_r'] # right val quote char
785 wld = _SQL_SYMB[flv]['wld'] # wildcard char
786
787 if self.index is not None:
788 safe_names.insert(0, self.index)
789
790 bracketed_names = [br_l + column + br_r for column in safe_names]
791 col_names = ','.join(bracketed_names)
792 wildcards = ','.join([wld] * len(safe_names))
793 insert_statement = 'INSERT INTO %s (%s) VALUES (%s)' % (
794 self.name, col_names, wildcards)
795 return insert_statement
796
797 def insert(self):
798 ins = self.insert_statement()
799 cur = self.pd_sql.con.cursor()
800 for r in self.frame.itertuples():
801 data = [self.maybe_asscalar(v) for v in r[1:]]
802 if self.index is not None:
803 data.insert(0, self.maybe_asscalar(r[0]))
804 cur.execute(ins, tuple(data))
805 cur.close()
806 self.pd_sql.con.commit()
807
808 def _index_name(self, index, index_label):
809 if index is True:
810 if self.frame.index.name is not None:
811 return _safe_col_name(self.frame.index.name)
812 else:
813 return 'pandas_index'
814 elif isinstance(index, string_types):
815 return index
816 else:
817 return None
818
819 def _create_table_statement(self):
820 "Return a CREATE TABLE statement to suit the contents of a DataFrame."
821
822 # Replace spaces in DataFrame column names with _.
823 safe_columns = [_safe_col_name(n) for n in self.frame.dtypes.index]
824 column_types = [self._sql_type_name(typ) for typ in self.frame.dtypes]
825
826 if self.index is not None:
827 safe_columns.insert(0, self.index)
828 column_types.insert(0, self._sql_type_name(self.frame.index.dtype))
829 flv = self.pd_sql.flavor
830
831 br_l = _SQL_SYMB[flv]['br_l'] # left val quote char
832 br_r = _SQL_SYMB[flv]['br_r'] # right val quote char
833
834 col_template = br_l + '%s' + br_r + ' %s'
835
836 columns = ',\n '.join(col_template %
837 x for x in zip(safe_columns, column_types))
838 template = """CREATE TABLE %(name)s (
839 %(columns)s
840 )"""
841 create_statement = template % {'name': self.name, 'columns': columns}
842 return create_statement
843
844 def _sql_type_name(self, dtype):
845 pytype = dtype.type
846 pytype_name = "text"
847 if issubclass(pytype, np.floating):
848 pytype_name = "float"
849 elif issubclass(pytype, np.integer):
850 pytype_name = "int"
851 elif issubclass(pytype, np.datetime64) or pytype is datetime:
852 # Caution: np.datetime64 is also a subclass of np.number.
853 pytype_name = "datetime"
854 elif pytype is datetime.date:
855 pytype_name = "date"
856 elif issubclass(pytype, np.bool_):
857 pytype_name = "bool"
858
859 return _SQL_TYPES[pytype_name][self.pd_sql.flavor]
860
861
862 class PandasSQLLegacy(PandasSQL):
863
864 def __init__(self, con, flavor):
865 self.con = con
866 if flavor not in ['sqlite', 'mysql']:
867 raise NotImplementedError
868 else:
869 self.flavor = flavor
870
871 def execute(self, *args, **kwargs):
872 try:
873 cur = self.con.cursor()
874 if kwargs:
875 cur.execute(*args, **kwargs)
876 else:
877 cur.execute(*args)
878 return cur
879 except Exception as e:
880 try:
881 self.con.rollback()
882 except Exception: # pragma: no cover
883 ex = DatabaseError(
884 "Execution failed on sql: %s\n%s\nunable to rollback" % (args[0], e))
885 raise_with_traceback(ex)
886
887 ex = DatabaseError("Execution failed on sql: %s" % args[0])
888 raise_with_traceback(ex)
889
890 def tquery(self, *args):
891 cur = self.execute(*args)
892 result = self._fetchall_as_list(cur)
893
894 # This makes into tuples
895 if result and len(result[0]) == 1:
896 # python 3 compat
897 result = list(lzip(*result)[0])
898 elif result is None: # pragma: no cover
899 result = []
900 return result
901
902 def uquery(self, *args):
903 cur = self.execute(*args)
904 return cur.rowcount
905
906 def read_sql(self, sql, index_col=None, coerce_float=True, params=None,
907 parse_dates=None):
908 args = _convert_params(sql, params)
909 cursor = self.execute(*args)
910 columns = [col_desc[0] for col_desc in cursor.description]
911 data = self._fetchall_as_list(cursor)
912 cursor.close()
913
914 data_frame = DataFrame.from_records(
915 data, columns=columns, coerce_float=coerce_float)
916
917 _parse_date_columns(data_frame, parse_dates)
918
919 if index_col is not None:
920 data_frame.set_index(index_col, inplace=True)
921 return data_frame
922
923 def _fetchall_as_list(self, cur):
924 result = cur.fetchall()
925 if not isinstance(result, list):
926 result = list(result)
927 return result
928
929 def to_sql(self, frame, name, if_exists='fail', index=True,
930 index_label=None):
931 """
932 Write records stored in a DataFrame to a SQL database.
933
934 Parameters
935 ----------
936 frame: DataFrame
937 name: name of SQL table
938 flavor: {'sqlite', 'mysql', 'postgres'}, default 'sqlite'
939 if_exists: {'fail', 'replace', 'append'}, default 'fail'
940 fail: If table exists, do nothing.
941 replace: If table exists, drop it, recreate it, and insert data.
942 append: If table exists, insert data. Create if does not exist.
943 index_label : ignored (only used in sqlalchemy mode)
944 """
945 table = PandasSQLTableLegacy(
946 name, self, frame=frame, index=index, if_exists=if_exists)
947 table.insert()
948
949 def has_table(self, name):
950 flavor_map = {
951 'sqlite': ("SELECT name FROM sqlite_master "
952 "WHERE type='table' AND name='%s';") % name,
953 'mysql': "SHOW TABLES LIKE '%s'" % name}
954 query = flavor_map.get(self.flavor)
955
956 return len(self.tquery(query)) > 0
957
958 def get_table(self, table_name):
959 return None # not supported in Legacy mode
960
961 def drop_table(self, name):
962 drop_sql = "DROP TABLE %s" % name
963 self.execute(drop_sql)
964
965
966 # legacy names, with deprecation warnings and copied docs
967 def get_schema(frame, name, con, flavor='sqlite'):
968 """
969 Get the SQL db table schema for the given frame
970
971 Parameters
972 ----------
973 frame: DataFrame
974 name: name of SQL table
975 con: an open SQL database connection object
976 engine: an SQLAlchemy engine - replaces connection and flavor
977 flavor: {'sqlite', 'mysql', 'postgres'}, default 'sqlite'
978
979 """
980 warnings.warn(
981 "get_schema is depreciated", DeprecationWarning)
982 pandas_sql = pandasSQL_builder(con=con, flavor=flavor)
983 return pandas_sql._create_sql_schema(frame, name)
984
985
986 def read_frame(*args, **kwargs):
987 """DEPRECIATED - use read_sql
988 """
989 warnings.warn(
990 "read_frame is depreciated, use read_sql", DeprecationWarning)
991 return read_sql(*args, **kwargs)
992
993
994 def write_frame(*args, **kwargs):
995 """DEPRECIATED - use to_sql
996 """
997 warnings.warn("write_frame is depreciated, use to_sql", DeprecationWarning)
998 return to_sql(*args, **kwargs)
999
1000
1001 # Append wrapped function docstrings
1002 read_frame.__doc__ += read_sql.__doc__
1003 write_frame.__doc__ += to_sql.__doc__
1004
[end of pandas/io/sql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
ad1f47ddb16c383c7377d0f7a930d0a80c6fbfef
|
API: SQL legacy mode to_sql 'index' kwarg behaviour
A leftover from #6735. In this PR, multi-index support was added to the new `to_sql` and `read_table` functions based on sqlalchemy. However, I did not change anything in the legacy `to_sql` functions.
This has the following consequences for the `index` handling in legacy mode (https://github.com/pydata/pandas/blob/18bd0d64bf1fcdc7e86e743332dab29e9a155909/pandas/io/sql.py#L808):
- no multi-index support in legacy mode: depending on the `con` type (DBAPI2 connection or sqlalchemy engine), writing a multi-index dataframe will either work or raise an error.
- in 0.13.1 and earlier there was actually no support for writing the index (it was simply not written), so for `write_frame` this is effectively an **API change** in legacy mode, because writing the index is now set to True by default (`to_sql` did not exist yet); a minimal sketch of the new default follows below.
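To make the new default concrete, here is a minimal sketch of the legacy path (the in-memory sqlite connection and the table names are only illustrative; `to_sql`, `flavor` and `index` are as defined in `pandas/io/sql.py`):

```python
import sqlite3

import pandas as pd
from pandas.io import sql

df = pd.DataFrame({"a": [1, 2]}, index=pd.Index(["x", "y"], name="key"))
con = sqlite3.connect(":memory:")  # DBAPI2 connection, so the legacy code path

# new behaviour: the index is written to the table as a column by default
sql.to_sql(df, "demo", con, flavor="sqlite")

# the old behaviour (index not written) now has to be asked for explicitly
sql.to_sql(df, "demo_noindex", con, flavor="sqlite", index=False)
```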
We could also opt to remove this entirely from the legacy mode (leave it as it was). However, that is also somewhat complicated: because the keyword defaults to True, it is not easy to detect whether the user actually specified `index` in legacy mode (which we would need in order to warn that it is ignored). But it seems to me that we should either support it fully (with multi-index, as in the sqlalchemy-based mode) or not at all.
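If we did want to warn when the keyword is effectively ignored, one common workaround for that detection problem is a sentinel default. This is only a sketch of the idea (`_no_default` is a made-up name), not what the current code does:

```python
_no_default = object()  # sentinel object, distinguishable from True/False/None

def to_sql(frame, name, con, flavor='sqlite', if_exists='fail',
           index=_no_default, index_label=None):
    # tell "user explicitly passed index=..." apart from "index not passed"
    user_passed_index = index is not _no_default
    if not user_passed_index:
        index = True  # keep the documented default
    # legacy mode could then warn only when the user really asked for
    # something that cannot be honoured
    ...
```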
But maybe the more general question is: how do we see the 'legacy' mode? Just keep it for backwards compatibility? Or is it useful to have something that does not depend on sqlalchemy, and therefore also enhance it rather than only fix bugs?
@hayd @mangecoeur @danielballan
|
> How do we see the 'legacy'?
In the long term, I'd like to keep full-fledged support for connections generated by `sqlite3`. For light applications, I still favor those over sqlalchemy for code portability (e.g., to collaborators who might not have sqlalchemy).
So, I'd like to build MultiIndex support for legacy connections.
Hmm, that may indeed be a good division. It would mean deprecating the mysql flavor and keeping only the sqlite flavor.
I don't think it would be that difficult to add multi-index support to the legacy mode (a lot of the code can probably be reused; I will look at it).
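For illustration, this is the kind of call that legacy multi-index support would have to handle (again only a sketch with an in-memory sqlite connection; the idea is that each index level ends up as its own column):

```python
import sqlite3

import pandas as pd
from pandas.io import sql

idx = pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)],
                                names=["grp", "num"])
df = pd.DataFrame({"val": [1.0, 2.0, 3.0]}, index=idx)

con = sqlite3.connect(":memory:")  # DBAPI2 connection, so the legacy code path
sql.to_sql(df, "demo_mi", con, flavor="sqlite", index=True)
```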
|
2014-04-14T20:25:59Z
|
<patch>
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -436,10 +436,7 @@ def maybe_asscalar(self, i):
except AttributeError:
return i
- def insert(self):
- ins = self.insert_statement()
- data_list = []
-
+ def insert_data(self):
if self.index is not None:
temp = self.frame.copy()
temp.index.names = self.index
@@ -451,6 +448,12 @@ def insert(self):
else:
temp = self.frame
+ return temp
+
+ def insert(self):
+ ins = self.insert_statement()
+ data_list = []
+ temp = self.insert_data()
keys = temp.columns
for t in temp.itertuples():
@@ -785,7 +788,7 @@ def insert_statement(self):
wld = _SQL_SYMB[flv]['wld'] # wildcard char
if self.index is not None:
- safe_names.insert(0, self.index)
+ [safe_names.insert(0, idx) for idx in self.index[::-1]]
bracketed_names = [br_l + column + br_r for column in safe_names]
col_names = ','.join(bracketed_names)
@@ -796,26 +799,18 @@ def insert_statement(self):
def insert(self):
ins = self.insert_statement()
+ temp = self.insert_data()
+ data_list = []
+
+ for t in temp.itertuples():
+ data = tuple((self.maybe_asscalar(v) for v in t[1:]))
+ data_list.append(data)
+
cur = self.pd_sql.con.cursor()
- for r in self.frame.itertuples():
- data = [self.maybe_asscalar(v) for v in r[1:]]
- if self.index is not None:
- data.insert(0, self.maybe_asscalar(r[0]))
- cur.execute(ins, tuple(data))
+ cur.executemany(ins, data_list)
cur.close()
self.pd_sql.con.commit()
- def _index_name(self, index, index_label):
- if index is True:
- if self.frame.index.name is not None:
- return _safe_col_name(self.frame.index.name)
- else:
- return 'pandas_index'
- elif isinstance(index, string_types):
- return index
- else:
- return None
-
def _create_table_statement(self):
"Return a CREATE TABLE statement to suit the contents of a DataFrame."
@@ -824,8 +819,10 @@ def _create_table_statement(self):
column_types = [self._sql_type_name(typ) for typ in self.frame.dtypes]
if self.index is not None:
- safe_columns.insert(0, self.index)
- column_types.insert(0, self._sql_type_name(self.frame.index.dtype))
+ for i, idx_label in enumerate(self.index[::-1]):
+ safe_columns.insert(0, idx_label)
+ column_types.insert(0, self._sql_type_name(self.frame.index.get_level_values(i).dtype))
+
flv = self.pd_sql.flavor
br_l = _SQL_SYMB[flv]['br_l'] # left val quote char
@@ -935,15 +932,16 @@ def to_sql(self, frame, name, if_exists='fail', index=True,
----------
frame: DataFrame
name: name of SQL table
- flavor: {'sqlite', 'mysql', 'postgres'}, default 'sqlite'
+ flavor: {'sqlite', 'mysql'}, default 'sqlite'
if_exists: {'fail', 'replace', 'append'}, default 'fail'
fail: If table exists, do nothing.
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if does not exist.
- index_label : ignored (only used in sqlalchemy mode)
+
"""
table = PandasSQLTableLegacy(
- name, self, frame=frame, index=index, if_exists=if_exists)
+ name, self, frame=frame, index=index, if_exists=if_exists,
+ index_label=index_label)
table.insert()
def has_table(self, name):
@@ -991,13 +989,47 @@ def read_frame(*args, **kwargs):
return read_sql(*args, **kwargs)
-def write_frame(*args, **kwargs):
+def write_frame(frame, name, con, flavor='sqlite', if_exists='fail', **kwargs):
"""DEPRECIATED - use to_sql
+
+ Write records stored in a DataFrame to a SQL database.
+
+ Parameters
+ ----------
+ frame : DataFrame
+ name : string
+ con : DBAPI2 connection
+ flavor : {'sqlite', 'mysql'}, default 'sqlite'
+ The flavor of SQL to use.
+ if_exists : {'fail', 'replace', 'append'}, default 'fail'
+ - fail: If table exists, do nothing.
+ - replace: If table exists, drop it, recreate it, and insert data.
+ - append: If table exists, insert data. Create if does not exist.
+ index : boolean, default False
+ Write DataFrame index as a column
+
+ Notes
+ -----
+ This function is deprecated in favor of ``to_sql``. There are however
+ two differences:
+
+ - With ``to_sql`` the index is written to the sql database by default. To
+ keep the behaviour this function you need to specify ``index=False``.
+ - The new ``to_sql`` function supports sqlalchemy engines to work with
+ different sql flavors.
+
+ See also
+ --------
+ pandas.DataFrame.to_sql
+
"""
warnings.warn("write_frame is depreciated, use to_sql", DeprecationWarning)
- return to_sql(*args, **kwargs)
+
+ # for backwards compatibility, set index=False when not specified
+ index = kwargs.pop('index', False)
+ return to_sql(frame, name, con, flavor=flavor, if_exists=if_exists,
+ index=index, **kwargs)
# Append wrapped function docstrings
read_frame.__doc__ += read_sql.__doc__
-write_frame.__doc__ += to_sql.__doc__
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-23721
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make default xfail strict
Pytest lets you configure the default xfail strictness in `setup.cfg`:
```
xfail_strict=true
```
This issue is to
1. Make that change
2. Update tests that are currently XPASSing
- If always passing then remove the xfail entirely
- If only sometimes passing (flaky test, dependency, configuration, etc.) then mark that specific test as `strict=False` (see the sketch below).
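A minimal sketch of that escape hatch (the test name and reason are made up):

```python
import pytest

# with xfail_strict=true as the project default, an unexpected pass (XPASS)
# fails the suite; flaky cases can opt out per test with strict=False
@pytest.mark.xfail(reason="flaky: depends on optional dependency version",
                   strict=False)
def test_sometimes_passes():
    ...
```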
cc @jbrockmendel @jreback
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a></td>
31 </tr>
32 <tr>
33 <td>License</td>
34 <td>
35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
37 </a>
38 </td>
39 </tr>
40 <tr>
41 <td>Build Status</td>
42 <td>
43 <a href="https://travis-ci.org/pandas-dev/pandas">
44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
45 </a>
46 </td>
47 </tr>
48 <tr>
49 <td></td>
50 <td>
51 <a href="https://circleci.com/gh/pandas-dev/pandas">
52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
53 </a>
54 </td>
55 </tr>
56 <tr>
57 <td></td>
58 <td>
59 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
60 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
61 </a>
62 </td>
63 </tr>
64 <tr>
65 <td>Coverage</td>
66 <td>
67 <a href="https://codecov.io/gh/pandas-dev/pandas">
68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
69 </a>
70 </td>
71 </tr>
72 <tr>
73 <td>Downloads</td>
74 <td>
75 <a href="https://pandas.pydata.org">
76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
77 </a>
78 </td>
79 </tr>
80 <tr>
81 <td>Gitter</td>
82 <td>
83 <a href="https://gitter.im/pydata/pandas">
84 <img src="https://badges.gitter.im/Join%20Chat.svg"
85 </a>
86 </td>
87 </tr>
88 </table>
89
90
91
92 ## What is it?
93
94 **pandas** is a Python package providing fast, flexible, and expressive data
95 structures designed to make working with "relational" or "labeled" data both
96 easy and intuitive. It aims to be the fundamental high-level building block for
97 doing practical, **real world** data analysis in Python. Additionally, it has
98 the broader goal of becoming **the most powerful and flexible open source data
99 analysis / manipulation tool available in any language**. It is already well on
100 its way towards this goal.
101
102 ## Main Features
103 Here are just a few of the things that pandas does well:
104
105 - Easy handling of [**missing data**][missing-data] (represented as
106 `NaN`) in floating point as well as non-floating point data
107 - Size mutability: columns can be [**inserted and
108 deleted**][insertion-deletion] from DataFrame and higher dimensional
109 objects
110 - Automatic and explicit [**data alignment**][alignment]: objects can
111 be explicitly aligned to a set of labels, or the user can simply
112 ignore the labels and let `Series`, `DataFrame`, etc. automatically
113 align the data for you in computations
114 - Powerful, flexible [**group by**][groupby] functionality to perform
115 split-apply-combine operations on data sets, for both aggregating
116 and transforming data
117 - Make it [**easy to convert**][conversion] ragged,
118 differently-indexed data in other Python and NumPy data structures
119 into DataFrame objects
120 - Intelligent label-based [**slicing**][slicing], [**fancy
121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
122 large data sets
123 - Intuitive [**merging**][merging] and [**joining**][joining] data
124 sets
125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
126 data sets
127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
128 labels per tick)
129 - Robust IO tools for loading data from [**flat files**][flat-files]
130 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
132 - [**Time series**][timeseries]-specific functionality: date range
133 generation and frequency conversion, moving window statistics,
134 moving window linear regressions, date shifting and lagging, etc.
135
136
137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
155
156 ## Where to get it
157 The source code is currently hosted on GitHub at:
158 https://github.com/pandas-dev/pandas
159
160 Binary installers for the latest released version are available at the [Python
161 package index](https://pypi.org/project/pandas) and on conda.
162
163 ```sh
164 # conda
165 conda install pandas
166 ```
167
168 ```sh
169 # or PyPI
170 pip install pandas
171 ```
172
173 ## Dependencies
174 - [NumPy](https://www.numpy.org): 1.12.0 or higher
175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
177
178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
179 for recommended and optional dependencies.
180
181 ## Installation from sources
182 To install pandas from source you need Cython in addition to the normal
183 dependencies above. Cython can be installed from pypi:
184
185 ```sh
186 pip install cython
187 ```
188
189 In the `pandas` directory (same one where you found this file after
190 cloning the git repo), execute:
191
192 ```sh
193 python setup.py install
194 ```
195
196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
197
198 ```sh
199 python setup.py develop
200 ```
201
202 Alternatively, you can use `pip` if you want all the dependencies pulled
203 in automatically (the `-e` option is for installing it in [development
204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
205
206 ```sh
207 pip install -e .
208 ```
209
210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
211
212 ## License
213 [BSD 3](LICENSE)
214
215 ## Documentation
216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
217
218 ## Background
219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
220 has been under active development since then.
221
222 ## Getting Help
223
224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
226
227 ## Discussion and Development
228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
229
230 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
231
232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
233
234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
235
236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
237
238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
239
240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
241
242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
243
[end of README.md]
[start of pandas/conftest.py]
1 from datetime import date, time, timedelta
2 from decimal import Decimal
3 import importlib
4 import os
5
6 from dateutil.tz import tzlocal, tzutc
7 import hypothesis
8 from hypothesis import strategies as st
9 import numpy as np
10 import pytest
11 from pytz import FixedOffset, utc
12
13 from pandas.compat import PY3, u
14 import pandas.util._test_decorators as td
15
16 import pandas as pd
17
18 hypothesis.settings.register_profile(
19 "ci",
20 # Hypothesis timing checks are tuned for scalars by default, so we bump
21 # them from 200ms to 500ms per test case as the global default. If this
22 # is too short for a specific test, (a) try to make it faster, and (b)
23 # if it really is slow add `@settings(deadline=...)` with a working value,
24 # or `deadline=None` to entirely disable timeouts for that test.
25 deadline=500,
26 timeout=hypothesis.unlimited,
27 suppress_health_check=(hypothesis.HealthCheck.too_slow,)
28 )
29 hypothesis.settings.load_profile("ci")
30
31
32 def pytest_addoption(parser):
33 parser.addoption("--skip-slow", action="store_true",
34 help="skip slow tests")
35 parser.addoption("--skip-network", action="store_true",
36 help="skip network tests")
37 parser.addoption("--run-high-memory", action="store_true",
38 help="run high memory tests")
39 parser.addoption("--only-slow", action="store_true",
40 help="run only slow tests")
41 parser.addoption("--strict-data-files", action="store_true",
42 help="Fail if a test is skipped for missing data file.")
43
44
45 def pytest_runtest_setup(item):
46 if 'slow' in item.keywords and item.config.getoption("--skip-slow"):
47 pytest.skip("skipping due to --skip-slow")
48
49 if 'slow' not in item.keywords and item.config.getoption("--only-slow"):
50 pytest.skip("skipping due to --only-slow")
51
52 if 'network' in item.keywords and item.config.getoption("--skip-network"):
53 pytest.skip("skipping due to --skip-network")
54
55 if 'high_memory' in item.keywords and not item.config.getoption(
56 "--run-high-memory"):
57 pytest.skip(
58 "skipping high memory test since --run-high-memory was not set")
59
60
61 # Configurations for all tests and all test modules
62
63 @pytest.fixture(autouse=True)
64 def configure_tests():
65 pd.set_option('chained_assignment', 'raise')
66
67
68 # For running doctests: make np and pd names available
69
70 @pytest.fixture(autouse=True)
71 def add_imports(doctest_namespace):
72 doctest_namespace['np'] = np
73 doctest_namespace['pd'] = pd
74
75
76 @pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
77 def spmatrix(request):
78 from scipy import sparse
79 return getattr(sparse, request.param + '_matrix')
80
81
82 @pytest.fixture(params=[0, 1, 'index', 'columns'],
83 ids=lambda x: "axis {!r}".format(x))
84 def axis(request):
85 """
86 Fixture for returning the axis numbers of a DataFrame.
87 """
88 return request.param
89
90
91 axis_frame = axis
92
93
94 @pytest.fixture(params=[0, 'index'], ids=lambda x: "axis {!r}".format(x))
95 def axis_series(request):
96 """
97 Fixture for returning the axis numbers of a Series.
98 """
99 return request.param
100
101
102 @pytest.fixture
103 def ip():
104 """
105 Get an instance of IPython.InteractiveShell.
106
107 Will raise a skip if IPython is not installed.
108 """
109
110 pytest.importorskip('IPython', minversion="6.0.0")
111 from IPython.core.interactiveshell import InteractiveShell
112 return InteractiveShell()
113
114
115 @pytest.fixture(params=[True, False, None])
116 def observed(request):
117 """ pass in the observed keyword to groupby for [True, False]
118 This indicates whether categoricals should return values for
119 values which are not in the grouper [False / None], or only values which
 120     appear in the grouper [True]. [None] is supported for future compatibility
121 if we decide to change the default (and would need to warn if this
122 parameter is not passed)"""
123 return request.param
124
125
126 _all_arithmetic_operators = ['__add__', '__radd__',
127 '__sub__', '__rsub__',
128 '__mul__', '__rmul__',
129 '__floordiv__', '__rfloordiv__',
130 '__truediv__', '__rtruediv__',
131 '__pow__', '__rpow__',
132 '__mod__', '__rmod__']
133 if not PY3:
134 _all_arithmetic_operators.extend(['__div__', '__rdiv__'])
135
136
137 @pytest.fixture(params=_all_arithmetic_operators)
138 def all_arithmetic_operators(request):
139 """
140 Fixture for dunder names for common arithmetic operations
141 """
142 return request.param
143
144
145 _all_numeric_reductions = ['sum', 'max', 'min',
146 'mean', 'prod', 'std', 'var', 'median',
147 'kurt', 'skew']
148
149
150 @pytest.fixture(params=_all_numeric_reductions)
151 def all_numeric_reductions(request):
152 """
153 Fixture for numeric reduction names
154 """
155 return request.param
156
157
158 _all_boolean_reductions = ['all', 'any']
159
160
161 @pytest.fixture(params=_all_boolean_reductions)
162 def all_boolean_reductions(request):
163 """
164 Fixture for boolean reduction names
165 """
166 return request.param
167
168
169 _cython_table = pd.core.base.SelectionMixin._cython_table.items()
170
171
172 @pytest.fixture(params=list(_cython_table))
173 def cython_table_items(request):
174 return request.param
175
176
177 def _get_cython_table_params(ndframe, func_names_and_expected):
178 """combine frame, functions from SelectionMixin._cython_table
179 keys and expected result.
180
181 Parameters
182 ----------
183 ndframe : DataFrame or Series
 184     func_names_and_expected : Sequence of (func_name, expected) pairs
 185         The first item of each pair is the name of an NDFrame method ('sum', 'prod', etc.).
 186         The second item is the expected return value.
187
188 Returns
189 -------
190 results : list
191 List of three items (DataFrame, function, expected result)
192 """
193 results = []
194 for func_name, expected in func_names_and_expected:
195 results.append((ndframe, func_name, expected))
196 results += [(ndframe, func, expected) for func, name in _cython_table
197 if name == func_name]
198 return results
199
200
201 @pytest.fixture(params=['__eq__', '__ne__', '__le__',
202 '__lt__', '__ge__', '__gt__'])
203 def all_compare_operators(request):
204 """
205 Fixture for dunder names for common compare operations
206
207 * >=
208 * >
209 * ==
210 * !=
211 * <
212 * <=
213 """
214 return request.param
215
216
217 @pytest.fixture(params=[None, 'gzip', 'bz2', 'zip',
218 pytest.param('xz', marks=td.skip_if_no_lzma)])
219 def compression(request):
220 """
221 Fixture for trying common compression types in compression tests
222 """
223 return request.param
224
225
226 @pytest.fixture(params=['gzip', 'bz2', 'zip',
227 pytest.param('xz', marks=td.skip_if_no_lzma)])
228 def compression_only(request):
229 """
230 Fixture for trying common compression types in compression tests excluding
231 uncompressed case
232 """
233 return request.param
234
235
236 @pytest.fixture(params=[True, False])
237 def writable(request):
238 """
 239     Fixture for whether an array should be writable
240 """
241 return request.param
242
243
244 @pytest.fixture(scope='module')
245 def datetime_tz_utc():
246 from datetime import timezone
247 return timezone.utc
248
249
250 utc_objs = ['utc', 'dateutil/UTC', utc, tzutc()]
251 if PY3:
252 from datetime import timezone
253 utc_objs.append(timezone.utc)
254
255
256 @pytest.fixture(params=utc_objs)
257 def utc_fixture(request):
258 """
259 Fixture to provide variants of UTC timezone strings and tzinfo objects
260 """
261 return request.param
262
263
264 @pytest.fixture(params=['inner', 'outer', 'left', 'right'])
265 def join_type(request):
266 """
267 Fixture for trying all types of join operations
268 """
269 return request.param
270
271
272 @pytest.fixture
273 def datapath(request):
274 """Get the path to a data file.
275
276 Parameters
277 ----------
278 path : str
279 Path to the file, relative to ``pandas/tests/``
280
281 Returns
282 -------
283 path : path including ``pandas/tests``.
284
285 Raises
286 ------
287 ValueError
288 If the path doesn't exist and the --strict-data-files option is set.
289 """
290 BASE_PATH = os.path.join(os.path.dirname(__file__), 'tests')
291
292 def deco(*args):
293 path = os.path.join(BASE_PATH, *args)
294 if not os.path.exists(path):
295 if request.config.getoption("--strict-data-files"):
296 msg = "Could not find file {} and --strict-data-files is set."
297 raise ValueError(msg.format(path))
298 else:
299 msg = "Could not find {}."
300 pytest.skip(msg.format(path))
301 return path
302 return deco
303
304
305 @pytest.fixture
306 def iris(datapath):
307 """The iris dataset as a DataFrame."""
308 return pd.read_csv(datapath('data', 'iris.csv'))
309
310
311 @pytest.fixture(params=['nlargest', 'nsmallest'])
312 def nselect_method(request):
313 """
314 Fixture for trying all nselect methods
315 """
316 return request.param
317
318
319 @pytest.fixture(params=['left', 'right', 'both', 'neither'])
320 def closed(request):
321 """
322 Fixture for trying all interval closed parameters
323 """
324 return request.param
325
326
327 @pytest.fixture(params=['left', 'right', 'both', 'neither'])
328 def other_closed(request):
329 """
330 Secondary closed fixture to allow parametrizing over all pairs of closed
331 """
332 return request.param
333
334
335 @pytest.fixture(params=[None, np.nan, pd.NaT, float('nan'), np.float('NaN')])
336 def nulls_fixture(request):
337 """
338 Fixture for each null type in pandas
339 """
340 return request.param
341
342
343 nulls_fixture2 = nulls_fixture # Generate cartesian product of nulls_fixture
344
345
346 @pytest.fixture(params=[None, np.nan, pd.NaT])
347 def unique_nulls_fixture(request):
348 """
349 Fixture for each null type in pandas, each null type exactly once
350 """
351 return request.param
352
353
354 # Generate cartesian product of unique_nulls_fixture:
355 unique_nulls_fixture2 = unique_nulls_fixture
356
357
358 TIMEZONES = [None, 'UTC', 'US/Eastern', 'Asia/Tokyo', 'dateutil/US/Pacific',
359 'dateutil/Asia/Singapore', tzutc(), tzlocal(), FixedOffset(300),
360 FixedOffset(0), FixedOffset(-300)]
361
362
363 @td.parametrize_fixture_doc(str(TIMEZONES))
364 @pytest.fixture(params=TIMEZONES)
365 def tz_naive_fixture(request):
366 """
367 Fixture for trying timezones including default (None): {0}
368 """
369 return request.param
370
371
372 @td.parametrize_fixture_doc(str(TIMEZONES[1:]))
373 @pytest.fixture(params=TIMEZONES[1:])
374 def tz_aware_fixture(request):
375 """
376 Fixture for trying explicit timezones: {0}
377 """
378 return request.param
379
380
381 UNSIGNED_INT_DTYPES = ["uint8", "uint16", "uint32", "uint64"]
382 SIGNED_INT_DTYPES = [int, "int8", "int16", "int32", "int64"]
383 ALL_INT_DTYPES = UNSIGNED_INT_DTYPES + SIGNED_INT_DTYPES
384
385 FLOAT_DTYPES = [float, "float32", "float64"]
386 COMPLEX_DTYPES = [complex, "complex64", "complex128"]
387 STRING_DTYPES = [str, 'str', 'U']
388
389 DATETIME_DTYPES = ['datetime64[ns]', 'M8[ns]']
390 TIMEDELTA_DTYPES = ['timedelta64[ns]', 'm8[ns]']
391
392 BOOL_DTYPES = [bool, 'bool']
393 BYTES_DTYPES = [bytes, 'bytes']
394 OBJECT_DTYPES = [object, 'object']
395
396 ALL_REAL_DTYPES = FLOAT_DTYPES + ALL_INT_DTYPES
397 ALL_NUMPY_DTYPES = (ALL_REAL_DTYPES + COMPLEX_DTYPES + STRING_DTYPES
398 + DATETIME_DTYPES + TIMEDELTA_DTYPES + BOOL_DTYPES
399 + OBJECT_DTYPES + BYTES_DTYPES * PY3) # bytes only for PY3
400
401
402 @pytest.fixture(params=STRING_DTYPES)
403 def string_dtype(request):
404 """Parametrized fixture for string dtypes.
405
406 * str
407 * 'str'
408 * 'U'
409 """
410 return request.param
411
412
413 @pytest.fixture(params=FLOAT_DTYPES)
414 def float_dtype(request):
415 """
416 Parameterized fixture for float dtypes.
417
418 * float
419 * 'float32'
420 * 'float64'
421 """
422
423 return request.param
424
425
426 @pytest.fixture(params=COMPLEX_DTYPES)
427 def complex_dtype(request):
428 """
429 Parameterized fixture for complex dtypes.
430
431 * complex
432 * 'complex64'
433 * 'complex128'
434 """
435
436 return request.param
437
438
439 @pytest.fixture(params=SIGNED_INT_DTYPES)
440 def sint_dtype(request):
441 """
442 Parameterized fixture for signed integer dtypes.
443
444 * int
445 * 'int8'
446 * 'int16'
447 * 'int32'
448 * 'int64'
449 """
450
451 return request.param
452
453
454 @pytest.fixture(params=UNSIGNED_INT_DTYPES)
455 def uint_dtype(request):
456 """
457 Parameterized fixture for unsigned integer dtypes.
458
459 * 'uint8'
460 * 'uint16'
461 * 'uint32'
462 * 'uint64'
463 """
464
465 return request.param
466
467
468 @pytest.fixture(params=ALL_INT_DTYPES)
469 def any_int_dtype(request):
470 """
471 Parameterized fixture for any integer dtype.
472
473 * int
474 * 'int8'
475 * 'uint8'
476 * 'int16'
477 * 'uint16'
478 * 'int32'
479 * 'uint32'
480 * 'int64'
481 * 'uint64'
482 """
483
484 return request.param
485
486
487 @pytest.fixture(params=ALL_REAL_DTYPES)
488 def any_real_dtype(request):
489 """
490 Parameterized fixture for any (purely) real numeric dtype.
491
492 * int
493 * 'int8'
494 * 'uint8'
495 * 'int16'
496 * 'uint16'
497 * 'int32'
498 * 'uint32'
499 * 'int64'
500 * 'uint64'
501 * float
502 * 'float32'
503 * 'float64'
504 """
505
506 return request.param
507
508
509 @pytest.fixture(params=ALL_NUMPY_DTYPES)
510 def any_numpy_dtype(request):
511 """
512 Parameterized fixture for all numpy dtypes.
513
514 * bool
515 * 'bool'
516 * int
517 * 'int8'
518 * 'uint8'
519 * 'int16'
520 * 'uint16'
521 * 'int32'
522 * 'uint32'
523 * 'int64'
524 * 'uint64'
525 * float
526 * 'float32'
527 * 'float64'
528 * complex
529 * 'complex64'
530 * 'complex128'
531 * str
532 * 'str'
533 * 'U'
534 * bytes
535 * 'bytes'
536 * 'datetime64[ns]'
537 * 'M8[ns]'
538 * 'timedelta64[ns]'
539 * 'm8[ns]'
540 * object
541 * 'object'
542 """
543
544 return request.param
545
546
547 # categoricals are handled separately
548 _any_skipna_inferred_dtype = [
549 ('string', ['a', np.nan, 'c']),
550 ('unicode' if not PY3 else 'string', [u('a'), np.nan, u('c')]),
551 ('bytes' if PY3 else 'string', [b'a', np.nan, b'c']),
552 ('empty', [np.nan, np.nan, np.nan]),
553 ('empty', []),
554 ('mixed-integer', ['a', np.nan, 2]),
555 ('mixed', ['a', np.nan, 2.0]),
556 ('floating', [1.0, np.nan, 2.0]),
557 ('integer', [1, np.nan, 2]),
558 ('mixed-integer-float', [1, np.nan, 2.0]),
559 ('decimal', [Decimal(1), np.nan, Decimal(2)]),
560 ('boolean', [True, np.nan, False]),
561 ('datetime64', [np.datetime64('2013-01-01'), np.nan,
562 np.datetime64('2018-01-01')]),
563 ('datetime', [pd.Timestamp('20130101'), np.nan, pd.Timestamp('20180101')]),
564 ('date', [date(2013, 1, 1), np.nan, date(2018, 1, 1)]),
565 # The following two dtypes are commented out due to GH 23554
566 # ('complex', [1 + 1j, np.nan, 2 + 2j]),
567 # ('timedelta64', [np.timedelta64(1, 'D'),
568 # np.nan, np.timedelta64(2, 'D')]),
569 ('timedelta', [timedelta(1), np.nan, timedelta(2)]),
570 ('time', [time(1), np.nan, time(2)]),
571 ('period', [pd.Period(2013), pd.NaT, pd.Period(2018)]),
572 ('interval', [pd.Interval(0, 1), np.nan, pd.Interval(0, 2)])]
573 ids, _ = zip(*_any_skipna_inferred_dtype) # use inferred type as fixture-id
574
575
576 @pytest.fixture(params=_any_skipna_inferred_dtype, ids=ids)
577 def any_skipna_inferred_dtype(request):
578 """
579 Fixture for all inferred dtypes from _libs.lib.infer_dtype
580
581 The covered (inferred) types are:
582 * 'string'
583 * 'unicode' (if PY2)
584 * 'empty'
585 * 'bytes' (if PY3)
586 * 'mixed'
587 * 'mixed-integer'
588 * 'mixed-integer-float'
589 * 'floating'
590 * 'integer'
591 * 'decimal'
592 * 'boolean'
593 * 'datetime64'
594 * 'datetime'
595 * 'date'
596 * 'timedelta'
597 * 'time'
598 * 'period'
599 * 'interval'
600
601 Returns
602 -------
603 inferred_dtype : str
604 The string for the inferred dtype from _libs.lib.infer_dtype
605 values : np.ndarray
606 An array of object dtype that will be inferred to have
607 `inferred_dtype`
608
609 Examples
610 --------
611 >>> import pandas._libs.lib as lib
612 >>>
613 >>> def test_something(any_skipna_inferred_dtype):
614 ... inferred_dtype, values = any_skipna_inferred_dtype
615 ... # will pass
616 ... assert lib.infer_dtype(values, skipna=True) == inferred_dtype
617 """
618 inferred_dtype, values = request.param
619 values = np.array(values, dtype=object) # object dtype to avoid casting
620
621 # correctness of inference tested in tests/dtypes/test_inference.py
622 return inferred_dtype, values
623
624
625 @pytest.fixture
626 def mock():
627 """
628 Fixture providing the 'mock' module.
629
630 Uses 'unittest.mock' for Python 3. Attempts to import the 3rd party 'mock'
631 package for Python 2, skipping if not present.
632 """
633 if PY3:
634 return importlib.import_module("unittest.mock")
635 else:
636 return pytest.importorskip("mock")
637
638
639 # ----------------------------------------------------------------
640 # Global setup for tests using Hypothesis
641
642
643 # Registering these strategies makes them globally available via st.from_type,
 644 # which is used for offsets in tests/tseries/offsets/test_offsets_properties.py
645 for name in 'MonthBegin MonthEnd BMonthBegin BMonthEnd'.split():
646 cls = getattr(pd.tseries.offsets, name)
647 st.register_type_strategy(cls, st.builds(
648 cls,
649 n=st.integers(-99, 99),
650 normalize=st.booleans(),
651 ))
652
653 for name in 'YearBegin YearEnd BYearBegin BYearEnd'.split():
654 cls = getattr(pd.tseries.offsets, name)
655 st.register_type_strategy(cls, st.builds(
656 cls,
657 n=st.integers(-5, 5),
658 normalize=st.booleans(),
659 month=st.integers(min_value=1, max_value=12),
660 ))
661
662 for name in 'QuarterBegin QuarterEnd BQuarterBegin BQuarterEnd'.split():
663 cls = getattr(pd.tseries.offsets, name)
664 st.register_type_strategy(cls, st.builds(
665 cls,
666 n=st.integers(-24, 24),
667 normalize=st.booleans(),
668 startingMonth=st.integers(min_value=1, max_value=12)
669 ))
670
[end of pandas/conftest.py]
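A side note on the fixture-aliasing pattern in the file above (`nulls_fixture2 = nulls_fixture` and `unique_nulls_fixture2 = unique_nulls_fixture`): pytest treats the alias as an independent parametrized fixture, so a test that requests both names runs once for every pair of null values. A minimal illustrative test, not part of conftest.py, could look like this:

```python
import pandas as pd


def test_null_pairs_are_missing(nulls_fixture, nulls_fixture2):
    # Parametrized over the cartesian product of the five null-like values
    # (5 x 5 = 25 cases); each value in the pair should register as missing.
    assert pd.isna(nulls_fixture)
    assert pd.isna(nulls_fixture2)
```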
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
b7bdf7cd24ed719e676c1eda50f486c43644ddca
|
Make default xfail strict
Pytest lets you configure the default xfail strictness in `setup.cfg`:
```
xfail_strict=true
```
This issue is to
1. Make that change
2. Update tests that are currently XPASSing
- If always passing then remove the xfail entirely
- If only sometimes passing (flaky test, dependency, configuration, etc.) then mark that specific test as `strict=False`.
cc @jbrockmendel @jreback
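For the flaky cases, the per-test opt-out mentioned above uses the `strict` argument of the xfail marker. A minimal sketch (the test name and reason here are made up for illustration):

```python
import pytest


@pytest.mark.xfail(reason="flaky: passes only on some platforms", strict=False)
def test_sometimes_passes():
    pass
```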
|
Hi, I am a beginner looking to contribute to pandas. Would like to learn a bit from this issue, can I take this up?
Sure.
I would run something like `pytest pandas -ra` which should print out all the tests that xfail and xpass. From there, you can go to the individual tests that are xpass and figure out what needs to be done.
Also, there look to be ~100 xfail tests (not sure how many are not strict). If there are too many then feel free to do fixes in batches and just fix a subset of the files.
Ok, sounds good. Will check it out and get back to you soon.
If this issue is not yet resolved can I work on this
|
2018-11-15T18:03:36Z
|
<patch>
diff --git a/setup.cfg b/setup.cfg
--- a/setup.cfg
+++ b/setup.cfg
@@ -106,6 +106,7 @@ markers =
clipboard: mark a pd.read_clipboard test
doctest_optionflags = NORMALIZE_WHITESPACE IGNORE_EXCEPTION_DETAIL
addopts = --strict-data-files
+xfail_strict = True
[coverage:run]
branch = False
</patch>
|
[]
|
[]
| |||
conda__conda-6662
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Conda remove doesn't update environments.txt
conda 4.4.6
I'm using `conda remove -p (envPath) --all -y`. The folder gets deleted but the environments.txt file is not updated.
</issue>
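A quick way to see the stale entry described in the issue is to scan the registry file for prefixes whose directories no longer exist. This is only an illustrative sketch; it assumes the registry lives at `~/.conda/environments.txt` as in conda 4.4.x and is not part of the repository:

```python
import os

# environments.txt is a newline-separated list of environment prefixes.
registry = os.path.expanduser("~/.conda/environments.txt")
with open(registry) as fh:
    prefixes = [line.strip() for line in fh if line.strip()]

# After `conda remove -p <envPath> --all -y`, the removed prefix is reportedly
# still listed even though its directory is gone.
print([p for p in prefixes if not os.path.isdir(p)])
```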
<code>
[start of README.rst]
1 .. NOTE: This file serves both as the README on GitHub and the index.html for
2 conda.pydata.org. If you update this file, be sure to cd to the web
3 directory and run ``make html; make live``
4
5 .. image:: https://s3.amazonaws.com/conda-dev/conda_logo.svg
6 :alt: Conda Logo
7
8 ----------------------------------------
9
10 .. image:: https://img.shields.io/circleci/project/github/conda/conda/4.4.x.svg?maxAge=900&label=Unix
11 :target: https://circleci.com/gh/conda/workflows/conda/tree/4.4.x
12 :alt: Unix tests (CircleCI)
13
14 .. image:: https://img.shields.io/appveyor/ci/ContinuumAnalyticsFOSS/conda/4.4.x.svg?maxAge=900&label=Windows
15 :target: https://ci.appveyor.com/project/ContinuumAnalyticsFOSS/conda
16 :alt: Windows tests (Appveyor)
17
18 .. image:: https://img.shields.io/codecov/c/github/conda/conda/4.4.x.svg?label=coverage
19 :alt: Codecov Status
20 :target: https://codecov.io/gh/conda/conda/branch/4.4.x
21
22 .. image:: https://img.shields.io/github/release/conda/conda.svg
23 :alt: latest release version
24 :target: https://github.com/conda/conda/releases
25
26 |
27
28 .. image:: https://s3.amazonaws.com/conda-dev/conda-announce-signup-button.svg
 29    :alt: Join the Conda Announcement List
30 :target: http://conda.pydata.org/docs/announcements.html
31
32 |
33
34 Conda is a cross-platform, language-agnostic binary package manager. It is the
35 package manager used by `Anaconda
36 <http://docs.continuum.io/anaconda/index.html>`_ installations, but it may be
37 used for other systems as well. Conda makes environments first-class
38 citizens, making it easy to create independent environments even for C
39 libraries. Conda is written entirely in Python, and is BSD licensed open
40 source.
41
42 Conda is enhanced by organizations, tools, and repositories created and managed by
43 the amazing members of the conda community. Some of them can be found
44 `here <https://github.com/conda/conda/wiki/Conda-Community>`_.
45
46
47 Installation
48 ------------
49
50 Conda is a part of the `Anaconda distribution <https://store.continuum.io/cshop/anaconda/>`_. You can also download a
51 minimal installation that only includes conda and its dependencies, called
52 `Miniconda <http://conda.pydata.org/miniconda.html>`_.
53
54
55 Getting Started
56 ---------------
57
58 If you install Anaconda, you will already have hundreds of packages
59 installed. You can see what packages are installed by running
60
61 .. code-block:: bash
62
63 $ conda list
64
65 to see all the packages that are available, use
66
67 .. code-block:: bash
68
69 $ conda search
70
71 and to install a package, use
72
73 .. code-block:: bash
74
75 $ conda install <package-name>
76
77
78 The real power of conda comes from its ability to manage environments. In
79 conda, an environment can be thought of as a completely separate installation.
80 Conda installs packages into environments efficiently using `hard links
81 <http://en.wikipedia.org/wiki/Hard_links>`_ by default when it is possible, so
82 environments are space efficient, and take seconds to create.
83
 84 The default environment, which ``conda`` itself is installed into, is called
85 ``base``. To create another environment, use the ``conda create``
86 command. For instance, to create an environment with the IPython notebook and
87 NumPy 1.6, which is older than the version that comes with Anaconda by
88 default, you would run
89
90 .. code-block:: bash
91
92 $ conda create -n numpy16 ipython-notebook numpy=1.6
93
94 This creates an environment called ``numpy16`` with the latest version of
95 the IPython notebook, NumPy 1.6, and their dependencies.
96
 97 We can now activate this environment. To do so, use
98
99 .. code-block:: bash
100
101 # On Linux and Mac OS X
102 $ source activate numpy16
103
104 # On Windows
105 > activate numpy16
106
107 This puts the bin directory of the ``numpy16`` environment in the front of the
108 ``PATH``, and sets it as the default environment for all subsequent conda commands.
109
110 To go back to the base environment, use
111
112 .. code-block:: bash
113
114 # On Linux and Mac OS X
115 $ source deactivate
116
117 # On Windows
118 > deactivate
119
120
121 Building Your Own Packages
122 --------------------------
123
124 You can easily build your own packages for conda, and upload them
125 to `anaconda.org <https://anaconda.org>`_, a free service for hosting
126 packages for conda, as well as other package managers.
127 To build a package, create a recipe.
128 See http://github.com/conda/conda-recipes for many example recipes, and
129 http://docs.continuum.io/conda/build.html for documentation on how to build
130 recipes.
131
132 To upload to anaconda.org, create an account. Then, install the
133 anaconda-client and login
134
135 .. code-block:: bash
136
137 $ conda install anaconda-client
138 $ anaconda login
139
140 Then, after you build your recipe
141
142 .. code-block:: bash
143
144 $ conda build <recipe-dir>
145
146 you will be prompted to upload to anaconda.org.
147
 148 To add your anaconda.org channel, or channels of others, to conda so
149 that ``conda install`` will find and install their packages, run
150
151 .. code-block:: bash
152
153 $ conda config --add channels https://conda.anaconda.org/username
154
155 (replacing ``username`` with the user name of the person whose channel you want
156 to add).
157
158 Getting Help
159 ------------
160
161 The documentation for conda is at http://conda.pydata.org/docs/. You can
162 subscribe to the `conda mailing list
163 <https://groups.google.com/a/continuum.io/forum/#!forum/conda>`_. The source
164 code and issue tracker for conda are on `GitHub <https://github.com/conda/conda>`_.
165
166 Contributing
167 ------------
168
169 Contributions to conda are welcome. Just fork the GitHub repository and send a
170 pull request.
171
172 To develop on conda, the easiest way is to use a development build. This can be
173 accomplished as follows:
174
175 * clone the conda git repository to a computer with conda already installed
176 * navigate to the root directory of the git clone
177 * run ``$CONDA/bin/python setup.py develop`` where ``$CONDA`` is the path to your
178 miniconda installation
179
 180 Note that a development build requires git to be installed.
181
182 To undo this, run ``$CONDA/bin/python setup.py develop -u``. Note that if you
183 used a python other than ``$CONDA/bin/python`` to install, you may have to manually
184 delete the conda executable. For example, on OS X, if you use a homebrew python
185 located at ``/usr/local/bin/python``, then you'll need to ``rm /usr/local/bin/conda``
186 so that ``which -a conda`` lists first your miniconda installation.
187
188 If you are worried about breaking your conda installation, you can install a
189 separate instance of `Miniconda <http://conda.pydata.org/miniconda.html>`_ and
190 work off it. This is also the only way to test conda in both Python 2 and
191 Python 3, as conda can only be installed into a base environment.
192
193 To run the tests, set up a testing environment by running
194
195 * ``$CONDA/bin/python -m pip install -r utils/requirements-test.txt``.
196 * ``$CONDA/bin/python utils/setup-testing.py develop``
197
198 and then running ``py.test`` in the conda directory. You can also run tests using the
199 Makefile by running ``make unit``, ``make smoketest`` (a single integration test), or
200 ``make integration``. The tests are also run by various CI systems when you make a
201 pull request.
202
[end of README.rst]
[start of conda/cli/conda_argparse.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from argparse import (ArgumentParser as ArgumentParserBase, RawDescriptionHelpFormatter, SUPPRESS,
5 _CountAction, _HelpAction)
6 from logging import getLogger
7 import os
8 from os.path import abspath, expanduser, join
9 from subprocess import Popen
10 import sys
11 from textwrap import dedent
12
13 from .. import __version__
14 from ..base.constants import CONDA_HOMEPAGE_URL
15 from ..common.constants import NULL
16
17 log = getLogger(__name__)
18
19 # duplicated code in the interest of import efficiency
20 on_win = bool(sys.platform == "win32")
21 user_rc_path = abspath(expanduser('~/.condarc'))
22 sys_rc_path = join(sys.prefix, '.condarc')
23
24
25 def generate_parser():
26 p = ArgumentParser(
27 description='conda is a tool for managing and deploying applications,'
28 ' environments and packages.',
29 )
30 p.add_argument(
31 '-V', '--version',
32 action='version',
33 version='conda %s' % __version__,
34 help="Show the conda version number and exit."
35 )
36 p.add_argument(
37 "--debug",
38 action="store_true",
39 help=SUPPRESS,
40 )
41 p.add_argument(
42 "--json",
43 action="store_true",
44 help=SUPPRESS,
45 )
46 sub_parsers = p.add_subparsers(
47 metavar='command',
48 dest='cmd',
49 )
50 # http://bugs.python.org/issue9253
51 # http://stackoverflow.com/a/18283730/1599393
52 sub_parsers.required = True
53
54 configure_parser_clean(sub_parsers)
55 configure_parser_config(sub_parsers)
56 configure_parser_create(sub_parsers)
57 configure_parser_help(sub_parsers)
58 configure_parser_info(sub_parsers)
59 configure_parser_install(sub_parsers)
60 configure_parser_list(sub_parsers)
61 configure_parser_package(sub_parsers)
62 configure_parser_remove(sub_parsers)
63 configure_parser_remove(sub_parsers, name='uninstall')
64 configure_parser_search(sub_parsers)
65 configure_parser_update(sub_parsers)
66 configure_parser_update(sub_parsers, name='upgrade')
67
68 return p
69
70
71 def do_call(args, parser):
72 relative_mod, func_name = args.func.rsplit('.', 1)
73 # func_name should always be 'execute'
74 from importlib import import_module
75 module = import_module(relative_mod, __name__.rsplit('.', 1)[0])
76 exit_code = getattr(module, func_name)(args, parser)
77 return exit_code
78
79
80 class ArgumentParser(ArgumentParserBase):
81 def __init__(self, *args, **kwargs):
82 if not kwargs.get('formatter_class'):
83 kwargs['formatter_class'] = RawDescriptionHelpFormatter
84 if 'add_help' not in kwargs:
85 add_custom_help = True
86 kwargs['add_help'] = False
87 else:
88 add_custom_help = False
89 super(ArgumentParser, self).__init__(*args, **kwargs)
90
91 if add_custom_help:
92 add_parser_help(self)
93
94 if self.description:
95 self.description += "\n\nOptions:\n"
96
97 def _get_action_from_name(self, name):
98 """Given a name, get the Action instance registered with this parser.
99 If only it were made available in the ArgumentError object. It is
 100         passed as its first arg...
101 """
102 container = self._actions
103 if name is None:
104 return None
105 for action in container:
106 if '/'.join(action.option_strings) == name:
107 return action
108 elif action.metavar == name:
109 return action
110 elif action.dest == name:
111 return action
112
113 def error(self, message):
114 import re
115 from .find_commands import find_executable
116 exc = sys.exc_info()[1]
117 if exc:
118 # this is incredibly lame, but argparse stupidly does not expose
119 # reasonable hooks for customizing error handling
120 if hasattr(exc, 'argument_name'):
121 argument = self._get_action_from_name(exc.argument_name)
122 else:
123 argument = None
124 if argument and argument.dest == "cmd":
125 m = re.match(r"invalid choice: u?'(\w*?)'", exc.message)
126 if m:
127 cmd = m.group(1)
128 if not cmd:
129 self.print_help()
130 sys.exit(0)
131 else:
132 executable = find_executable('conda-' + cmd)
133 if not executable:
134 from ..exceptions import CommandNotFoundError
135 raise CommandNotFoundError(cmd)
136 args = [find_executable('conda-' + cmd)]
137 args.extend(sys.argv[2:])
138 p = Popen(args)
139 try:
140 p.communicate()
141 except KeyboardInterrupt:
142 p.wait()
143 finally:
144 sys.exit(p.returncode)
145
146 super(ArgumentParser, self).error(message)
147
148 def print_help(self):
149 super(ArgumentParser, self).print_help()
150
151 if sys.argv[1:] in ([], [''], ['help'], ['-h'], ['--help']):
152 from .find_commands import find_commands
153 other_commands = find_commands()
154 if other_commands:
155 builder = ['']
156 builder.append("conda commands available from other packages:")
157 builder.extend(' %s' % cmd for cmd in sorted(other_commands))
158 print('\n'.join(builder))
159
160
161 class NullCountAction(_CountAction):
162
163 @staticmethod
164 def _ensure_value(namespace, name, value):
165 if getattr(namespace, name, NULL) in (NULL, None):
166 setattr(namespace, name, value)
167 return getattr(namespace, name)
168
169 def __call__(self, parser, namespace, values, option_string=None):
170 new_count = self._ensure_value(namespace, self.dest, 0) + 1
171 setattr(namespace, self.dest, new_count)
172
173
174 # #############################################################################################
175 #
176 # sub-parsers
177 #
178 # #############################################################################################
179
180 def configure_parser_clean(sub_parsers):
181 descr = dedent("""
182 Remove unused packages and caches.
183 """)
184 example = dedent("""
185 Examples:
186
187 conda clean --tarballs
188 """)
189 p = sub_parsers.add_parser(
190 'clean',
191 description=descr,
192 help=descr,
193 epilog=example,
194 )
195 add_parser_yes(p)
196 add_parser_json(p)
197 add_parser_quiet(p)
198 p.add_argument(
199 "-a", "--all",
200 action="store_true",
201 help="Remove index cache, lock files, tarballs, "
202 "unused cache packages, and source cache.",
203 )
204 p.add_argument(
205 "-i", "--index-cache",
206 action="store_true",
207 help="Remove index cache.",
208 )
209 p.add_argument(
210 "-l", "--lock",
211 action="store_true",
212 help="Remove all conda lock files.",
213 )
214 p.add_argument(
215 "-t", "--tarballs",
216 action="store_true",
217 help="Remove cached package tarballs.",
218 )
219 p.add_argument(
220 '-p', '--packages',
221 action='store_true',
222 help="""Remove unused cached packages. Warning: this does not check
223 for symlinked packages.""",
224 )
225 p.add_argument(
226 '-s', '--source-cache',
227 action='store_true',
228 help="""Remove files from the source cache of conda build.""",
229 )
230 p.set_defaults(func='.main_clean.execute')
231
232
233 def configure_parser_info(sub_parsers):
234 help = "Display information about current conda install."
235
236 example = dedent("""
237
238 Examples:
239
240 conda info -a
241 """)
242 p = sub_parsers.add_parser(
243 'info',
244 description=help,
245 help=help,
246 epilog=example,
247 )
248 add_parser_json(p)
249 add_parser_offline(p)
250 p.add_argument(
251 '-a', "--all",
252 action="store_true",
 253         help="Show all information (environments, license, and system "
 254              "information).")
255 p.add_argument(
256 '-e', "--envs",
257 action="store_true",
258 help="List all known conda environments.",
259 )
260 p.add_argument(
261 '-l', "--license",
262 action="store_true",
263 help="Display information about the local conda licenses list.",
264 )
265 p.add_argument(
266 '-s', "--system",
267 action="store_true",
268 help="List environment variables.",
269 )
270 p.add_argument(
271 'packages',
272 action="store",
273 nargs='*',
274 help="Display information about packages.",
275 )
276 p.add_argument(
277 '--base',
278 action='store_true',
279 help='Display base environment path.',
280 )
281 p.add_argument(
282 '--root',
283 action='store_true',
284 help=SUPPRESS,
285 dest='base',
286 )
287 p.add_argument(
288 '--unsafe-channels',
289 action='store_true',
290 help='Display list of channels with tokens exposed.',
291 )
292 p.set_defaults(func='.main_info.execute')
293
294
295 def configure_parser_config(sub_parsers):
296 descr = dedent("""
297 Modify configuration values in .condarc. This is modeled after the git
298 config command. Writes to the user .condarc file (%s) by default.
299
300 """) % user_rc_path
301
302 # Note, the extra whitespace in the list keys is on purpose. It's so the
303 # formatting from help2man is still valid YAML (otherwise it line wraps the
304 # keys like "- conda - defaults"). Technically the parser here still won't
305 # recognize it because it removes the indentation, but at least it will be
306 # valid.
307 additional_descr = """
308 See `conda config --describe` or %s/docs/config.html
309 for details on all the options that can go in .condarc.
310
311 Examples:
312
313 Display all configuration values as calculated and compiled:
314
315 conda config --show
316
317 Display all identified configuration sources:
318
319 conda config --show-sources
320
321 Describe all available configuration options:
322
323 conda config --describe
324
325 Add the conda-canary channel:
326
327 conda config --add channels conda-canary
328
329 Set the output verbosity to level 3 (highest):
330
331 conda config --set verbosity 3
332
333 Get the channels defined in the system .condarc:
334
335 conda config --get channels --system
336
337 Add the 'foo' Binstar channel:
338
339 conda config --add channels foo
340
341 Disable the 'show_channel_urls' option:
342
343 conda config --set show_channel_urls no
344 """ % CONDA_HOMEPAGE_URL
345
346 p = sub_parsers.add_parser(
347 'config',
348 description=descr,
349 help=descr,
350 epilog=additional_descr,
351 )
352 add_parser_json(p)
353
354 # TODO: use argparse.FileType
355 location = p.add_mutually_exclusive_group()
356 location.add_argument(
357 "--system",
358 action="store_true",
359 help="""Write to the system .condarc file ({system}). Otherwise writes to the user
360 config file ({user}).""".format(system=sys_rc_path,
361 user=user_rc_path),
362 )
363 location.add_argument(
364 "--env",
365 action="store_true",
366 help="Write to the active conda environment .condarc file (%s). "
367 "If no environment is active, write to the user config file (%s)."
368 "" % (os.getenv('CONDA_PREFIX', "<no active environment>"), user_rc_path),
369 )
370 location.add_argument(
371 "--file",
372 action="store",
373 help="""Write to the given file. Otherwise writes to the user config file ({user})
374 or the file path given by the 'CONDARC' environment variable, if it is set
375 (default: %(default)s).""".format(user=user_rc_path),
376 default=os.environ.get('CONDARC', user_rc_path)
377 )
378
379 # XXX: Does this really have to be mutually exclusive. I think the below
380 # code will work even if it is a regular group (although combination of
381 # --add and --remove with the same keys will not be well-defined).
382 action = p.add_mutually_exclusive_group(required=True)
383 action.add_argument(
384 "--show",
385 nargs='*',
386 default=None,
387 help="Display configuration values as calculated and compiled. "
388 "If no arguments given, show information for all configuration values.",
389 )
390 action.add_argument(
391 "--show-sources",
392 action="store_true",
393 help="Display all identified configuration sources.",
394 )
395 action.add_argument(
396 "--validate",
397 action="store_true",
398 help="Validate all configuration sources.",
399 )
400 action.add_argument(
401 "--describe",
402 nargs='*',
403 default=None,
404 help="Describe given configuration parameters. If no arguments given, show "
405 "information for all configuration parameters.",
406 )
407 action.add_argument(
408 "--write-default",
409 action="store_true",
410 help="Write the default configuration to a file. "
411 "Equivalent to `conda config --describe > ~/.condarc` "
412 "when no --env, --system, or --file flags are given.",
413 )
414 action.add_argument(
415 "--get",
416 nargs='*',
417 action="store",
418 help="Get a configuration value.",
419 default=None,
420 metavar='KEY',
421 )
422 action.add_argument(
423 "--append",
424 nargs=2,
425 action="append",
426 help="""Add one configuration value to the end of a list key.""",
427 default=[],
428 metavar=('KEY', 'VALUE'),
429 )
430 action.add_argument(
431 "--prepend", "--add",
432 nargs=2,
433 action="append",
434 help="""Add one configuration value to the beginning of a list key.""",
435 default=[],
436 metavar=('KEY', 'VALUE'),
437 )
438 action.add_argument(
439 "--set",
440 nargs=2,
441 action="append",
442 help="""Set a boolean or string key""",
443 default=[],
444 metavar=('KEY', 'VALUE'),
445 )
446 action.add_argument(
447 "--remove",
448 nargs=2,
449 action="append",
450 help="""Remove a configuration value from a list key. This removes
451 all instances of the value.""",
452 default=[],
453 metavar=('KEY', 'VALUE'),
454 )
455 action.add_argument(
456 "--remove-key",
457 nargs=1,
458 action="append",
459 help="""Remove a configuration key (and all its values).""",
460 default=[],
461 metavar="KEY",
462 )
463 action.add_argument(
464 "--stdin",
465 action="store_true",
466 help="Apply configuration information given in yaml format piped through stdin.",
467 )
468
469 p.add_argument(
470 "-f", "--force",
471 action="store_true",
472 default=NULL,
473 help=SUPPRESS, # TODO: No longer used. Remove in a future release.
474 )
475
476 p.set_defaults(func='.main_config.execute')
477
478
479 def configure_parser_create(sub_parsers):
480 help = "Create a new conda environment from a list of specified packages. "
481 descr = (help +
482 "To use the created environment, use 'source activate "
483 "envname' look in that directory first. This command requires either "
484 "the -n NAME or -p PREFIX option.")
485
486 example = dedent("""
487 Examples:
488
489 conda create -n myenv sqlite
490
491 """)
492 p = sub_parsers.add_parser(
493 'create',
494 description=descr,
495 help=help,
496 epilog=example,
497 )
498 if on_win:
499 p.add_argument(
500 "--shortcuts",
501 action="store_true",
502 help="Install start menu shortcuts",
503 dest="shortcuts",
504 default=NULL,
505 )
506 p.add_argument(
507 "--no-shortcuts",
508 action="store_false",
509 help="Don't install start menu shortcuts",
510 dest="shortcuts",
511 default=NULL,
512 )
513 add_parser_create_install_update(p)
514 add_parser_json(p)
515 p.add_argument(
516 "--clone",
517 action="store",
518 help='Path to (or name of) existing local environment.',
519 metavar='ENV',
520 )
521 p.add_argument(
522 "--no-default-packages",
523 action="store_true",
524 help='Ignore create_default_packages in the .condarc file.',
525 )
526 p.set_defaults(func='.main_create.execute')
527
528
529 def configure_parser_help(sub_parsers):
530 descr = "Displays a list of available conda commands and their help strings."
531
532 example = dedent("""
533 Examples:
534
535 conda help install
536 """)
537
538 p = sub_parsers.add_parser(
539 'help',
540 description=descr,
541 help=descr,
542 epilog=example,
543 )
544 p.add_argument(
545 'command',
546 metavar='COMMAND',
547 action="store",
548 nargs='?',
549 help="Print help information for COMMAND (same as: conda COMMAND --help).",
550 )
551 p.set_defaults(func='.main_help.execute')
552
553
554 def configure_parser_install(sub_parsers):
555 help = "Installs a list of packages into a specified conda environment."
556 descr = dedent(help + """
557
 558     This command accepts a list of package specifications (e.g., bitarray=0.8)
559 and installs a set of packages consistent with those specifications and
560 compatible with the underlying environment. If full compatibility cannot
561 be assured, an error is reported and the environment is not changed.
562
563 Conda attempts to install the newest versions of the requested packages. To
564 accomplish this, it may update some packages that are already installed, or
565 install additional packages. To prevent existing packages from updating,
566 use the --no-update-deps option. This may force conda to install older
567 versions of the requested packages, and it does not prevent additional
568 dependency packages from being installed.
569
570 If you wish to skip dependency checking altogether, use the '--force'
571 option. This may result in an environment with incompatible packages, so
572 this option must be used with great caution.
573
574 conda can also be called with a list of explicit conda package filenames
575 (e.g. ./lxml-3.2.0-py27_0.tar.bz2). Using conda in this mode implies the
576 --force option, and should likewise be used with great caution. Explicit
577 filenames and package specifications cannot be mixed in a single command.
578 """)
579 example = dedent("""
580 Examples:
581
582 conda install -n myenv scipy
583
584 """)
585 p = sub_parsers.add_parser(
586 'install',
587 description=descr,
588 help=help,
589 epilog=example,
590 )
591 p.add_argument(
592 "--revision",
593 action="store",
594 help="Revert to the specified REVISION.",
595 metavar='REVISION',
596 )
597 if on_win:
598 p.add_argument(
599 "--shortcuts",
600 action="store_true",
601 help="Install start menu shortcuts",
602 dest="shortcuts",
603 default=True
604 )
605 p.add_argument(
606 "--no-shortcuts",
607 action="store_false",
608 help="Don't install start menu shortcuts",
609 dest="shortcuts",
610 default=True
611 )
612 add_parser_create_install_update(p)
613 add_parser_json(p)
614 p.set_defaults(func='.main_install.execute')
615
616
617 def configure_parser_list(sub_parsers):
618 descr = "List linked packages in a conda environment."
619
620 # Note, the formatting of this is designed to work well with help2man
621 examples = dedent("""
622 Examples:
623
624 List all packages in the current environment:
625
626 conda list
627
628 List all packages installed into the environment 'myenv':
629
630 conda list -n myenv
631
632 Save packages for future use:
633
634 conda list --export > package-list.txt
635
636 Reinstall packages from an export file:
637
638 conda create -n myenv --file package-list.txt
639
640 """)
641 p = sub_parsers.add_parser(
642 'list',
643 description=descr,
644 help=descr,
645 formatter_class=RawDescriptionHelpFormatter,
646 epilog=examples,
647 add_help=False,
648 )
649 add_parser_help(p)
650 add_parser_prefix(p)
651 add_parser_json(p)
652 add_parser_show_channel_urls(p)
653 p.add_argument(
654 '-c', "--canonical",
655 action="store_true",
656 help="Output canonical names of packages only. Implies --no-pip. ",
657 )
658 p.add_argument(
659 '-f', "--full-name",
660 action="store_true",
661 help="Only search for full names, i.e., ^<regex>$.",
662 )
663 p.add_argument(
664 "--explicit",
665 action="store_true",
666 help="List explicitly all installed conda packaged with URL "
667 "(output may be used by conda create --file).",
668 )
669 p.add_argument(
670 "--md5",
671 action="store_true",
672 help="Add MD5 hashsum when using --explicit",
673 )
674 p.add_argument(
675 '-e', "--export",
676 action="store_true",
677 help="Output requirement string only (output may be used by "
678 " conda create --file).",
679 )
680 p.add_argument(
681 '-r', "--revisions",
682 action="store_true",
683 help="List the revision history and exit.",
684 )
685 p.add_argument(
686 "--no-pip",
687 action="store_false",
688 default=True,
689 dest="pip",
690 help="Do not include pip-only installed packages.")
691 p.add_argument(
692 'regex',
693 action="store",
694 nargs="?",
695 help="List only packages matching this regular expression.",
696 )
697 p.set_defaults(func='.main_list.execute')
698
699
700 def configure_parser_package(sub_parsers):
701 descr = "Low-level conda package utility. (EXPERIMENTAL)"
702 p = sub_parsers.add_parser(
703 'package',
704 description=descr,
705 help=descr,
706 )
707 add_parser_prefix(p)
708 p.add_argument(
709 '-w', "--which",
710 metavar="PATH",
711 nargs='+',
712 action="store",
713 help="Given some PATH print which conda package the file came from.",
714 )
715 p.add_argument(
716 '-r', "--reset",
717 action="store_true",
718 help="Remove all untracked files and exit.",
719 )
720 p.add_argument(
721 '-u', "--untracked",
722 action="store_true",
723 help="Display all untracked files and exit.",
724 )
725 p.add_argument(
726 "--pkg-name",
727 action="store",
728 default="unknown",
729 help="Package name of the created package.",
730 )
731 p.add_argument(
732 "--pkg-version",
733 action="store",
734 default="0.0",
735 help="Package version of the created package.",
736 )
737 p.add_argument(
738 "--pkg-build",
739 action="store",
740 default=0,
741 help="Package build number of the created package.",
742 )
743 p.set_defaults(func='.main_package.execute')
744
745
746 def configure_parser_remove(sub_parsers, name='remove'):
747 help = "%s a list of packages from a specified conda environment."
748 descr = dedent(help + """
749
750 This command will also remove any package that depends on any of the
 751     specified packages---unless a replacement can be found without
752 that dependency. If you wish to skip this dependency checking and remove
753 just the requested packages, add the '--force' option. Note however that
754 this may result in a broken environment, so use this with caution.
755 """)
756 example = dedent("""
757 Examples:
758
759 conda %s -n myenv scipy
760
761 """)
762
763 uninstall_help = "Alias for conda remove. See conda remove --help."
764 if name == 'remove':
765 p = sub_parsers.add_parser(
766 name,
767 formatter_class=RawDescriptionHelpFormatter,
768 description=descr % name.capitalize(),
769 help=help % name.capitalize(),
770 epilog=example % name,
771 add_help=False,
772 )
773 else:
774 p = sub_parsers.add_parser(
775 name,
776 formatter_class=RawDescriptionHelpFormatter,
777 description=uninstall_help,
778 help=uninstall_help,
779 epilog=example % name,
780 add_help=False,
781 )
782 add_parser_help(p)
783 add_parser_yes(p)
784 add_parser_json(p)
785 p.add_argument(
786 "--all",
787 action="store_true",
788 help="%s all packages, i.e., the entire environment." % name.capitalize(),
789 )
790
791 p.add_argument(
792 "--force",
793 action="store_true",
794 help="Forces removal of a package without removing packages that depend on it. "
795 "Using this option will usually leave your environment in a broken and "
796 "inconsistent state.",
797 )
798 add_parser_no_pin(p)
799 add_parser_channels(p)
800 add_parser_prefix(p)
801 add_parser_quiet(p)
802 # Putting this one first makes it the default
803 add_parser_use_index_cache(p)
804 add_parser_use_local(p)
805 add_parser_offline(p)
806 add_parser_pscheck(p)
807 add_parser_insecure(p)
808 p.add_argument(
809 'package_names',
810 metavar='package_name',
811 action="store",
812 nargs='*',
813 help="Package names to %s from the environment." % name,
814 )
815 p.add_argument(
816 "--features",
817 action="store_true",
818 help="%s features (instead of packages)." % name.capitalize(),
819 )
820 p.set_defaults(func='.main_remove.execute')
821
822
823 def configure_parser_search(sub_parsers):
824 descr = dedent("""Search for packages and display associated information.
825 The input is a MatchSpec, a query language for conda packages.
826 See examples below.
827 """)
828
829 example = dedent("""
830 Examples:
831
832 Search for a specific package named 'scikit-learn':
833
834 conda search scikit-learn
835
836 Search for packages containing 'scikit' in the package name:
837
838 conda search *scikit*
839
840 Note that your shell may expand '*' before handing the command over to conda.
841 Therefore it is sometimes necessary to use single or double quotes around the query.
842
843 conda search '*scikit'
844 conda search "*scikit*"
845
846 Search for packages for 64-bit Linux (by default, packages for your current
847 platform are shown):
848
849 conda search numpy[subdir=linux-64]
850
851 Search for a specific version of a package:
852
853 conda search 'numpy>=1.12'
854
855 Search for a package on a specific channel
856
857 conda search conda-forge::numpy
858 conda search 'numpy[channel=conda-forge, subdir=osx-64]'
859 """)
860 p = sub_parsers.add_parser(
861 'search',
862 description=descr,
863 help=descr,
864 epilog=example,
865 )
866 add_parser_prefix(p)
867 p.add_argument(
868 "--canonical",
869 action="store_true",
870 help=SUPPRESS,
871 )
872 p.add_argument(
873 '-f', "--full-name",
874 action="store_true",
875 help=SUPPRESS,
876 )
877 p.add_argument(
878 '-i', "--info",
879 action="store_true",
880 help="Provide detailed information about each package. "
881 "Similar to output of 'conda info package-name'."
882 )
883 p.add_argument(
884 "--names-only",
885 action="store_true",
886 help=SUPPRESS,
887 )
888 add_parser_known(p)
889 add_parser_use_index_cache(p)
890 p.add_argument(
891 '-o', "--outdated",
892 action="store_true",
893 help=SUPPRESS,
894 )
895 p.add_argument(
896 '--platform',
897 action='store',
898 dest='platform',
899 help="""Search the given platform. Should be formatted like 'osx-64', 'linux-32',
900 'win-64', and so on. The default is to search the current platform.""",
901 default=None,
902 )
903 p.add_argument(
904 'match_spec',
905 default='*',
906 nargs='?',
907 help=SUPPRESS,
908 )
909 p.add_argument(
910 "--spec",
911 action="store_true",
912 help=SUPPRESS,
913 )
914 p.add_argument(
915 "--reverse-dependency",
916 action="store_true",
917 help="Perform a reverse dependency search. When using this flag, the --full-name "
918 "flag is recommended. Use 'conda info package' to see the dependencies of a "
919 "package.",
920 )
921 add_parser_offline(p)
922 add_parser_channels(p)
923 add_parser_json(p)
924 add_parser_use_local(p)
925 add_parser_insecure(p)
926 p.set_defaults(func='.main_search.execute')
927
928
929 def configure_parser_update(sub_parsers, name='update'):
930 help = "Updates conda packages to the latest compatible version."
931 descr = dedent(help + """
932
933 This command accepts a list of package names and updates them to the latest
934 versions that are compatible with all other packages in the environment.
935
936 Conda attempts to install the newest versions of the requested packages. To
937 accomplish this, it may update some packages that are already installed, or
938 install additional packages. To prevent existing packages from updating,
939 use the --no-update-deps option. This may force conda to install older
940 versions of the requested packages, and it does not prevent additional
941 dependency packages from being installed.
942
943 If you wish to skip dependency checking altogether, use the '--force'
944 option. This may result in an environment with incompatible packages, so
945 this option must be used with great caution.
946 """)
947 example = dedent("""
948 Examples:
949
950 conda %s -n myenv scipy
951
952 """)
953
954 alias_help = "Alias for conda update. See conda update --help."
955 if name == 'update':
956 p = sub_parsers.add_parser(
957 'update',
958 description=descr,
959 help=descr,
960 epilog=example % name,
961 )
962 else:
963 p = sub_parsers.add_parser(
964 name,
965 description=alias_help,
966 help=alias_help,
967 epilog=example % name,
968 )
969 add_parser_create_install_update(p)
970 add_parser_json(p)
971 p.add_argument(
972 "--all",
973 action="store_true",
974 help="Update all installed packages in the environment.",
975 )
976 p.set_defaults(func='.main_update.execute')
977
978
979 # #############################################################################################
980 #
981 # parser helpers
982 #
983 # #############################################################################################
984
985 def add_parser_create_install_update(p):
986 add_parser_yes(p)
987 p.add_argument(
988 '-f', "--force",
989 action="store_true",
990 default=NULL,
991 help="Force install (even when package already installed), "
992 "implies --no-deps.",
993 )
994 add_parser_pscheck(p)
995 # Add the file kwarg. We don't use {action="store", nargs='*'} as we don't
996 # want to gobble up all arguments after --file.
997 p.add_argument(
998 "--file",
999 default=[],
1000 action='append',
1001 help="Read package versions from the given file. Repeated file "
1002 "specifications can be passed (e.g. --file=file1 --file=file2).",
1003 )
1004 add_parser_known(p)
1005 p.add_argument(
1006 "--no-deps",
1007 action="store_true",
1008 help="Do not install, update, remove, or change dependencies. This WILL lead "
1009 "to broken environments and inconsistent behavior. Use at your own risk.",
1010 )
1011 p.add_argument(
1012 "--only-deps",
1013 action="store_true",
1014 help="Only install dependencies.",
1015 )
1016 p.add_argument(
1017 '-m', "--mkdir",
1018 action="store_true",
1019 help="Create the environment directory if necessary.",
1020 )
1021 add_parser_use_index_cache(p)
1022 add_parser_use_local(p)
1023 add_parser_offline(p)
1024 add_parser_no_pin(p)
1025 add_parser_channels(p)
1026 add_parser_prefix(p)
1027 add_parser_quiet(p)
1028 add_parser_copy(p)
1029 add_parser_insecure(p)
1030 p.add_argument(
1031 "--update-dependencies", "--update-deps",
1032 action="store_true",
1033 dest="update_deps",
1034 default=NULL,
1035 help="Update dependencies. Overrides the value given by "
1036 "`conda config --show update_deps`.",
1037 )
1038 p.add_argument(
1039 "--no-update-dependencies", "--no-update-deps",
1040 action="store_false",
1041 dest="update_deps",
1042 default=NULL,
1043 help="Don't update dependencies. Overrides the value given by "
1044 "`conda config --show update_deps`.",
1045 )
1046 p.add_argument(
1047 "--channel-priority", "--channel-pri", "--chan-pri",
1048 action="store_true",
1049 dest="channel_priority",
1050 default=NULL,
1051 help="Channel priority takes precedence over package version. "
1052 "Overrides the value given by `conda config --show channel_priority`."
1053 )
1054 p.add_argument(
1055 "--no-channel-priority", "--no-channel-pri", "--no-chan-pri",
1056 action="store_false",
1057 dest="channel_priority",
1058 default=NULL,
1059 help="Package version takes precedence over channel priority. "
1060 "Overrides the value given by `conda config --show channel_priority`."
1061 )
1062 p.add_argument(
1063 "--clobber",
1064 action="store_true",
1065 default=NULL,
1066 help="Allow clobbering of overlapping file paths within packages, "
1067 "and suppress related warnings.",
1068 )
1069 add_parser_show_channel_urls(p)
1070
1071 p.add_argument(
1072 'packages',
1073 metavar='package_spec',
1074 action="store",
1075 nargs='*',
1076 help="Packages to install or update in the conda environment.",
1077 )
1078 p.add_argument(
1079 "--download-only",
1080 action="store_true",
1081 default=NULL,
1082 help="Solve an environment and ensure package caches are populated, but exit "
1083 "prior to unlinking and linking packages into the prefix.",
1084 )
1085
1086
1087 def add_parser_pscheck(p):
1088 p.add_argument(
1089 "--force-pscheck",
1090 action="store_true",
1091 help=SUPPRESS
1092 )
1093
1094
1095 def add_parser_use_local(p):
1096 p.add_argument(
1097 "--use-local",
1098 action="store_true",
1099 default=NULL,
1100 help="Use locally built packages.",
1101 )
1102
1103
1104 def add_parser_offline(p):
1105 p.add_argument(
1106 "--offline",
1107 action='store_true',
1108 default=NULL,
1109 help="Offline mode, don't connect to the Internet.",
1110 )
1111
1112
1113 def add_parser_no_pin(p):
1114 p.add_argument(
1115 "--no-pin",
1116 action="store_true",
1117 dest='ignore_pinned',
1118 default=NULL,
1119 help="Ignore pinned file.",
1120 )
1121
1122
1123 def add_parser_show_channel_urls(p):
1124 p.add_argument(
1125 "--show-channel-urls",
1126 action="store_true",
1127 dest="show_channel_urls",
1128 default=NULL,
1129 help="Show channel urls. "
1130 "Overrides the value given by `conda config --show show_channel_urls`.",
1131 )
1132 p.add_argument(
1133 "--no-show-channel-urls",
1134 action="store_false",
1135 dest="show_channel_urls",
1136 help="Don't show channel urls. "
1137 "Overrides the value given by `conda config --show show_channel_urls`.",
1138 )
1139
1140
1141 def add_parser_copy(p):
1142 p.add_argument(
1143 '--copy',
1144 action="store_true",
1145 default=NULL,
1146 help="Install all packages using copies instead of hard- or soft-linking."
1147 )
1148
1149
1150 def add_parser_help(p):
1151 """
1152 So we can use consistent capitalization and periods in the help. You must
1153 use the add_help=False argument to ArgumentParser or add_parser to use
1154 this. Add this first to be consistent with the default argparse output.
1155
1156 """
1157 p.add_argument(
1158 '-h', '--help',
1159 action=_HelpAction,
1160 help="Show this help message and exit.",
1161 )
1162
1163
1164 def add_parser_prefix(p):
1165 npgroup = p.add_mutually_exclusive_group()
1166 npgroup.add_argument(
1167 '-n', "--name",
1168 action="store",
1169 help="Name of environment.",
1170 metavar="ENVIRONMENT",
1171 )
1172 npgroup.add_argument(
1173 '-p', "--prefix",
1174 action="store",
1175 help="Full path to environment prefix.",
1176 metavar='PATH',
1177 )
1178
1179
1180 def add_parser_yes(p):
1181 p.add_argument(
1182 "-y", "--yes",
1183 action="store_true",
1184 default=NULL,
1185 help="Do not ask for confirmation.",
1186 )
1187 p.add_argument(
1188 "--dry-run",
1189 action="store_true",
1190 help="Only display what would have been done.",
1191 )
1192
1193
1194 def add_parser_json(p):
1195 p.add_argument(
1196 "--json",
1197 action="store_true",
1198 default=NULL,
1199 help="Report all output as json. Suitable for using conda programmatically."
1200 )
1201 p.add_argument(
1202 "--debug",
1203 action="store_true",
1204 default=NULL,
1205 help="Show debug output.",
1206 )
1207 p.add_argument(
1208 "--verbose", "-v",
1209 action=NullCountAction,
1210 help="Use once for info, twice for debug, three times for trace.",
1211 dest="verbosity",
1212 default=NULL,
1213 )
1214
1215
1216 def add_parser_quiet(p):
1217 p.add_argument(
1218 '-q', "--quiet",
1219 action="store_true",
1220 default=NULL,
1221 help="Do not display progress bar.",
1222 )
1223
1224
1225 def add_parser_channels(p):
1226 p.add_argument(
1227 '-c', '--channel',
1228            dest='channel',  # apparently conda-build uses this; someday rename to channels and remove context.channels alias to channel  # NOQA
1229 # TODO: if you ever change 'channel' to 'channels', make sure you modify the context.channels property accordingly # NOQA
1230 action="append",
1231 help="""Additional channel to search for packages. These are URLs searched in the order
1232 they are given (including file:// for local directories). Then, the defaults
1233 or channels from .condarc are searched (unless --override-channels is given). You can use
1234 'defaults' to get the default packages for conda, and 'system' to get the system
1235 packages, which also takes .condarc into account. You can also use any name and the
1236 .condarc channel_alias value will be prepended. The default channel_alias
1237 is http://conda.anaconda.org/.""",
1238 )
1239 p.add_argument(
1240 "--override-channels",
1241 action="store_true",
1242 help="""Do not search default or .condarc channels. Requires --channel.""",
1243 )
1244
1245
1246 def add_parser_known(p):
1247 p.add_argument(
1248 "--unknown",
1249 action="store_true",
1250 default=False,
1251 dest='unknown',
1252 help=SUPPRESS,
1253 )
1254
1255
1256 def add_parser_use_index_cache(p):
1257 p.add_argument(
1258 "-C", "--use-index-cache",
1259 action="store_true",
1260 default=False,
1261 help="Use cache of channel index files, even if it has expired.",
1262 )
1263
1264
1265 def add_parser_insecure(p):
1266 p.add_argument(
1267 "-k", "--insecure",
1268 action="store_false",
1269 default=NULL,
1270 help="Allow conda to perform \"insecure\" SSL connections and transfers. "
1271 "Equivalent to setting 'ssl_verify' to 'false'."
1272 )
1273
[end of conda/cli/conda_argparse.py]
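The helpers above all follow one pattern: each takes an argparse (sub)parser and attaches a related group of flags to it, so commands such as `install`, `create`, and `update` can share option definitions. A minimal, self-contained sketch of that pattern is below; the `NULL` sentinel and the `demo`/`install` parser names are illustrative stand-ins, not conda's actual wiring.

```python
import argparse

# NULL stands in for conda's internal "value not set" sentinel; in the real
# module it is imported from conda's constants, so treat it as an assumption.
NULL = object()


def add_parser_yes(p):
    # Same shape as the helpers above: take a (sub)parser and mutate it in place.
    p.add_argument("-y", "--yes", action="store_true", default=NULL,
                   help="Do not ask for confirmation.")
    p.add_argument("--dry-run", action="store_true",
                   help="Only display what would have been done.")


def build_parser():
    parser = argparse.ArgumentParser(prog="demo")
    sub = parser.add_subparsers(dest="cmd")
    p_install = sub.add_parser("install", help="Install packages.")
    add_parser_yes(p_install)
    p_install.add_argument("packages", nargs="*", metavar="package_spec")
    return parser


args = build_parser().parse_args(["install", "-y", "numpy"])
print(args.cmd, args.yes, args.packages)   # -> install True ['numpy']
```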
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
conda/conda
|
5b60e3d0acdbd7900c6965f74ba0620c370adda9
|
Conda remove doesn't update environments.txt
conda 4.4.6
I'm using `conda remove -p (envPath) --all -y`. The folder gets deleted but the environments.txt file is not updated.
|
2018-01-05T01:49:39Z
|
<patch>
diff --git a/conda/cli/main_remove.py b/conda/cli/main_remove.py
--- a/conda/cli/main_remove.py
+++ b/conda/cli/main_remove.py
@@ -11,6 +11,7 @@
from .install import handle_txn
from ..base.context import context
from ..common.compat import iteritems, iterkeys
+from ..core.envs_manager import unregister_env
from ..core.linked_data import linked_data
from ..core.solve import Solver
from ..exceptions import CondaEnvironmentError, CondaValueError
@@ -63,6 +64,7 @@ def execute(args, parser):
if not context.json:
confirm_yn()
rm_rf(prefix)
+ unregister_env(prefix)
if context.json:
stdout_json({
</patch>
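The one-line fix calls `unregister_env(prefix)` (imported from `conda.core.envs_manager`) right after the environment directory is removed, so the deleted prefix is also dropped from conda's registry of known environments. As a rough sketch of that idea only — not conda's actual implementation, and the `~/.conda/environments.txt` path is an assumption here — an unregister step could look like:

```python
import os


def unregister_env_sketch(prefix,
                          registry=os.path.expanduser("~/.conda/environments.txt")):
    """Illustrative only: drop `prefix` from a text file listing environment paths."""
    if not os.path.isfile(registry):
        return
    with open(registry) as f:
        lines = [line.rstrip("\n") for line in f]
    remaining = [line for line in lines if line.strip() != prefix]
    if remaining != lines:
        with open(registry, "w") as f:
            f.write("\n".join(remaining) + ("\n" if remaining else ""))
```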
|
[]
|
[]
| ||||
pandas-dev__pandas-6112
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: in HTMLFormatter._write_header(), str() fails on column names in unicode
## IPython snippet reproducing the problem
``` python
import pandas as pd
import numpy as np
df = pd.DataFrame({u'clé1': [u'a', u'a', u'b', u'b', u'a'],
u'clé2': [u'1er', u'2ème', u'1er', u'2ème', u'1er'],
'données1': np.random.randn(5),
'données2': np.random.randn(5)})
df.pivot_table(rows=[u'clé1'], cols=[u'clé2'])
```
## INSTALLED VERSIONS
Python: 2.7.5.final.0
OS: Linux
Release: 2.6.32-358.14.1.el6.x86_64
Processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.13.0
Cython: 0.19.2
Numpy: 1.8.0
Scipy: Not installed
statsmodels: Not installed
patsy: Not installed
scikits.timeseries: Not installed
dateutil: 2.2
pytz: 2013.9
bottleneck: Not installed
PyTables: Not Installed
numexpr: Not Installed
matplotlib: Not installed
openpyxl: Not installed
xlrd: 0.9.2
xlwt: 0.7.5
xlsxwriter: Not installed
sqlalchemy: Not installed
lxml: 3.2.5
bs4: Not installed
html5lib: Not installed
bigquery: Not installed
apiclient: Not installed
## Expected behavior
HTML formatted table.
## Seen instead
Warning message in IPython:
```
WARNING: Exception in text/html formatter: 'ascii' codec can't encode character u'\xe9' in position 2: ordinal not in range(128)
```
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 ## What is it
6 **pandas** is a Python package providing fast, flexible, and expressive data
7 structures designed to make working with "relational" or "labeled" data both
8 easy and intuitive. It aims to be the fundamental high-level building block for
9 doing practical, **real world** data analysis in Python. Additionally, it has
10 the broader goal of becoming **the most powerful and flexible open source data
11 analysis / manipulation tool available in any language**. It is already well on
12 its way toward this goal.
13
14 ## Main Features
15 Here are just a few of the things that pandas does well:
16
17 - Easy handling of [**missing data**][missing-data] (represented as
18 `NaN`) in floating point as well as non-floating point data
19 - Size mutability: columns can be [**inserted and
20 deleted**][insertion-deletion] from DataFrame and higher dimensional
21 objects
22 - Automatic and explicit [**data alignment**][alignment]: objects can
23 be explicitly aligned to a set of labels, or the user can simply
24 ignore the labels and let `Series`, `DataFrame`, etc. automatically
25 align the data for you in computations
26 - Powerful, flexible [**group by**][groupby] functionality to perform
27 split-apply-combine operations on data sets, for both aggregating
28 and transforming data
29 - Make it [**easy to convert**][conversion] ragged,
30 differently-indexed data in other Python and NumPy data structures
31 into DataFrame objects
32 - Intelligent label-based [**slicing**][slicing], [**fancy
33 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
34 large data sets
35 - Intuitive [**merging**][merging] and [**joining**][joining] data
36 sets
37 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
38 data sets
39 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
40 labels per tick)
41 - Robust IO tools for loading data from [**flat files**][flat-files]
42 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
43 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
44 - [**Time series**][timeseries]-specific functionality: date range
45 generation and frequency conversion, moving window statistics,
46 moving window linear regressions, date shifting and lagging, etc.
47
48
49 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
50 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
51 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
52 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
53 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
54 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
55 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
56 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
57 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
58 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
59 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
60 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
61 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
62 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
63 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
64 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
65 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
66 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
67
68 ## Where to get it
69 The source code is currently hosted on GitHub at:
70 http://github.com/pydata/pandas
71
72 Binary installers for the latest released version are available at the Python
73 package index
74
75 http://pypi.python.org/pypi/pandas/
76
77 And via `easy_install`:
78
79 ```sh
80 easy_install pandas
81 ```
82
83 or `pip`:
84
85 ```sh
86 pip install pandas
87 ```
88
89 ## Dependencies
90 - [NumPy](http://www.numpy.org): 1.6.1 or higher
91 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
92 - [pytz](http://pytz.sourceforge.net)
93 - Needed for time zone support with ``pandas.date_range``
94
95 ### Highly Recommended Dependencies
96 - [numexpr](http://code.google.com/p/numexpr/)
97 - Needed to accelerate some expression evaluation operations
98 - Required by PyTables
99 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
100 - Needed to accelerate certain numerical operations
101
102 ### Optional dependencies
103 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
104 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
105 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
106 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
107 - [statsmodels](http://statsmodels.sourceforge.net/)
108 - Needed for parts of `pandas.stats`
109 - For Excel I/O:
110 - [xlrd/xlwt](http://www.python-excel.org/)
111 - Excel reading (xlrd) and writing (xlwt)
112 - [openpyxl](http://packages.python.org/openpyxl/)
113 - openpyxl version 1.6.1 or higher, for writing .xlsx files
114 - xlrd >= 0.9.0
115 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
116 - Alternative Excel writer.
117 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
118 - Needed for `pandas.io.gbq`
119 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
120 - One of the following combinations of libraries is needed to use the
121 top-level [`pandas.read_html`][read-html-docs] function:
122 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
123 recent version of [html5lib][html5lib] is okay.)
124 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
125 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
126 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
127 for reasons as to why you should probably **not** take this approach.
128
129 #### Notes about HTML parsing libraries
130 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
131 either [lxml][lxml] or [html5lib][html5lib] or both.
132 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
133 installed.
134 - You are strongly encouraged to read [HTML reading
135 gotchas][html-gotchas]. It explains issues surrounding the
136 installation and usage of the above three libraries.
137 - You may need to install an older version of
138 [BeautifulSoup4][BeautifulSoup4]:
139 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
140 32-bit Ubuntu/Debian
141    - Additionally, if you're using [Anaconda][Anaconda] you should
142      definitely read [the gotchas about HTML parsing
143      libraries][html-gotchas]
144 - If you're on a system with `apt-get` you can do
145
146 ```sh
147 sudo apt-get build-dep python-lxml
148 ```
149
150 to get the necessary dependencies for installation of [lxml][lxml].
151 This will prevent further headaches down the line.
152
153 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
154 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
155 [lxml]: http://lxml.de
156 [Anaconda]: https://store.continuum.io/cshop/anaconda
157 [NumPy]: http://numpy.scipy.org/
158 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
159 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
160
161 ## Installation from sources
162 To install pandas from source you need Cython in addition to the normal
163 dependencies above. Cython can be installed from pypi:
164
165 ```sh
166 pip install cython
167 ```
168
169 In the `pandas` directory (same one where you found this file after
170 cloning the git repo), execute:
171
172 ```sh
173 python setup.py install
174 ```
175
176 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
177
178 ```sh
179 python setup.py develop
180 ```
181
182 Alternatively, you can use `pip` if you want all the dependencies pulled
183 in automatically (the `-e` option is for installing it in [development
184 mode](http://www.pip-installer.org/en/latest/usage.html)):
185
186 ```sh
187 pip install -e .
188 ```
189
190 On Windows, you will need to install MinGW and execute:
191
192 ```sh
193 python setup.py build --compiler=mingw32
194 python setup.py install
195 ```
196
197 See http://pandas.pydata.org/ for more information.
198
199 ## License
200 BSD
201
202 ## Documentation
203 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
204
205 The Sphinx documentation should provide a good starting point for learning how
206 to use the library. Expect the docs to continue to expand as time goes on.
207
208 ## Background
209 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
210 has been under active development since then.
211
212 ## Discussion and Development
213 Since pandas development is related to a number of other scientific
214 Python projects, questions are welcome on the scipy-user mailing
215 list. Specialized discussions or design issues should take place on
216 the pystatsmodels mailing list / Google group, where
217 ``scikits.statsmodels`` and other libraries will also be discussed:
218
219 http://groups.google.com/group/pystatsmodels
220
[end of README.md]
[start of pandas/io/html.py]
1 """:mod:`pandas.io.html` is a module containing functionality for dealing with
2 HTML IO.
3
4 """
5
6 import os
7 import re
8 import numbers
9 import collections
10 import warnings
11
12 from distutils.version import LooseVersion
13
14 import numpy as np
15
16 from pandas.io.common import _is_url, urlopen, parse_url
17 from pandas.io.parsers import TextParser
18 from pandas.compat import (lrange, lmap, u, string_types, iteritems, text_type,
19 raise_with_traceback)
20 from pandas.core import common as com
21 from pandas import Series
22
23
24 try:
25 import bs4
26 except ImportError:
27 _HAS_BS4 = False
28 else:
29 _HAS_BS4 = True
30
31
32 try:
33 import lxml
34 except ImportError:
35 _HAS_LXML = False
36 else:
37 _HAS_LXML = True
38
39
40 try:
41 import html5lib
42 except ImportError:
43 _HAS_HTML5LIB = False
44 else:
45 _HAS_HTML5LIB = True
46
47
48 #############
49 # READ HTML #
50 #############
51 _RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
52
53
54 def _remove_whitespace(s, regex=_RE_WHITESPACE):
55 """Replace extra whitespace inside of a string with a single space.
56
57 Parameters
58 ----------
59 s : str or unicode
60 The string from which to remove extra whitespace.
61
62 regex : regex
63 The regular expression to use to remove extra whitespace.
64
65 Returns
66 -------
67 subd : str or unicode
68 `s` with all extra whitespace replaced with a single space.
69 """
70 return regex.sub(' ', s.strip())
71
72
73 def _get_skiprows(skiprows):
74 """Get an iterator given an integer, slice or container.
75
76 Parameters
77 ----------
78 skiprows : int, slice, container
79 The iterator to use to skip rows; can also be a slice.
80
81 Raises
82 ------
83 TypeError
84 * If `skiprows` is not a slice, integer, or Container
85
86 Returns
87 -------
88 it : iterable
89 A proper iterator to use to skip rows of a DataFrame.
90 """
91 if isinstance(skiprows, slice):
92 return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
93 elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
94 return skiprows
95 elif skiprows is None:
96 return 0
97 raise TypeError('%r is not a valid type for skipping rows' %
98 type(skiprows).__name__)
99
100
101 def _read(io):
102 """Try to read from a url, file or string.
103
104 Parameters
105 ----------
106 io : str, unicode, or file-like
107
108 Returns
109 -------
110 raw_text : str
111 """
112 if _is_url(io):
113 with urlopen(io) as url:
114 raw_text = url.read()
115 elif hasattr(io, 'read'):
116 raw_text = io.read()
117 elif os.path.isfile(io):
118 with open(io) as f:
119 raw_text = f.read()
120 elif isinstance(io, string_types):
121 raw_text = io
122 else:
123 raise TypeError("Cannot read object of type %r" % type(io).__name__)
124 return raw_text
125
126
127 class _HtmlFrameParser(object):
128 """Base class for parsers that parse HTML into DataFrames.
129
130 Parameters
131 ----------
132 io : str or file-like
133 This can be either a string of raw HTML, a valid URL using the HTTP,
134 FTP, or FILE protocols or a file-like object.
135
136 match : str or regex
137 The text to match in the document.
138
139 attrs : dict
140 List of HTML <table> element attributes to match.
141
142 Attributes
143 ----------
144 io : str or file-like
145 raw HTML, URL, or file-like object
146
147 match : regex
148 The text to match in the raw HTML
149
150 attrs : dict-like
151 A dictionary of valid table attributes to use to search for table
152 elements.
153
154 Notes
155 -----
156 To subclass this class effectively you must override the following methods:
157 * :func:`_build_doc`
158 * :func:`_text_getter`
159 * :func:`_parse_td`
160 * :func:`_parse_tables`
161 * :func:`_parse_tr`
162 * :func:`_parse_thead`
163 * :func:`_parse_tbody`
164 * :func:`_parse_tfoot`
165 See each method's respective documentation for details on their
166 functionality.
167 """
168 def __init__(self, io, match, attrs):
169 self.io = io
170 self.match = match
171 self.attrs = attrs
172
173 def parse_tables(self):
174 tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
175 return (self._build_table(table) for table in tables)
176
177 def _parse_raw_data(self, rows):
178 """Parse the raw data into a list of lists.
179
180 Parameters
181 ----------
182 rows : iterable of node-like
183 A list of row elements.
184
185 text_getter : callable
186 A callable that gets the text from an individual node. This must be
187 defined by subclasses.
188
189 column_finder : callable
190 A callable that takes a row node as input and returns a list of the
191 column node in that row. This must be defined by subclasses.
192
193 Returns
194 -------
195 data : list of list of strings
196 """
197 data = [[_remove_whitespace(self._text_getter(col)) for col in
198 self._parse_td(row)] for row in rows]
199 return data
200
201 def _text_getter(self, obj):
202 """Return the text of an individual DOM node.
203
204 Parameters
205 ----------
206 obj : node-like
207 A DOM node.
208
209 Returns
210 -------
211 text : str or unicode
212 The text from an individual DOM node.
213 """
214 raise NotImplementedError
215
216 def _parse_td(self, obj):
217 """Return the td elements from a row element.
218
219 Parameters
220 ----------
221 obj : node-like
222
223 Returns
224 -------
225 columns : list of node-like
226 These are the elements of each row, i.e., the columns.
227 """
228 raise NotImplementedError
229
230 def _parse_tables(self, doc, match, attrs):
231 """Return all tables from the parsed DOM.
232
233 Parameters
234 ----------
235 doc : tree-like
236 The DOM from which to parse the table element.
237
238 match : str or regular expression
239 The text to search for in the DOM tree.
240
241 attrs : dict
242 A dictionary of table attributes that can be used to disambiguate
243            multiple tables on a page.
244
245 Raises
246 ------
247 ValueError
248 * If `match` does not match any text in the document.
249
250 Returns
251 -------
252 tables : list of node-like
253 A list of <table> elements to be parsed into raw data.
254 """
255 raise NotImplementedError
256
257 def _parse_tr(self, table):
258 """Return the list of row elements from the parsed table element.
259
260 Parameters
261 ----------
262 table : node-like
263 A table element that contains row elements.
264
265 Returns
266 -------
267 rows : list of node-like
268            A list of row elements of a table, usually <tr> or <th> elements.
269 """
270 raise NotImplementedError
271
272 def _parse_thead(self, table):
273 """Return the header of a table.
274
275 Parameters
276 ----------
277 table : node-like
278 A table element that contains row elements.
279
280 Returns
281 -------
282 thead : node-like
283 A <thead>...</thead> element.
284 """
285 raise NotImplementedError
286
287 def _parse_tbody(self, table):
288 """Return the body of the table.
289
290 Parameters
291 ----------
292 table : node-like
293 A table element that contains row elements.
294
295 Returns
296 -------
297 tbody : node-like
298 A <tbody>...</tbody> element.
299 """
300 raise NotImplementedError
301
302 def _parse_tfoot(self, table):
303 """Return the footer of the table if any.
304
305 Parameters
306 ----------
307 table : node-like
308 A table element that contains row elements.
309
310 Returns
311 -------
312 tfoot : node-like
313 A <tfoot>...</tfoot> element.
314 """
315 raise NotImplementedError
316
317 def _build_doc(self):
318 """Return a tree-like object that can be used to iterate over the DOM.
319
320 Returns
321 -------
322 obj : tree-like
323 """
324 raise NotImplementedError
325
326 def _build_table(self, table):
327 header = self._parse_raw_thead(table)
328 body = self._parse_raw_tbody(table)
329 footer = self._parse_raw_tfoot(table)
330 return header, body, footer
331
332 def _parse_raw_thead(self, table):
333 thead = self._parse_thead(table)
334 res = []
335 if thead:
336 res = lmap(self._text_getter, self._parse_th(thead[0]))
337 return np.array(res).squeeze() if res and len(res) == 1 else res
338
339 def _parse_raw_tfoot(self, table):
340 tfoot = self._parse_tfoot(table)
341 res = []
342 if tfoot:
343 res = lmap(self._text_getter, self._parse_td(tfoot[0]))
344 return np.array(res).squeeze() if res and len(res) == 1 else res
345
346 def _parse_raw_tbody(self, table):
347 tbody = self._parse_tbody(table)
348
349 try:
350 res = self._parse_tr(tbody[0])
351 except IndexError:
352 res = self._parse_tr(table)
353 return self._parse_raw_data(res)
354
355
356 class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
357 """HTML to DataFrame parser that uses BeautifulSoup under the hood.
358
359 See Also
360 --------
361 pandas.io.html._HtmlFrameParser
362 pandas.io.html._LxmlFrameParser
363
364 Notes
365 -----
366 Documentation strings for this class are in the base class
367 :class:`pandas.io.html._HtmlFrameParser`.
368 """
369 def __init__(self, *args, **kwargs):
370 super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args,
371 **kwargs)
372 from bs4 import SoupStrainer
373 self._strainer = SoupStrainer('table')
374
375 def _text_getter(self, obj):
376 return obj.text
377
378 def _parse_td(self, row):
379 return row.find_all(('td', 'th'))
380
381 def _parse_tr(self, element):
382 return element.find_all('tr')
383
384 def _parse_th(self, element):
385 return element.find_all('th')
386
387 def _parse_thead(self, table):
388 return table.find_all('thead')
389
390 def _parse_tbody(self, table):
391 return table.find_all('tbody')
392
393 def _parse_tfoot(self, table):
394 return table.find_all('tfoot')
395
396 def _parse_tables(self, doc, match, attrs):
397 element_name = self._strainer.name
398 tables = doc.find_all(element_name, attrs=attrs)
399
400 if not tables:
401 raise ValueError('No tables found')
402
403 result = []
404 unique_tables = set()
405
406 for table in tables:
407 if (table not in unique_tables and
408 table.find(text=match) is not None):
409 result.append(table)
410 unique_tables.add(table)
411
412 if not result:
413 raise ValueError("No tables found matching pattern %r" %
414 match.pattern)
415 return result
416
417 def _setup_build_doc(self):
418 raw_text = _read(self.io)
419 if not raw_text:
420 raise ValueError('No text parsed from document: %s' % self.io)
421 return raw_text
422
423 def _build_doc(self):
424 from bs4 import BeautifulSoup
425 return BeautifulSoup(self._setup_build_doc(), features='html5lib')
426
427
428 def _build_xpath_expr(attrs):
429 """Build an xpath expression to simulate bs4's ability to pass in kwargs to
430 search for attributes when using the lxml parser.
431
432 Parameters
433 ----------
434 attrs : dict
435 A dict of HTML attributes. These are NOT checked for validity.
436
437 Returns
438 -------
439 expr : unicode
440 An XPath expression that checks for the given HTML attributes.
441 """
442 # give class attribute as class_ because class is a python keyword
443 if 'class_' in attrs:
444 attrs['class'] = attrs.pop('class_')
445
446 s = [u("@%s=%r") % (k, v) for k, v in iteritems(attrs)]
447 return u('[%s]') % ' and '.join(s)
448
449
450 _re_namespace = {'re': 'http://exslt.org/regular-expressions'}
451 _valid_schemes = 'http', 'file', 'ftp'
452
453
454 class _LxmlFrameParser(_HtmlFrameParser):
455 """HTML to DataFrame parser that uses lxml under the hood.
456
457 Warning
458 -------
459 This parser can only handle HTTP, FTP, and FILE urls.
460
461 See Also
462 --------
463 _HtmlFrameParser
464 _BeautifulSoupLxmlFrameParser
465
466 Notes
467 -----
468 Documentation strings for this class are in the base class
469 :class:`_HtmlFrameParser`.
470 """
471 def __init__(self, *args, **kwargs):
472 super(_LxmlFrameParser, self).__init__(*args, **kwargs)
473
474 def _text_getter(self, obj):
475 return obj.text_content()
476
477 def _parse_td(self, row):
478 return row.xpath('.//td|.//th')
479
480 def _parse_tr(self, table):
481 expr = './/tr[normalize-space()]'
482 return table.xpath(expr)
483
484 def _parse_tables(self, doc, match, kwargs):
485 pattern = match.pattern
486
487 # 1. check all descendants for the given pattern and only search tables
488 # 2. go up the tree until we find a table
489 query = '//table//*[re:test(text(), %r)]/ancestor::table'
490 xpath_expr = u(query) % pattern
491
492 # if any table attributes were given build an xpath expression to
493 # search for them
494 if kwargs:
495 xpath_expr += _build_xpath_expr(kwargs)
496
497 tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
498
499 if not tables:
500 raise ValueError("No tables found matching regex %r" % pattern)
501 return tables
502
503 def _build_doc(self):
504 """
505 Raises
506 ------
507 ValueError
508 * If a URL that lxml cannot parse is passed.
509
510 Exception
511 * Any other ``Exception`` thrown. For example, trying to parse a
512 URL that is syntactically correct on a machine with no internet
513 connection will fail.
514
515 See Also
516 --------
517 pandas.io.html._HtmlFrameParser._build_doc
518 """
519 from lxml.html import parse, fromstring, HTMLParser
520 from lxml.etree import XMLSyntaxError
521
522 parser = HTMLParser(recover=False)
523
524 try:
525 # try to parse the input in the simplest way
526 r = parse(self.io, parser=parser)
527
528 try:
529 r = r.getroot()
530 except AttributeError:
531 pass
532 except (UnicodeDecodeError, IOError):
533 # if the input is a blob of html goop
534 if not _is_url(self.io):
535 r = fromstring(self.io, parser=parser)
536
537 try:
538 r = r.getroot()
539 except AttributeError:
540 pass
541 else:
542 # not a url
543 scheme = parse_url(self.io).scheme
544 if scheme not in _valid_schemes:
545 # lxml can't parse it
546 msg = ('%r is not a valid url scheme, valid schemes are '
547 '%s') % (scheme, _valid_schemes)
548 raise ValueError(msg)
549 else:
550 # something else happened: maybe a faulty connection
551 raise
552 else:
553 if not hasattr(r, 'text_content'):
554 raise XMLSyntaxError("no text parsed from document", 0, 0, 0)
555 return r
556
557 def _parse_tbody(self, table):
558 return table.xpath('.//tbody')
559
560 def _parse_thead(self, table):
561 return table.xpath('.//thead')
562
563 def _parse_tfoot(self, table):
564 return table.xpath('.//tfoot')
565
566 def _parse_raw_thead(self, table):
567 expr = './/thead//th'
568 return [_remove_whitespace(x.text_content()) for x in
569 table.xpath(expr)]
570
571 def _parse_raw_tfoot(self, table):
572 expr = './/tfoot//th'
573 return [_remove_whitespace(x.text_content()) for x in
574 table.xpath(expr)]
575
576
577 def _expand_elements(body):
578 lens = Series(lmap(len, body))
579 lens_max = lens.max()
580 not_max = lens[lens != lens_max]
581
582 for ind, length in iteritems(not_max):
583 body[ind] += [np.nan] * (lens_max - length)
584
585
586 def _data_to_frame(data, header, index_col, skiprows, infer_types,
587 parse_dates, tupleize_cols, thousands):
588 head, body, _ = data # _ is footer which is rarely used: ignore for now
589
590 if head:
591 body = [head] + body
592
593 if header is None: # special case when a table has <th> elements
594 header = 0
595
596 # fill out elements of body that are "ragged"
597 _expand_elements(body)
598
599 tp = TextParser(body, header=header, index_col=index_col,
600 skiprows=_get_skiprows(skiprows),
601 parse_dates=parse_dates, tupleize_cols=tupleize_cols,
602 thousands=thousands)
603 df = tp.read()
604
605 if infer_types: # TODO: rm this code so infer_types has no effect in 0.14
606 df = df.convert_objects(convert_dates='coerce')
607 else:
608 df = df.applymap(text_type)
609 return df
610
611
612 _valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser,
613 'html5lib': _BeautifulSoupHtml5LibFrameParser,
614 'bs4': _BeautifulSoupHtml5LibFrameParser}
615
616
617 def _parser_dispatch(flavor):
618 """Choose the parser based on the input flavor.
619
620 Parameters
621 ----------
622 flavor : str
623 The type of parser to use. This must be a valid backend.
624
625 Returns
626 -------
627 cls : _HtmlFrameParser subclass
628 The parser class based on the requested input flavor.
629
630 Raises
631 ------
632 ValueError
633 * If `flavor` is not a valid backend.
634 ImportError
635 * If you do not have the requested `flavor`
636 """
637 valid_parsers = list(_valid_parsers.keys())
638 if flavor not in valid_parsers:
639 raise ValueError('%r is not a valid flavor, valid flavors are %s' %
640 (flavor, valid_parsers))
641
642 if flavor in ('bs4', 'html5lib'):
643 if not _HAS_HTML5LIB:
644 raise ImportError("html5lib not found please install it")
645 if not _HAS_BS4:
646 raise ImportError("bs4 not found please install it")
647 if bs4.__version__ == LooseVersion('4.2.0'):
648 raise ValueError("You're using a version"
649 " of BeautifulSoup4 (4.2.0) that has been"
650 " known to cause problems on certain"
651 " operating systems such as Debian. "
652 "Please install a version of"
653 " BeautifulSoup4 != 4.2.0, both earlier"
654 " and later releases will work.")
655 else:
656 if not _HAS_LXML:
657 raise ImportError("lxml not found please install it")
658 return _valid_parsers[flavor]
659
660
661 def _print_as_set(s):
662 return '{%s}' % ', '.join([com.pprint_thing(el) for el in s])
663
664
665 def _validate_flavor(flavor):
666 if flavor is None:
667 flavor = 'lxml', 'bs4'
668 elif isinstance(flavor, string_types):
669 flavor = flavor,
670 elif isinstance(flavor, collections.Iterable):
671 if not all(isinstance(flav, string_types) for flav in flavor):
672 raise TypeError('Object of type %r is not an iterable of strings' %
673 type(flavor).__name__)
674 else:
675 fmt = '{0!r}' if isinstance(flavor, string_types) else '{0}'
676 fmt += ' is not a valid flavor'
677 raise ValueError(fmt.format(flavor))
678
679 flavor = tuple(flavor)
680 valid_flavors = set(_valid_parsers)
681 flavor_set = set(flavor)
682
683 if not flavor_set & valid_flavors:
684 raise ValueError('%s is not a valid set of flavors, valid flavors are '
685 '%s' % (_print_as_set(flavor_set),
686 _print_as_set(valid_flavors)))
687 return flavor
688
689
690 def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
691 parse_dates, tupleize_cols, thousands, attrs):
692 flavor = _validate_flavor(flavor)
693 compiled_match = re.compile(match) # you can pass a compiled regex here
694
695 # hack around python 3 deleting the exception variable
696 retained = None
697 for flav in flavor:
698 parser = _parser_dispatch(flav)
699 p = parser(io, compiled_match, attrs)
700
701 try:
702 tables = p.parse_tables()
703 except Exception as caught:
704 retained = caught
705 else:
706 break
707 else:
708 raise_with_traceback(retained)
709
710 return [_data_to_frame(table, header, index_col, skiprows, infer_types,
711 parse_dates, tupleize_cols, thousands)
712 for table in tables]
713
714
715 def read_html(io, match='.+', flavor=None, header=None, index_col=None,
716 skiprows=None, infer_types=None, attrs=None, parse_dates=False,
717 tupleize_cols=False, thousands=','):
718 r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
719
720 Parameters
721 ----------
722 io : str or file-like
723 A URL, a file-like object, or a raw string containing HTML. Note that
724 lxml only accepts the http, ftp and file url protocols. If you have a
725 URL that starts with ``'https'`` you might try removing the ``'s'``.
726
727 match : str or compiled regular expression, optional
728 The set of tables containing text matching this regex or string will be
729 returned. Unless the HTML is extremely simple you will probably need to
730 pass a non-empty string here. Defaults to '.+' (match any non-empty
731 string). The default value will return all tables contained on a page.
732 This value is converted to a regular expression so that there is
733 consistent behavior between Beautiful Soup and lxml.
734
735 flavor : str or None, container of strings
736 The parsing engine to use. 'bs4' and 'html5lib' are synonymous with
737 each other, they are both there for backwards compatibility. The
738 default of ``None`` tries to use ``lxml`` to parse and if that fails it
739 falls back on ``bs4`` + ``html5lib``.
740
741 header : int or list-like or None, optional
742 The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
743 make the columns headers.
744
745 index_col : int or list-like or None, optional
746 The column (or list of columns) to use to create the index.
747
748 skiprows : int or list-like or slice or None, optional
749 0-based. Number of rows to skip after parsing the column integer. If a
750 sequence of integers or a slice is given, will skip the rows indexed by
751 that sequence. Note that a single element sequence means 'skip the nth
752 row' whereas an integer means 'skip n rows'.
753
754 infer_types : bool, optional
755        This option is deprecated in 0.13, and will have no effect in 0.14. It
756 defaults to ``True``.
757
758 attrs : dict or None, optional
759 This is a dictionary of attributes that you can pass to use to identify
760 the table in the HTML. These are not checked for validity before being
761 passed to lxml or Beautiful Soup. However, these attributes must be
762 valid HTML table attributes to work correctly. For example, ::
763
764 attrs = {'id': 'table'}
765
766 is a valid attribute dictionary because the 'id' HTML tag attribute is
767 a valid HTML attribute for *any* HTML tag as per `this document
768 <http://www.w3.org/TR/html-markup/global-attributes.html>`__. ::
769
770 attrs = {'asdf': 'table'}
771
772 is *not* a valid attribute dictionary because 'asdf' is not a valid
773 HTML attribute even if it is a valid XML attribute. Valid HTML 4.01
774 table attributes can be found `here
775 <http://www.w3.org/TR/REC-html40/struct/tables.html#h-11.2>`__. A
776 working draft of the HTML 5 spec can be found `here
777 <http://www.w3.org/TR/html-markup/table.html>`__. It contains the
778 latest information on table attributes for the modern web.
779
780 parse_dates : bool, optional
781 See :func:`~pandas.io.parsers.read_csv` for more details. In 0.13, this
782 parameter can sometimes interact strangely with ``infer_types``. If you
783 get a large number of ``NaT`` values in your results, consider passing
784 ``infer_types=False`` and manually converting types afterwards.
785
786 tupleize_cols : bool, optional
787 If ``False`` try to parse multiple header rows into a
788 :class:`~pandas.MultiIndex`, otherwise return raw tuples. Defaults to
789 ``False``.
790
791 thousands : str, optional
792 Separator to use to parse thousands. Defaults to ``','``.
793
794 Returns
795 -------
796 dfs : list of DataFrames
797
798 Notes
799 -----
800 Before using this function you should read the :ref:`gotchas about the
801 HTML parsing libraries <html-gotchas>`.
802
803 Expect to do some cleanup after you call this function. For example, you
804 might need to manually assign column names if the column names are
805 converted to NaN when you pass the `header=0` argument. We try to assume as
806 little as possible about the structure of the table and push the
807 idiosyncrasies of the HTML contained in the table to the user.
808
809 This function searches for ``<table>`` elements and only for ``<tr>``
810 and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>``
811 element in the table. ``<td>`` stands for "table data".
812
813 Similar to :func:`~pandas.read_csv` the `header` argument is applied
814 **after** `skiprows` is applied.
815
816 This function will *always* return a list of :class:`DataFrame` *or*
817 it will fail, e.g., it will *not* return an empty list.
818
819 Examples
820 --------
821 See the :ref:`read_html documentation in the IO section of the docs
822 <io.read_html>` for some examples of reading in HTML tables.
823
824 See Also
825 --------
826 pandas.io.parsers.read_csv
827 """
828 if infer_types is not None:
829 warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
830 else:
831 infer_types = True # TODO: remove effect of this in 0.14
832
833 # Type check here. We don't want to parse only to fail because of an
834 # invalid value of an integer skiprows.
835 if isinstance(skiprows, numbers.Integral) and skiprows < 0:
836 raise ValueError('cannot skip rows starting from the end of the '
837 'data (you passed a negative value)')
838 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
839 parse_dates, tupleize_cols, thousands, attrs)
840
[end of pandas/io/html.py]
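For context on how the parsers above are reached, `read_html` is the only public entry point in this module; it accepts a URL, a file-like object, or a raw HTML string and always returns a list of DataFrames. A small usage sketch follows (it assumes one of the documented backends, lxml or bs4 + html5lib, is installed):

```python
import pandas as pd

html = """
<table>
  <thead><tr><th>name</th><th>value</th></tr></thead>
  <tbody>
    <tr><td>a</td><td>1</td></tr>
    <tr><td>b</td><td>2</td></tr>
  </tbody>
</table>
"""

# match defaults to '.+', so every table on the "page" is returned;
# header=0 promotes the first parsed row to column names.
dfs = pd.read_html(html, header=0)
print(len(dfs), list(dfs[0].columns))   # -> 1 ['name', 'value']
```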
[start of pandas/util/print_versions.py]
1 import os
2 import platform
3 import sys
4 import struct
5 import subprocess
6 import codecs
7
8
9 def get_sys_info():
10 "Returns system information as a dict"
11
12 blob = []
13
14 # get full commit hash
15 commit = None
16 if os.path.isdir(".git") and os.path.isdir("pandas"):
17 try:
18 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "),
19 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
20 so, serr = pipe.communicate()
21 except:
22 pass
23 else:
24 if pipe.returncode == 0:
25 commit = so
26 try:
27 commit = so.decode('utf-8')
28 except ValueError:
29 pass
30 commit = commit.strip().strip('"')
31
32 blob.append(('commit', commit))
33
34 try:
35 sysname, nodename, release, version, machine, processor = platform.uname(
36 )
37 blob.extend([
38 ("python", "%d.%d.%d.%s.%s" % sys.version_info[:]),
39 ("python-bits", struct.calcsize("P") * 8),
40 ("OS", "%s" % (sysname)),
41 ("OS-release", "%s" % (release)),
42 # ("Version", "%s" % (version)),
43 ("machine", "%s" % (machine)),
44 ("processor", "%s" % (processor)),
45 ("byteorder", "%s" % sys.byteorder),
46 ("LC_ALL", "%s" % os.environ.get('LC_ALL', "None")),
47 ("LANG", "%s" % os.environ.get('LANG', "None")),
48
49 ])
50 except:
51 pass
52
53 return blob
54
55
56 def show_versions(as_json=False):
57 import imp
58 sys_info = get_sys_info()
59
60 deps = [
61 # (MODULE_NAME, f(mod) -> mod version)
62 ("pandas", lambda mod: mod.__version__),
63 ("Cython", lambda mod: mod.__version__),
64 ("numpy", lambda mod: mod.version.version),
65 ("scipy", lambda mod: mod.version.version),
66 ("statsmodels", lambda mod: mod.__version__),
67 ("IPython", lambda mod: mod.__version__),
68 ("sphinx", lambda mod: mod.__version__),
69 ("patsy", lambda mod: mod.__version__),
70 ("scikits.timeseries", lambda mod: mod.__version__),
71 ("dateutil", lambda mod: mod.__version__),
72 ("pytz", lambda mod: mod.VERSION),
73 ("bottleneck", lambda mod: mod.__version__),
74 ("tables", lambda mod: mod.__version__),
75 ("numexpr", lambda mod: mod.__version__),
76 ("matplotlib", lambda mod: mod.__version__),
77 ("openpyxl", lambda mod: mod.__version__),
78 ("xlrd", lambda mod: mod.__VERSION__),
79 ("xlwt", lambda mod: mod.__VERSION__),
80 ("xlsxwriter", lambda mod: mod.__version__),
81 ("sqlalchemy", lambda mod: mod.__version__),
82 ("lxml", lambda mod: mod.etree.__version__),
83 ("bs4", lambda mod: mod.__version__),
84 ("html5lib", lambda mod: mod.__version__),
85 ("bq", lambda mod: mod._VersionNumber()),
86 ("apiclient", lambda mod: mod.__version__),
87 ]
88
89 deps_blob = list()
90 for (modname, ver_f) in deps:
91 try:
92 mod = imp.load_module(modname, *imp.find_module(modname))
93 ver = ver_f(mod)
94 deps_blob.append((modname, ver))
95 except:
96 deps_blob.append((modname, None))
97
98 if (as_json):
99 # 2.6-safe
100 try:
101 import json
102 except:
103 import simplejson as json
104
105 j = dict(system=dict(sys_info), dependencies=dict(deps_blob))
106
107 if as_json == True:
108 print(j)
109 else:
110 with codecs.open(as_json, "wb", encoding='utf8') as f:
111 json.dump(j, f, indent=2)
112
113 else:
114
115 print("\nINSTALLED VERSIONS")
116 print("------------------")
117
118 for k, stat in sys_info:
119 print("%s: %s" % (k, stat))
120
121 print("")
122 for k, stat in deps_blob:
123 print("%s: %s" % (k, stat))
124
125
126 def main():
127 # optparse is 2.6-safe
128 from optparse import OptionParser
129 parser = OptionParser()
130 parser.add_option("-j", "--json", metavar="FILE", nargs=1,
131 help="Save output as JSON into file, pass in '-' to output to stdout")
132
133 (options, args) = parser.parse_args()
134
135 if options.json == "-":
136 options.json = True
137
138 show_versions(as_json=options.json)
139
140 return 0
141
142 if __name__ == "__main__":
143 sys.exit(main())
144
[end of pandas/util/print_versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
da59b112e73dc4139f09f85a4203c67795a72634
|
BUG: in HTMLFormatter._write_header(), str() fails on column names in unicode
## IPython snippet reproducing the problem
``` python
import pandas as pd
import numpy as np
df = pd.DataFrame({u'clé1': [u'a', u'a', u'b', u'b', u'a'],
u'clé2': [u'1er', u'2ème', u'1er', u'2ème', u'1er'],
'données1': np.random.randn(5),
'données2': np.random.randn(5)})
df.pivot_table(rows=[u'clé1'], cols=[u'clé2'])
```
## INSTALLED VERSIONS
Python: 2.7.5.final.0
OS: Linux
Release: 2.6.32-358.14.1.el6.x86_64
Processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.13.0
Cython: 0.19.2
Numpy: 1.8.0
Scipy: Not installed
statsmodels: Not installed
patsy: Not installed
scikits.timeseries: Not installed
dateutil: 2.2
pytz: 2013.9
bottleneck: Not installed
PyTables: Not Installed
numexpr: Not Installed
matplotlib: Not installed
openpyxl: Not installed
xlrd: 0.9.2
xlwt: 0.7.5
xlsxwriter: Not installed
sqlalchemy: Not installed
lxml: 3.2.5
bs4: Not installed
html5lib: Not installed
bigquery: Not installed
apiclient: Not installed
## Expected behavior
HTML formatted table.
## Seen instead
Warning message in IPython:
```
WARNING: Exception in text/html formatter: 'ascii' codec can't encode character u'\xe9' in position 2: ordinal not in range(128)
```
|
2014-01-27T02:12:49Z
|
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -155,6 +155,7 @@ Bug Fixes
- Regression in ``.get(None)`` indexing from 0.12 (:issue:`5652`)
- Subtle ``iloc`` indexing bug, surfaced in (:issue:`6059`)
- Bug with insert of strings into DatetimeIndex (:issue:`5818`)
+ - Fixed unicode bug in to_html/HTML repr (:issue:`6098`)
pandas 0.13.0
-------------
diff --git a/pandas/core/format.py b/pandas/core/format.py
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -767,7 +767,7 @@ def _column_header():
levels)):
name = self.columns.names[lnum]
row = [''] * (row_levels - 1) + ['' if name is None
- else str(name)]
+ else com.pprint_thing(name)]
tags = {}
j = len(row)
</patch>
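The fix is a single substitution: the HTML writer builds the header row with `com.pprint_thing(name)` instead of `str(name)`, so a unicode column name such as `u'clé1'` is no longer forced through Python 2's implicit ASCII codec. A rough sketch of the underlying idea (not pandas' actual `pprint_thing`, just the text-coercion pattern it relies on):

```python
# -*- coding: utf-8 -*-
import sys


def to_text(obj):
    """Always return text; never let a unicode object hit an implicit
    ASCII encode, which is what bare str() does on Python 2."""
    if sys.version_info[0] == 2:
        unicode_type = unicode  # noqa: F821 -- only defined on Python 2
        if isinstance(obj, unicode_type):
            return obj
        return str(obj).decode("utf-8", "replace")
    return str(obj)


print(to_text(u"clé1"))  # works on both Python 2 and 3
```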
|
[]
|
[]
| ||||
pantsbuild__pants-16635
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`packed` layout PEXes fail to `run` with `use_deprecated_pex_binary_run_semantics=false`
**Describe the bug**
With `use_deprecated_pex_binary_run_semantics=false`, a `layout="packed"` PEX will fail to run with:
```
Exception: Error executing interactive process: Permission denied (os error 13)
```
**Pants version**
`2.13.x` and `main`
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 We release to [PyPI](https://pypi.org/pypi)
20 [](https://pypi.org/pypi/pantsbuild.pants)
21 [](https://pypi.org/pypi/pantsbuild.pants)
22
23 # Requirements
24
25 To run Pants, you need:
26
27 * Linux or macOS.
28 * Python 3.7+ discoverable on your `PATH`.
29 * A C compiler, system headers and Python headers (to compile native Python modules).
30 * Internet access (so that Pants can fully bootstrap itself).
31
[end of README.md]
[start of build-support/bin/cache_comparison.py]
1 # Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import argparse
7 import json
8 import shlex
9 import shutil
10 import subprocess
11 import sys
12 from time import time
13
14
15 def create_parser() -> argparse.ArgumentParser:
16 parser = argparse.ArgumentParser(
17 description=(
18 "A (remote) cache comparison tool, which automates testing a single build of Pants (in "
19 "an isolated cache namespace) against a range of source commits."
20 )
21 )
22
23 parser.add_argument(
24 "-a",
25 "--args",
26 default="check lint test ::",
27 help="The arguments to test each source commit with.",
28 )
29 parser.add_argument(
30 "-b",
31 "--build-commit",
32 help="The commit to build a Pants PEX from.",
33 )
34 parser.add_argument(
35 "-s",
36 "--source-diffspec",
37 help=(
38 "The diffspec (e.g.: `main~10..main`) which selects the Pants-repo source commits "
39 "to run each Pants build against."
40 ),
41 )
42 parser.add_argument(
43 "--source-diffspec-step",
44 default=1,
45 help="The number of commits to step by within `--source-diffspec`.",
46 )
47 return parser
48
49
50 def main() -> None:
51 args = create_parser().parse_args()
52 build_commit = args.build_commit
53 source_commits = commits_in_range(args.source_diffspec, int(args.source_diffspec_step))
54 timings = timings_for_build(
55 shlex.split(args.args),
56 build_commit,
57 source_commits,
58 )
59 json.dump(timings, indent=2, fp=sys.stdout)
60
61
62 Commit = str
63
64
65 TimeInSeconds = float
66
67
68 def commits_in_range(diffspec: str, step: int) -> list[Commit]:
69 all_commits = list(
70 subprocess.run(
71 ["git", "rev-list", "--reverse", diffspec],
72 stdout=subprocess.PIPE,
73 check=True,
74 )
75 .stdout.decode()
76 .splitlines()
77 )
78 return all_commits[::step]
79
80
81 def timings_for_build(
82 args: list[str], build_commit: Commit, source_commits: list[Commit]
83 ) -> dict[Commit, TimeInSeconds]:
84 """Build a PEX from the build commit, and then collect timings for each source commit."""
85 # Build a PEX for the commit, then ensure that `pantsd` is not running.
86 checkout(build_commit)
87 run(["package", "src/python/pants/bin:pants"], use_pex=False)
88 shutil.rmtree(".pids")
89 # Then collect a runtime for each commit in the range.
90 cache_namespace = f"cache-comparison-{build_commit}-{time()}"
91 return {
92 source_commit: timing_for_commit(source_commit, args, cache_namespace)
93 for source_commit in source_commits
94 }
95
96
97 def timing_for_commit(commit: Commit, args: list[str], cache_namespace: str) -> TimeInSeconds:
98 # Checkout the commit, and ensure that the native code is built by running the `pants` script.
99 checkout(commit)
100 run(["--no-pantsd", "--version"], use_pex=False)
101
102 # Then time the actual run with the PEX.
103 start = time()
104 run(args, cache_namespace=cache_namespace)
105 return time() - start
106
107
108 def checkout(commit: Commit) -> None:
109 subprocess.run(["git", "checkout", commit], check=True)
110
111
112 def run(args: list[str], *, cache_namespace: str | None = None, use_pex: bool = True) -> None:
113 cmd = "dist/src.python.pants.bin/pants.pex" if use_pex else "./pants"
114 subprocess.run(
115 [cmd, *pants_options(cache_namespace), *args],
116 check=True,
117 )
118
119
120 def pants_options(cache_namespace: str | None = None) -> list[str]:
121 return [
122 "--no-local-cache",
123 "--pants-config-files=pants.ci.toml",
124 *(
125 []
126 if cache_namespace is None
127 else [f"--process-execution-cache-namespace={cache_namespace}"]
128 ),
129 ]
130
131
132 if __name__ == "__main__":
133 main()
134
[end of build-support/bin/cache_comparison.py]
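Tying the argparse options above together, a typical run builds one Pants PEX from `--build-commit` and then times `--args` against each commit selected by `--source-diffspec`. A hypothetical invocation driven from Python is sketched below; the commit id and diffspec are placeholders, not values from this repository.

```python
import subprocess

subprocess.run(
    [
        "python", "build-support/bin/cache_comparison.py",
        "--build-commit", "abc1234",            # commit to build the Pants PEX from
        "--source-diffspec", "main~10..main",   # source commits to benchmark
        "--source-diffspec-step", "2",          # benchmark every second commit
        "--args", "check lint test ::",         # goals to time for each commit
    ],
    check=True,
)
```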
[start of src/python/pants/backend/python/util_rules/pex.py]
1 # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import dataclasses
7 import json
8 import logging
9 import os
10 import shlex
11 from dataclasses import dataclass
12 from pathlib import PurePath
13 from textwrap import dedent
14 from typing import Iterable, Iterator, Mapping
15
16 import packaging.specifiers
17 import packaging.version
18 from pkg_resources import Requirement
19
20 from pants.backend.python.subsystems.repos import PythonRepos
21 from pants.backend.python.subsystems.setup import PythonSetup
22 from pants.backend.python.target_types import (
23 MainSpecification,
24 PexCompletePlatformsField,
25 PexLayout,
26 )
27 from pants.backend.python.target_types import PexPlatformsField as PythonPlatformsField
28 from pants.backend.python.util_rules import pex_cli, pex_requirements
29 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints
30 from pants.backend.python.util_rules.pex_cli import PexCliProcess, PexPEX
31 from pants.backend.python.util_rules.pex_environment import (
32 CompletePexEnvironment,
33 PexEnvironment,
34 PexRuntimeEnvironment,
35 PythonExecutable,
36 )
37 from pants.backend.python.util_rules.pex_requirements import (
38 EntireLockfile,
39 LoadedLockfile,
40 LoadedLockfileRequest,
41 Lockfile,
42 )
43 from pants.backend.python.util_rules.pex_requirements import (
44 PexRequirements as PexRequirements, # Explicit re-export.
45 )
46 from pants.backend.python.util_rules.pex_requirements import validate_metadata
47 from pants.core.target_types import FileSourceField
48 from pants.core.util_rules.system_binaries import BashBinary
49 from pants.engine.addresses import UnparsedAddressInputs
50 from pants.engine.collection import Collection, DeduplicatedCollection
51 from pants.engine.engine_aware import EngineAwareParameter
52 from pants.engine.fs import EMPTY_DIGEST, AddPrefix, CreateDigest, Digest, FileContent, MergeDigests
53 from pants.engine.internals.native_engine import Snapshot
54 from pants.engine.internals.selectors import MultiGet
55 from pants.engine.platform import Platform
56 from pants.engine.process import Process, ProcessCacheScope, ProcessResult
57 from pants.engine.rules import Get, collect_rules, rule
58 from pants.engine.target import HydratedSources, HydrateSourcesRequest, SourcesField, Targets
59 from pants.util.frozendict import FrozenDict
60 from pants.util.logging import LogLevel
61 from pants.util.meta import frozen_after_init
62 from pants.util.strutil import pluralize, softwrap
63
64 logger = logging.getLogger(__name__)
65
66
67 class PexPlatforms(DeduplicatedCollection[str]):
68 sort_input = True
69
70 @classmethod
71 def create_from_platforms_field(cls, field: PythonPlatformsField) -> PexPlatforms:
72 return cls(field.value or ())
73
74 def generate_pex_arg_list(self) -> list[str]:
75 args = []
76 for platform in self:
77 args.extend(["--platform", platform])
78 return args
79
80
81 class CompletePlatforms(DeduplicatedCollection[str]):
82 sort_input = True
83
84 def __init__(self, iterable: Iterable[str] = (), *, digest: Digest = EMPTY_DIGEST):
85 super().__init__(iterable)
86 self._digest = digest
87
88 @classmethod
89 def from_snapshot(cls, snapshot: Snapshot) -> CompletePlatforms:
90 return cls(snapshot.files, digest=snapshot.digest)
91
92 @property
93 def digest(self) -> Digest:
94 return self._digest
95
96 def generate_pex_arg_list(self) -> Iterator[str]:
97 for path in self:
98 yield "--complete-platform"
99 yield path
100
101
102 @rule
103 async def digest_complete_platforms(
104 complete_platforms: PexCompletePlatformsField,
105 ) -> CompletePlatforms:
106 original_file_targets = await Get(
107 Targets,
108 UnparsedAddressInputs,
109 complete_platforms.to_unparsed_address_inputs(),
110 )
111 original_files_sources = await MultiGet(
112 Get(
113 HydratedSources,
114 HydrateSourcesRequest(tgt.get(SourcesField), for_sources_types=(FileSourceField,)),
115 )
116 for tgt in original_file_targets
117 )
118 snapshot = await Get(
119 Snapshot, MergeDigests(sources.snapshot.digest for sources in original_files_sources)
120 )
121 return CompletePlatforms.from_snapshot(snapshot)
122
123
124 @frozen_after_init
125 @dataclass(unsafe_hash=True)
126 class PexRequest(EngineAwareParameter):
127 output_filename: str
128 internal_only: bool
129 layout: PexLayout
130 python: PythonExecutable | None
131 requirements: PexRequirements | EntireLockfile
132 interpreter_constraints: InterpreterConstraints
133 platforms: PexPlatforms
134 complete_platforms: CompletePlatforms
135 sources: Digest | None
136 additional_inputs: Digest
137 main: MainSpecification | None
138 additional_args: tuple[str, ...]
139 pex_path: tuple[Pex, ...]
140 description: str | None = dataclasses.field(compare=False)
141
142 def __init__(
143 self,
144 *,
145 output_filename: str,
146 internal_only: bool,
147 layout: PexLayout | None = None,
148 python: PythonExecutable | None = None,
149 requirements: PexRequirements | EntireLockfile = PexRequirements(),
150 interpreter_constraints=InterpreterConstraints(),
151 platforms=PexPlatforms(),
152 complete_platforms=CompletePlatforms(),
153 sources: Digest | None = None,
154 additional_inputs: Digest | None = None,
155 main: MainSpecification | None = None,
156 additional_args: Iterable[str] = (),
157 pex_path: Iterable[Pex] = (),
158 description: str | None = None,
159 ) -> None:
160 """A request to create a PEX from its inputs.
161
162 :param output_filename: The name of the built Pex file, which typically should end in
163 `.pex`.
164 :param internal_only: Whether we ever materialize the Pex and distribute it directly
165 to end users, such as with the `binary` goal. Typically, instead, the user never
166 directly uses the Pex, e.g. with `lint` and `test`. If True, we will use a Pex setting
167 that results in faster build time but compatibility with fewer interpreters at runtime.
168 :param layout: The filesystem layout to create the PEX with.
169 :param python: A particular PythonExecutable to use, which must match any relevant
170 interpreter_constraints.
171 :param requirements: The requirements that the PEX should contain.
172 :param interpreter_constraints: Any constraints on which Python versions may be used.
173 :param platforms: Which abbreviated platforms should be supported. Setting this value will
174 cause interpreter constraints to not be used at PEX build time because platforms already
175 constrain the valid Python versions, e.g. by including `cp36m` in the platform string.
176 Unfortunately this also causes interpreter constraints to not be embedded in the built
177 PEX for use at runtime which can lead to problems.
178 See: https://github.com/pantsbuild/pants/issues/13904.
179 :param complete_platforms: Which complete platforms should be supported. Setting this value
180 will cause interpreter constraints to not be used at PEX build time because complete
181 platforms completely constrain the valid Python versions. Unfortunately this also causes
182 interpreter constraints to not be embedded in the built PEX for use at runtime which can
183 lead to problems. See: https://github.com/pantsbuild/pants/issues/13904.
184 :param sources: Any source files that should be included in the Pex.
185 :param additional_inputs: Any inputs that are not source files and should not be included
186 directly in the Pex, but should be present in the environment when building the Pex.
187 :param main: The main for the built Pex, equivalent to Pex's `-e` or '-c' flag. If
188 left off, the Pex will open up as a REPL.
189 :param additional_args: Any additional Pex flags.
190 :param pex_path: Pex files to add to the PEX_PATH.
191 :param description: A human-readable description to render in the dynamic UI when building
192 the Pex.
193 """
194 self.output_filename = output_filename
195 self.internal_only = internal_only
196 # Use any explicitly requested layout, or Packed for internal PEXes (which is a much
197 # friendlier layout for the CAS than Zipapp.)
198 self.layout = layout or (PexLayout.PACKED if internal_only else PexLayout.ZIPAPP)
199 self.python = python
200 self.requirements = requirements
201 self.interpreter_constraints = interpreter_constraints
202 self.platforms = platforms
203 self.complete_platforms = complete_platforms
204 self.sources = sources
205 self.additional_inputs = additional_inputs or EMPTY_DIGEST
206 self.main = main
207 self.additional_args = tuple(additional_args)
208 self.pex_path = tuple(pex_path)
209 self.description = description
210
211 self.__post_init__()
212
213 def __post_init__(self):
214 if self.internal_only and self.platforms:
215 raise ValueError(
216 softwrap(
217 f"""
218 Internal only PEXes can only constrain interpreters with interpreter_constraints.
219 Given platform constraints {self.platforms} for internal only pex request:
220 {self}.
221 """
222 )
223 )
224 if self.internal_only and self.complete_platforms:
225 raise ValueError(
226 softwrap(
227 f"""
228 Internal only PEXes can only constrain interpreters with interpreter_constraints.
229 Given complete_platform constraints {self.complete_platforms} for internal only
230 pex request: {self}.
231 """
232 )
233 )
234 if self.python and self.platforms:
235 raise ValueError(
236 softwrap(
237 f"""
238 Only one of platforms or a specific interpreter may be set. Got
239 both {self.platforms} and {self.python}.
240 """
241 )
242 )
243 if self.python and self.complete_platforms:
244 raise ValueError(
245 softwrap(
246 f"""
247 Only one of complete_platforms or a specific interpreter may be set. Got
248 both {self.complete_platforms} and {self.python}.
249 """
250 )
251 )
252 if self.python and self.interpreter_constraints:
253 raise ValueError(
254 softwrap(
255 f"""
256 Only one of interpreter_constraints or a specific interpreter may be set. Got
257 both {self.interpreter_constraints} and {self.python}.
258 """
259 )
260 )
261
262 def debug_hint(self) -> str:
263 return self.output_filename
264
265
266 @dataclass(frozen=True)
267 class OptionalPexRequest:
268 maybe_pex_request: PexRequest | None
269
270
271 @dataclass(frozen=True)
272 class Pex:
273 """Wrapper for a digest containing a pex file created with some filename."""
274
275 digest: Digest
276 name: str
277 python: PythonExecutable | None
278
279
280 @dataclass(frozen=True)
281 class OptionalPex:
282 maybe_pex: Pex | None
283
284
285 @rule(desc="Find Python interpreter for constraints", level=LogLevel.DEBUG)
286 async def find_interpreter(
287 interpreter_constraints: InterpreterConstraints, pex_runtime_env: PexRuntimeEnvironment
288 ) -> PythonExecutable:
289 formatted_constraints = " OR ".join(str(constraint) for constraint in interpreter_constraints)
290 result = await Get(
291 ProcessResult,
292 PexCliProcess(
293 description=f"Find interpreter for constraints: {formatted_constraints}",
294 subcommand=(),
295 # Here, we run the Pex CLI with no requirements, which just selects an interpreter.
296 # Normally, this would start an isolated repl. By passing `--`, we force the repl to
297 # instead act as an interpreter (the selected one) and tell us about itself. The upshot
298 # is we run the Pex interpreter selection logic unperturbed but without resolving any
299 # distributions.
300 extra_args=(
301 *interpreter_constraints.generate_pex_arg_list(),
302 "--",
303 "-c",
304 # N.B.: The following code snippet must be compatible with Python 2.7 and
305 # Python 3.5+.
306 #
307 # When hashing, we pick 8192 for efficiency of reads and fingerprint updates
308 # (writes) since it's a common OS buffer size and an even multiple of the
309 # hash block size.
310 dedent(
311 """\
312 import hashlib, os, sys
313
314 python = os.path.realpath(sys.executable)
315 print(python)
316
317 hasher = hashlib.sha256()
318 with open(python, "rb") as fp:
319 for chunk in iter(lambda: fp.read(8192), b""):
320 hasher.update(chunk)
321 print(hasher.hexdigest())
322 """
323 ),
324 ),
325 level=LogLevel.DEBUG,
326 # NB: We want interpreter discovery to re-run fairly frequently
327 # (PER_RESTART_SUCCESSFUL), but not on every run of Pants (NEVER, which is effectively
328 # per-Session). See #10769 for a solution that is less of a tradeoff.
329 cache_scope=ProcessCacheScope.PER_RESTART_SUCCESSFUL,
330 ),
331 )
332 path, fingerprint = result.stdout.decode().strip().splitlines()
333
334 if pex_runtime_env.verbosity > 0:
335 log_output = result.stderr.decode()
336 if log_output:
337 logger.info("%s", log_output)
338
339 return PythonExecutable(path=path, fingerprint=fingerprint)
340
341
342 @dataclass(frozen=True)
343 class BuildPexResult:
344 result: ProcessResult
345 pex_filename: str
346 digest: Digest
347 python: PythonExecutable | None
348
349 def create_pex(self) -> Pex:
350 return Pex(digest=self.digest, name=self.pex_filename, python=self.python)
351
352
353 @rule(level=LogLevel.DEBUG)
354 async def build_pex(
355 request: PexRequest,
356 python_setup: PythonSetup,
357 python_repos: PythonRepos,
358 platform: Platform,
359 pex_runtime_env: PexRuntimeEnvironment,
360 ) -> BuildPexResult:
361 """Returns a PEX with the given settings."""
362 argv = [
363 "--output-file",
364 request.output_filename,
365 "--no-emit-warnings",
366 *python_setup.manylinux_pex_args,
367 *request.additional_args,
368 ]
369
370 python: PythonExecutable | None = None
371
372 # NB: If `--platform` is specified, this signals that the PEX should not be built locally.
373 # `--interpreter-constraint` only makes sense in the context of building locally. These two
374 # flags are mutually exclusive. See https://github.com/pantsbuild/pex/issues/957.
375 if request.platforms or request.complete_platforms:
376 # Note that this means that this is not an internal-only pex.
377 # TODO(#9560): consider validating that these platforms are valid with the interpreter
378 # constraints.
379 argv.extend(request.platforms.generate_pex_arg_list())
380 argv.extend(request.complete_platforms.generate_pex_arg_list())
381 elif request.python:
382 python = request.python
383 elif request.internal_only:
384 # NB: If it's an internal_only PEX, we do our own lookup of the interpreter based on the
385 # interpreter constraints, and then will run the PEX with that specific interpreter. We
386 # will have already validated that there were no platforms.
387 python = await Get(
388 PythonExecutable, InterpreterConstraints, request.interpreter_constraints
389 )
390 else:
391 # `--interpreter-constraint` options are mutually exclusive with the `--python` option,
392 # so we only specify them if we have not already located a concrete Python.
393 argv.extend(request.interpreter_constraints.generate_pex_arg_list())
394
395 if python:
396 argv.extend(["--python", python.path])
397
398 if request.main is not None:
399 argv.extend(request.main.iter_pex_args())
400
401 # TODO(John Sirois): Right now any request requirements will shadow corresponding pex path
402 # requirements, which could lead to problems. Support shading python binaries.
403 # See: https://github.com/pantsbuild/pants/issues/9206
404 if request.pex_path:
405 argv.extend(["--pex-path", ":".join(pex.name for pex in request.pex_path)])
406
407 source_dir_name = "source_files"
408 argv.append(f"--sources-directory={source_dir_name}")
409 sources_digest_as_subdir = await Get(
410 Digest, AddPrefix(request.sources or EMPTY_DIGEST, source_dir_name)
411 )
412
413 # Include any additional arguments and input digests required by the requirements.
414 requirements_digests = []
415 pex_lock_resolver_args = [*python_repos.pex_args]
416 pip_resolver_args = [*python_repos.pex_args, "--resolver-version", "pip-2020-resolver"]
417 if isinstance(request.requirements, EntireLockfile):
418 lockfile = await Get(LoadedLockfile, LoadedLockfileRequest(request.requirements.lockfile))
419 concurrency_available = lockfile.requirement_estimate
420 requirements_digests.append(lockfile.lockfile_digest)
421 if lockfile.is_pex_native:
422 argv.extend(["--lock", lockfile.lockfile_path])
423 argv.extend(pex_lock_resolver_args)
424 else:
425 # We use pip to resolve a requirements.txt pseudo-lockfile, possibly with hashes.
426 argv.extend(["--requirement", lockfile.lockfile_path, "--no-transitive"])
427 argv.extend(pip_resolver_args)
428 if lockfile.metadata and request.requirements.complete_req_strings:
429 validate_metadata(
430 lockfile.metadata,
431 request.interpreter_constraints,
432 lockfile.original_lockfile,
433 request.requirements.complete_req_strings,
434 python_setup,
435 )
436 else:
437 # TODO: This is not the best heuristic for available concurrency, since the
438 # requirements almost certainly have transitive deps which also need building, but it
439 # is better than using something hardcoded.
440 concurrency_available = len(request.requirements.req_strings)
441 argv.extend(request.requirements.req_strings)
442
443 if isinstance(request.requirements.from_superset, Pex):
444 repository_pex = request.requirements.from_superset
445 argv.extend(["--pex-repository", repository_pex.name])
446 requirements_digests.append(repository_pex.digest)
447 elif isinstance(request.requirements.from_superset, LoadedLockfile):
448 loaded_lockfile = request.requirements.from_superset
449 # NB: This is also validated in the constructor.
450 assert loaded_lockfile.is_pex_native
451 if request.requirements.req_strings:
452 requirements_digests.append(loaded_lockfile.lockfile_digest)
453 argv.extend(["--lock", loaded_lockfile.lockfile_path])
454 argv.extend(pex_lock_resolver_args)
455
456 if loaded_lockfile.metadata:
457 validate_metadata(
458 loaded_lockfile.metadata,
459 request.interpreter_constraints,
460 loaded_lockfile.original_lockfile,
461 request.requirements.req_strings,
462 python_setup,
463 )
464 else:
465 assert request.requirements.from_superset is None
466
467 # We use pip to perform a normal resolve.
468 argv.extend(pip_resolver_args)
469 if request.requirements.constraints_strings:
470 constraints_file = "__constraints.txt"
471 constraints_content = "\n".join(request.requirements.constraints_strings)
472 requirements_digests.append(
473 await Get(
474 Digest,
475 CreateDigest([FileContent(constraints_file, constraints_content.encode())]),
476 )
477 )
478 argv.extend(["--constraints", constraints_file])
479
480 merged_digest = await Get(
481 Digest,
482 MergeDigests(
483 (
484 request.complete_platforms.digest,
485 sources_digest_as_subdir,
486 request.additional_inputs,
487 *requirements_digests,
488 *(pex.digest for pex in request.pex_path),
489 )
490 ),
491 )
492
493 argv.extend(["--layout", request.layout.value])
494 output_files: Iterable[str] | None = None
495 output_directories: Iterable[str] | None = None
496 if PexLayout.ZIPAPP == request.layout:
497 output_files = [request.output_filename]
498 else:
499 output_directories = [request.output_filename]
500
501 process = await Get(
502 Process,
503 PexCliProcess(
504 python=python,
505 subcommand=(),
506 extra_args=argv,
507 additional_input_digest=merged_digest,
508 description=_build_pex_description(request),
509 output_files=output_files,
510 output_directories=output_directories,
511 concurrency_available=concurrency_available,
512 ),
513 )
514
515 process = dataclasses.replace(process, platform=platform)
516
517 # NB: Building a Pex is platform dependent, so in order to get a PEX that we can use locally
518 # without cross-building, we specify that our PEX command should be run on the current local
519 # platform.
520 result = await Get(ProcessResult, Process, process)
521
522 if pex_runtime_env.verbosity > 0:
523 log_output = result.stderr.decode()
524 if log_output:
525 logger.info("%s", log_output)
526
527 digest = (
528 await Get(
529 Digest, MergeDigests((result.output_digest, *(pex.digest for pex in request.pex_path)))
530 )
531 if request.pex_path
532 else result.output_digest
533 )
534
535 return BuildPexResult(
536 result=result, pex_filename=request.output_filename, digest=digest, python=python
537 )
538
539
540 def _build_pex_description(request: PexRequest) -> str:
541 if request.description:
542 return request.description
543
544 if isinstance(request.requirements, EntireLockfile):
545 lockfile = request.requirements.lockfile
546 if isinstance(lockfile, Lockfile):
547 desc_suffix = f"from {lockfile.file_path}"
548 else:
549 desc_suffix = f"from {lockfile.file_content.path}"
550 else:
551 if not request.requirements.req_strings:
552 return f"Building {request.output_filename}"
553 elif isinstance(request.requirements.from_superset, Pex):
554 repo_pex = request.requirements.from_superset.name
555 return softwrap(
556 f"""
557 Extracting {pluralize(len(request.requirements.req_strings), 'requirement')}
558 to build {request.output_filename} from {repo_pex}:
559 {', '.join(request.requirements.req_strings)}
560 """
561 )
562 elif isinstance(request.requirements.from_superset, LoadedLockfile):
563 lockfile_path = request.requirements.from_superset.lockfile_path
564 return softwrap(
565 f"""
566 Building {pluralize(len(request.requirements.req_strings), 'requirement')}
567 for {request.output_filename} from the {lockfile_path} resolve:
568 {', '.join(request.requirements.req_strings)}
569 """
570 )
571 else:
572 desc_suffix = softwrap(
573 f"""
574 with {pluralize(len(request.requirements.req_strings), 'requirement')}:
575 {', '.join(request.requirements.req_strings)}
576 """
577 )
578 return f"Building {request.output_filename} {desc_suffix}"
579
580
581 @rule
582 async def create_pex(request: PexRequest) -> Pex:
583 result = await Get(BuildPexResult, PexRequest, request)
584 return result.create_pex()
585
586
587 @rule
588 async def create_optional_pex(request: OptionalPexRequest) -> OptionalPex:
589 if request.maybe_pex_request is None:
590 return OptionalPex(None)
591 result = await Get(Pex, PexRequest, request.maybe_pex_request)
592 return OptionalPex(result)
593
594
595 @dataclass(frozen=True)
596 class Script:
597 path: PurePath
598
599 @property
600 def argv0(self) -> str:
601 return f"./{self.path}" if self.path.parent == PurePath() else str(self.path)
602
603
604 @dataclass(frozen=True)
605 class VenvScript:
606 script: Script
607 content: FileContent
608
609
610 @dataclass(frozen=True)
611 class VenvScriptWriter:
612 complete_pex_env: CompletePexEnvironment
613 pex: Pex
614 venv_dir: PurePath
615
616 @classmethod
617 def create(
618 cls, pex_environment: PexEnvironment, pex: Pex, venv_rel_dir: PurePath
619 ) -> VenvScriptWriter:
620 # N.B.: We don't know the working directory that will be used in any given
621 # invocation of the venv scripts; so we deal with working_directory once in an
622 # `adjust_relative_paths` function inside the script to save rule authors from having to do
623 # CWD offset math in every rule for all the relative paths their process depends on.
624 complete_pex_env = pex_environment.in_sandbox(working_directory=None)
625 venv_dir = complete_pex_env.pex_root / venv_rel_dir
626 return cls(complete_pex_env=complete_pex_env, pex=pex, venv_dir=venv_dir)
627
628 def _create_venv_script(
629 self,
630 bash: BashBinary,
631 *,
632 script_path: PurePath,
633 venv_executable: PurePath,
634 ) -> VenvScript:
635 env_vars = (
636 f"{name}={shlex.quote(value)}"
637 for name, value in self.complete_pex_env.environment_dict(
638 python_configured=True
639 ).items()
640 )
641
642 target_venv_executable = shlex.quote(str(venv_executable))
643 venv_dir = shlex.quote(str(self.venv_dir))
644 execute_pex_args = " ".join(
645 f"$(adjust_relative_paths {shlex.quote(arg)})"
646 for arg in self.complete_pex_env.create_argv(self.pex.name, python=self.pex.python)
647 )
648
649 script = dedent(
650 f"""\
651 #!{bash.path}
652 set -euo pipefail
653
654 # N.B.: This relies on BASH_SOURCE which has been available since bash-3.0, released in
655 # 2004. It will either contain the absolute path of the venv script or it will contain
656 # the relative path from the CWD to the venv script. Either way, we know the venv script
657 # parent directory is the sandbox root directory.
658 SANDBOX_ROOT="${{BASH_SOURCE%/*}}"
659
660 function adjust_relative_paths() {{
661 local value0="$1"
662 shift
663 if [ "${{value0:0:1}}" == "/" ]; then
664 # Don't relativize absolute paths.
665 echo "${{value0}}" "$@"
666 else
667 # N.B.: We convert all relative paths to paths relative to the sandbox root so
668 # this script works when run with a PWD set somewhere else than the sandbox
669 # root.
670 #
671 # There are two cases to consider. For the purposes of example, assume PWD is
672 # `/tmp/sandboxes/abc123/foo/bar`; i.e.: the rule API sets working_directory to
673 # `foo/bar`. Also assume `config/tool.yml` is the relative path in question.
674 #
675 # 1. If our BASH_SOURCE is `/tmp/sandboxes/abc123/pex_shim.sh`; so our
676 # SANDBOX_ROOT is `/tmp/sandboxes/abc123`, we calculate
677 # `/tmp/sandboxes/abc123/config/tool.yml`.
678 # 2. If our BASH_SOURCE is instead `../../pex_shim.sh`; so our SANDBOX_ROOT is
679 # `../..`, we calculate `../../config/tool.yml`.
680 echo "${{SANDBOX_ROOT}}/${{value0}}" "$@"
681 fi
682 }}
683
684 export {" ".join(env_vars)}
685 export PEX_ROOT="$(adjust_relative_paths ${{PEX_ROOT}})"
686
687 execute_pex_args="{execute_pex_args}"
688 target_venv_executable="$(adjust_relative_paths {target_venv_executable})"
689 venv_dir="$(adjust_relative_paths {venv_dir})"
690
691 # Let PEX_TOOLS invocations pass through to the original PEX file since venvs don't come
692 # with tools support.
693 if [ -n "${{PEX_TOOLS:-}}" ]; then
694 exec ${{execute_pex_args}} "$@"
695 fi
696
697 # If the seeded venv has been removed from the PEX_ROOT, we re-seed from the original
698 # `--venv` mode PEX file.
699 if [ ! -e "${{target_venv_executable}}" ]; then
700 rm -rf "${{venv_dir}}" || true
701 PEX_INTERPRETER=1 ${{execute_pex_args}} -c ''
702 fi
703
704 exec "${{target_venv_executable}}" "$@"
705 """
706 )
707 return VenvScript(
708 script=Script(script_path),
709 content=FileContent(path=str(script_path), content=script.encode(), is_executable=True),
710 )
711
712 def exe(self, bash: BashBinary) -> VenvScript:
713 """Writes a safe shim for the venv's executable `pex` script."""
714 script_path = PurePath(f"{self.pex.name}_pex_shim.sh")
715 return self._create_venv_script(
716 bash, script_path=script_path, venv_executable=self.venv_dir / "pex"
717 )
718
719 def bin(self, bash: BashBinary, name: str) -> VenvScript:
720 """Writes a safe shim for an executable or script in the venv's `bin` directory."""
721 script_path = PurePath(f"{self.pex.name}_bin_{name}_shim.sh")
722 return self._create_venv_script(
723 bash,
724 script_path=script_path,
725 venv_executable=self.venv_dir / "bin" / name,
726 )
727
728 def python(self, bash: BashBinary) -> VenvScript:
729 """Writes a safe shim for the venv's python binary."""
730 return self.bin(bash, "python")
731
732
733 @dataclass(frozen=True)
734 class VenvPex:
735 digest: Digest
736 pex_filename: str
737 pex: Script
738 python: Script
739 bin: FrozenDict[str, Script]
740 venv_rel_dir: str
741
742
743 @frozen_after_init
744 @dataclass(unsafe_hash=True)
745 class VenvPexRequest:
746 pex_request: PexRequest
747 bin_names: tuple[str, ...] = ()
748 site_packages_copies: bool = False
749
750 def __init__(
751 self,
752 pex_request: PexRequest,
753 bin_names: Iterable[str] = (),
754 site_packages_copies: bool = False,
755 ) -> None:
756 """A request for a PEX that runs in a venv and optionally exposes select venv `bin` scripts.
757
758 :param pex_request: The details of the desired PEX.
759 :param bin_names: The names of venv `bin` scripts to expose for execution.
760 :param site_packages_copies: `True` to use copies (hardlinks when possible) of PEX
761 dependencies when installing them in the venv site-packages directory. By default this
762 is `False` and symlinks are used instead which is a win in the time and space dimensions
763 but results in a non-standard venv structure that does trip up some libraries.
764 """
765 self.pex_request = pex_request
766 self.bin_names = tuple(bin_names)
767 self.site_packages_copies = site_packages_copies
768
769
770 @rule
771 def wrap_venv_prex_request(pex_request: PexRequest) -> VenvPexRequest:
772 # Allow creating a VenvPex from a plain PexRequest when no extra bin scripts need to be exposed.
773 return VenvPexRequest(pex_request)
774
775
776 @rule
777 async def create_venv_pex(
778 request: VenvPexRequest, bash: BashBinary, pex_environment: PexEnvironment
779 ) -> VenvPex:
780 # VenvPex is motivated by improving performance of Python tools by eliminating traditional PEX
781 # file startup overhead.
782 #
783 # To achieve the minimal overhead (on the order of 1ms) we discard:
784 # 1. Using Pex default mode:
785 # Although this does reduce initial tool execution overhead, it still leaves a minimum
786 # O(100ms) of overhead per subsequent tool invocation. Fundamentally, Pex still needs to
787 # execute its `sys.path` isolation bootstrap code in this case.
788 # 2. Using the Pex `venv` tool:
789 # The idea here would be to create a tool venv as a Process output and then use the tool
790 # venv as an input digest for all tool invocations. This was tried and netted ~500ms of
791 # overhead over raw venv use.
792 #
793 # Instead we use Pex's `--venv` mode. In this mode you can run the Pex file and it will create a
794 # venv on the fly in the PEX_ROOT as needed. Since the PEX_ROOT is a named_cache, we avoid the
795 # digest materialization overhead present in 2 above. Since the venv is naturally isolated we
796 # avoid the `sys.path` isolation overhead of Pex itself present in 1 above.
797 #
798 # This does leave O(50ms) of overhead though for the PEX bootstrap code to detect an already
799 # created venv in the PEX_ROOT and re-exec into it. To eliminate this overhead we execute the
800 # `pex` venv script in the PEX_ROOT directly. This is not robust on its own though, since the
801 # named caches store might be pruned at any time. To guard against that case we introduce a shim
802 # bash script that checks to see if the `pex` venv script exists in the PEX_ROOT and re-creates
803 # the PEX_ROOT venv if not. Using the shim script to run Python tools gets us down to the ~1ms
804 # of overhead we currently enjoy.
805
806 pex_request = request.pex_request
807 seeded_venv_request = dataclasses.replace(
808 pex_request,
809 additional_args=pex_request.additional_args
810 + (
811 "--venv",
812 "--seed",
813 "verbose",
814 pex_environment.venv_site_packages_copies_option(
815 use_copies=request.site_packages_copies
816 ),
817 ),
818 )
819 venv_pex_result = await Get(BuildPexResult, PexRequest, seeded_venv_request)
820 # Pex verbose --seed mode outputs the absolute path of the PEX executable as well as the
821 # absolute path of the PEX_ROOT. In the --venv case this is the `pex` script in the venv root
822 # directory.
823 seed_info = json.loads(venv_pex_result.result.stdout.decode())
824 abs_pex_root = PurePath(seed_info["pex_root"])
825 abs_pex_path = PurePath(seed_info["pex"])
826 venv_rel_dir = abs_pex_path.relative_to(abs_pex_root).parent
827
828 venv_script_writer = VenvScriptWriter.create(
829 pex_environment=pex_environment, pex=venv_pex_result.create_pex(), venv_rel_dir=venv_rel_dir
830 )
831 pex = venv_script_writer.exe(bash)
832 python = venv_script_writer.python(bash)
833 scripts = {bin_name: venv_script_writer.bin(bash, bin_name) for bin_name in request.bin_names}
834 scripts_digest = await Get(
835 Digest,
836 CreateDigest(
837 (
838 pex.content,
839 python.content,
840 *(venv_script.content for venv_script in scripts.values()),
841 )
842 ),
843 )
844 input_digest = await Get(Digest, MergeDigests((venv_script_writer.pex.digest, scripts_digest)))
845
846 return VenvPex(
847 digest=input_digest,
848 pex_filename=venv_pex_result.pex_filename,
849 pex=pex.script,
850 python=python.script,
851 bin=FrozenDict((bin_name, venv_script.script) for bin_name, venv_script in scripts.items()),
852 venv_rel_dir=venv_rel_dir.as_posix(),
853 )
854
855
856 @frozen_after_init
857 @dataclass(unsafe_hash=True)
858 class PexProcess:
859 pex: Pex
860 argv: tuple[str, ...]
861 description: str = dataclasses.field(compare=False)
862 level: LogLevel
863 input_digest: Digest | None
864 working_directory: str | None
865 extra_env: FrozenDict[str, str]
866 output_files: tuple[str, ...] | None
867 output_directories: tuple[str, ...] | None
868 timeout_seconds: int | None
869 execution_slot_variable: str | None
870 concurrency_available: int
871 cache_scope: ProcessCacheScope
872
873 def __init__(
874 self,
875 pex: Pex,
876 *,
877 description: str,
878 argv: Iterable[str] = (),
879 level: LogLevel = LogLevel.INFO,
880 input_digest: Digest | None = None,
881 working_directory: str | None = None,
882 extra_env: Mapping[str, str] | None = None,
883 output_files: Iterable[str] | None = None,
884 output_directories: Iterable[str] | None = None,
885 timeout_seconds: int | None = None,
886 execution_slot_variable: str | None = None,
887 concurrency_available: int = 0,
888 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
889 ) -> None:
890 self.pex = pex
891 self.argv = tuple(argv)
892 self.description = description
893 self.level = level
894 self.input_digest = input_digest
895 self.working_directory = working_directory
896 self.extra_env = FrozenDict(extra_env or {})
897 self.output_files = tuple(output_files) if output_files else None
898 self.output_directories = tuple(output_directories) if output_directories else None
899 self.timeout_seconds = timeout_seconds
900 self.execution_slot_variable = execution_slot_variable
901 self.concurrency_available = concurrency_available
902 self.cache_scope = cache_scope
903
904
905 @rule
906 async def setup_pex_process(request: PexProcess, pex_environment: PexEnvironment) -> Process:
907 pex = request.pex
908 complete_pex_env = pex_environment.in_sandbox(working_directory=request.working_directory)
909 argv = complete_pex_env.create_argv(pex.name, *request.argv, python=pex.python)
910 env = {
911 **complete_pex_env.environment_dict(python_configured=pex.python is not None),
912 **request.extra_env,
913 }
914 input_digest = (
915 await Get(Digest, MergeDigests((pex.digest, request.input_digest)))
916 if request.input_digest
917 else pex.digest
918 )
919 return Process(
920 argv,
921 description=request.description,
922 level=request.level,
923 input_digest=input_digest,
924 working_directory=request.working_directory,
925 env=env,
926 output_files=request.output_files,
927 output_directories=request.output_directories,
928 append_only_caches=complete_pex_env.append_only_caches,
929 timeout_seconds=request.timeout_seconds,
930 execution_slot_variable=request.execution_slot_variable,
931 concurrency_available=request.concurrency_available,
932 cache_scope=request.cache_scope,
933 )
934
935
936 @frozen_after_init
937 @dataclass(unsafe_hash=True)
938 class VenvPexProcess:
939 venv_pex: VenvPex
940 argv: tuple[str, ...]
941 description: str = dataclasses.field(compare=False)
942 level: LogLevel
943 input_digest: Digest | None
944 working_directory: str | None
945 extra_env: FrozenDict[str, str] | None
946 output_files: tuple[str, ...] | None
947 output_directories: tuple[str, ...] | None
948 timeout_seconds: int | None
949 execution_slot_variable: str | None
950 concurrency_available: int
951 cache_scope: ProcessCacheScope
952
953 def __init__(
954 self,
955 venv_pex: VenvPex,
956 *,
957 description: str,
958 argv: Iterable[str] = (),
959 level: LogLevel = LogLevel.INFO,
960 input_digest: Digest | None = None,
961 working_directory: str | None = None,
962 extra_env: Mapping[str, str] | None = None,
963 output_files: Iterable[str] | None = None,
964 output_directories: Iterable[str] | None = None,
965 timeout_seconds: int | None = None,
966 execution_slot_variable: str | None = None,
967 concurrency_available: int = 0,
968 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
969 ) -> None:
970 self.venv_pex = venv_pex
971 self.argv = tuple(argv)
972 self.description = description
973 self.level = level
974 self.input_digest = input_digest
975 self.working_directory = working_directory
976 self.extra_env = FrozenDict(extra_env) if extra_env else None
977 self.output_files = tuple(output_files) if output_files else None
978 self.output_directories = tuple(output_directories) if output_directories else None
979 self.timeout_seconds = timeout_seconds
980 self.execution_slot_variable = execution_slot_variable
981 self.concurrency_available = concurrency_available
982 self.cache_scope = cache_scope
983
984
985 @rule
986 async def setup_venv_pex_process(
987 request: VenvPexProcess, pex_environment: PexEnvironment
988 ) -> Process:
989 venv_pex = request.venv_pex
990 pex_bin = (
991 os.path.relpath(venv_pex.pex.argv0, request.working_directory)
992 if request.working_directory
993 else venv_pex.pex.argv0
994 )
995 argv = (pex_bin, *request.argv)
996 input_digest = (
997 await Get(Digest, MergeDigests((venv_pex.digest, request.input_digest)))
998 if request.input_digest
999 else venv_pex.digest
1000 )
1001 return Process(
1002 argv=argv,
1003 description=request.description,
1004 level=request.level,
1005 input_digest=input_digest,
1006 working_directory=request.working_directory,
1007 env=request.extra_env,
1008 output_files=request.output_files,
1009 output_directories=request.output_directories,
1010 append_only_caches=pex_environment.in_sandbox(
1011 working_directory=request.working_directory
1012 ).append_only_caches,
1013 timeout_seconds=request.timeout_seconds,
1014 execution_slot_variable=request.execution_slot_variable,
1015 concurrency_available=request.concurrency_available,
1016 cache_scope=request.cache_scope,
1017 )
1018
1019
1020 @dataclass(frozen=True)
1021 class PexDistributionInfo:
1022 """Information about an individual distribution in a PEX file, as reported by `PEX_TOOLS=1
1023 repository info -v`."""
1024
1025 project_name: str
1026 version: packaging.version.Version
1027 requires_python: packaging.specifiers.SpecifierSet | None
1028 # Note: These are parsed from metadata written by the pex tool, and are always
1029 # a valid pkg_resources.Requirement.
1030 requires_dists: tuple[Requirement, ...]
1031
1032
1033 class PexResolveInfo(Collection[PexDistributionInfo]):
1034 """Information about all distributions resolved in a PEX file, as reported by `PEX_TOOLS=1
1035 repository info -v`."""
1036
1037
1038 def parse_repository_info(repository_info: str) -> PexResolveInfo:
1039 def iter_dist_info() -> Iterator[PexDistributionInfo]:
1040 for line in repository_info.splitlines():
1041 info = json.loads(line)
1042 requires_python = info["requires_python"]
1043 yield PexDistributionInfo(
1044 project_name=info["project_name"],
1045 version=packaging.version.Version(info["version"]),
1046 requires_python=(
1047 packaging.specifiers.SpecifierSet(requires_python)
1048 if requires_python is not None
1049 else None
1050 ),
1051 requires_dists=tuple(
1052 Requirement.parse(req) for req in sorted(info["requires_dists"])
1053 ),
1054 )
1055
1056 return PexResolveInfo(sorted(iter_dist_info(), key=lambda dist: dist.project_name))
1057
1058
1059 @rule
1060 async def determine_venv_pex_resolve_info(venv_pex: VenvPex) -> PexResolveInfo:
1061 process_result = await Get(
1062 ProcessResult,
1063 VenvPexProcess(
1064 venv_pex,
1065 argv=["repository", "info", "-v"],
1066 extra_env={"PEX_TOOLS": "1"},
1067 input_digest=venv_pex.digest,
1068 description=f"Determine distributions found in {venv_pex.pex_filename}",
1069 level=LogLevel.DEBUG,
1070 ),
1071 )
1072 return parse_repository_info(process_result.stdout.decode())
1073
1074
1075 @rule
1076 async def determine_pex_resolve_info(pex_pex: PexPEX, pex: Pex) -> PexResolveInfo:
1077 process_result = await Get(
1078 ProcessResult,
1079 PexProcess(
1080 pex=Pex(digest=pex_pex.digest, name=pex_pex.exe, python=pex.python),
1081 argv=[pex.name, "repository", "info", "-v"],
1082 input_digest=pex.digest,
1083 extra_env={"PEX_MODULE": "pex.tools"},
1084 description=f"Determine distributions found in {pex.name}",
1085 level=LogLevel.DEBUG,
1086 ),
1087 )
1088 return parse_repository_info(process_result.stdout.decode())
1089
1090
1091 def rules():
1092 return [*collect_rules(), *pex_cli.rules(), *pex_requirements.rules()]
1093
[end of src/python/pants/backend/python/util_rules/pex.py]
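For reference, here is a minimal sketch (an editorial addition, not part of the repository) of the JSON-lines format that `parse_repository_info` above consumes, i.e. the output of running a PEX with `PEX_TOOLS=1 ... repository info -v`. The keys mirror the ones read by that function; the concrete package values are invented, and the import assumes a Pants development environment in which this module is importable.

```python
# Illustrative only: feed parse_repository_info a fabricated
# "PEX_TOOLS=1 <pex> repository info -v" style line and inspect the result.
from pants.backend.python.util_rules.pex import parse_repository_info

sample_output = (
    '{"project_name": "requests", "version": "2.28.1", '
    '"requires_python": ">=3.7, <4", '
    '"requires_dists": ["charset-normalizer<3,>=2", "idna<4,>=2.5"]}'
)

resolve_info = parse_repository_info(sample_output)
for dist in resolve_info:
    # Each entry is a PexDistributionInfo with parsed version and specifier objects.
    print(dist.project_name, dist.version, dist.requires_python)
```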
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
aafd966393babc220576514872f91a61c3b1714c
|
`packed` layout PEXes fail to `run` with `use_deprecated_pex_binary_run_semantics=false`
**Describe the bug**
With `use_deprecated_pex_binary_run_semantics=false`, a `layout="packed"` PEX will fail to run with:
```
Exception: Error executing interactive process: Permission denied (os error 13)
```
**Pants version**
`2.13.x` and `main`
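For illustration (an addition to this write-up, not part of the original report): a `layout="packed"` PEX is materialized as a directory rather than a single executable zipapp, so exec'ing the artifact path directly fails with `EACCES` ("Permission denied"), while running the directory's `__main__.py` works. The patch below adjusts the run path accordingly; the paths and layout value in this sketch are hypothetical.

```python
import os

# Hypothetical values: what the `package` goal might report for a pex_binary
# target built with layout="packed".
relpath = "src.python.my_app/my_app.pex"  # a directory, not a zipapp file
layout = "packed"                          # the target's `layout` field value

# Mirror of the fix: for non-zipapp layouts, run the entry script inside the
# packed directory instead of the directory itself.
if layout != "zipapp":
    relpath = os.path.join(relpath, "__main__.py")

print(relpath)  # src.python.my_app/my_app.pex/__main__.py
```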
|
2022-08-24T18:05:24Z
|
<patch>
diff --git a/src/python/pants/backend/python/goals/run_pex_binary.py b/src/python/pants/backend/python/goals/run_pex_binary.py
--- a/src/python/pants/backend/python/goals/run_pex_binary.py
+++ b/src/python/pants/backend/python/goals/run_pex_binary.py
@@ -9,7 +9,7 @@
_create_python_source_run_request,
)
from pants.backend.python.subsystems.debugpy import DebugPy
-from pants.backend.python.target_types import PexBinaryDefaults
+from pants.backend.python.target_types import PexBinaryDefaults, PexLayout
from pants.backend.python.util_rules.pex_environment import PexEnvironment
from pants.core.goals.package import BuiltPackage
from pants.core.goals.run import RunDebugAdapterRequest, RunFieldSet, RunRequest
@@ -31,6 +31,9 @@ async def create_pex_binary_run_request(
built_pex = await Get(BuiltPackage, PexBinaryFieldSet, field_set)
relpath = built_pex.artifacts[0].relpath
assert relpath is not None
+ if field_set.layout.value != PexLayout.ZIPAPP.value:
+ relpath = os.path.join(relpath, "__main__.py")
+
return RunRequest(
digest=built_pex.digest,
args=[os.path.join("{chroot}", relpath)],
</patch>
|
[]
|
[]
| ||||
apache__airflow-15277
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UI Doesn't use all of Bootstrap theme css, Airflow 2.0
**Apache Airflow version**: 2.0.0
**Environment**: Ubuntu
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Picked Cyborg.css in the webserver config, but the background is still the default.

<!-- (please include exact error messages if you can) -->
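For context (an illustrative addition, not part of the original report), the theme referred to above is selected through Flask-AppBuilder's `APP_THEME` option in `webserver_config.py`; the symptom is that only parts of the page pick up the chosen Bootswatch theme while the rest keeps the default styling:

```python
# webserver_config.py (sketch) -- selecting a Bootswatch theme for the
# Airflow 2.0 webserver via Flask-AppBuilder's config option.
APP_THEME = "cyborg.css"  # one of the Bootswatch 3 themes bundled with Flask-AppBuilder
```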
**What you expected to happen**:
<!-- What do you think went wrong? -->
**How to reproduce it**:
<!---
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If you are using kubernetes, please attempt to recreate the issue using minikube or kind.
## Install minikube/kind
- Minikube https://minikube.sigs.k8s.io/docs/start/
- Kind https://kind.sigs.k8s.io/docs/user/quick-start/
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images using the .md style of

To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.
--->
**Anything else we need to know**:
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here inside a details tag:
<details><summary>x.log</summary> lots of stuff </details>
-->
</issue>
<code>
[start of README.md]
1 <!--
2 Licensed to the Apache Software Foundation (ASF) under one
3 or more contributor license agreements. See the NOTICE file
4 distributed with this work for additional information
5 regarding copyright ownership. The ASF licenses this file
6 to you under the Apache License, Version 2.0 (the
7 "License"); you may not use this file except in compliance
8 with the License. You may obtain a copy of the License at
9
10 http://www.apache.org/licenses/LICENSE-2.0
11
12 Unless required by applicable law or agreed to in writing,
13 software distributed under the License is distributed on an
14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 KIND, either express or implied. See the License for the
16 specific language governing permissions and limitations
17 under the License.
18 -->
19
20 # Apache Airflow
21
22 [](https://badge.fury.io/py/apache-airflow)
23 [](https://github.com/apache/airflow/actions)
24 [](https://codecov.io/github/apache/airflow?branch=main)
25 [](https://www.apache.org/licenses/LICENSE-2.0.txt)
26 [](https://pypi.org/project/apache-airflow/)
27 [](https://hub.docker.com/r/apache/airflow)
28 [](https://hub.docker.com/r/apache/airflow)
29 [](https://pypi.org/project/apache-airflow/)
30 [](https://artifacthub.io/packages/search?repo=apache-airflow)
31 [](https://github.com/psf/black)
32 [](https://twitter.com/ApacheAirflow)
33 [](https://s.apache.org/airflow-slack)
34
35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows.
36
37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
38
39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
40
41 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
43 **Table of contents**
44
45 - [Project Focus](#project-focus)
46 - [Principles](#principles)
47 - [Requirements](#requirements)
48 - [Getting started](#getting-started)
49 - [Installing from PyPI](#installing-from-pypi)
50 - [Official source code](#official-source-code)
51 - [Convenience packages](#convenience-packages)
52 - [User Interface](#user-interface)
53 - [Semantic versioning](#semantic-versioning)
54 - [Version Life Cycle](#version-life-cycle)
55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions)
56 - [Base OS support for reference Airflow images](#base-os-support-for-reference-airflow-images)
57 - [Approach to dependencies of Airflow](#approach-to-dependencies-of-airflow)
58 - [Support for providers](#support-for-providers)
59 - [Contributing](#contributing)
60 - [Who uses Apache Airflow?](#who-uses-apache-airflow)
61 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow)
62 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
63 - [Airflow merchandise](#airflow-merchandise)
64 - [Links](#links)
65 - [Sponsors](#sponsors)
66
67 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
68
69 ## Project Focus
70
71 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/).
72
73 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work.
74
75 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
76
77 ## Principles
78
79 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
80 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
81 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
82 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
83
84 ## Requirements
85
86 Apache Airflow is tested with:
87
88 | | Main version (dev) | Stable version (2.3.1) |
89 |---------------------|------------------------------|------------------------------|
90 | Python | 3.7, 3.8, 3.9, 3.10 | 3.7, 3.8, 3.9, 3.10 |
91 | Platform | AMD64/ARM64(\*) | AMD64/ARM64(\*) |
92 | Kubernetes | 1.20, 1.21, 1.22, 1.23, 1.24 | 1.20, 1.21, 1.22, 1.23, 1.24 |
93 | PostgreSQL | 10, 11, 12, 13, 14 | 10, 11, 12, 13, 14 |
94 | MySQL | 5.7, 8 | 5.7, 8 |
95 | SQLite | 3.15.0+ | 3.15.0+ |
96 | MSSQL | 2017(\*), 2019 (\*) | 2017(\*), 2019 (\*) |
97
98 \* Experimental
99
100 **Note**: MySQL 5.x versions cannot run multiple schedulers, or have limitations when doing so --
101 please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html).
102 MariaDB is not tested/recommended.
103
104 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend
105 using the latest stable version of SQLite for local development.
106
107 **Note**: Support for Python v3.10 will be available from Airflow 2.3.0. The `main` (development) branch
108 already supports Python 3.10.
109
110 **Note**: Airflow currently can be run on POSIX-compliant Operating Systems. For development it is regularly
111 tested on fairly modern Linux Distros and recent versions of MacOS.
112 On Windows you can run it via WSL2 (Windows Subsystem for Linux 2) or via Linux Containers.
113 The work to add Windows support is tracked via [#10388](https://github.com/apache/airflow/issues/10388) but
114 it is not a high priority. You should only use Linux-based distros as "Production" execution environment
115 as this is the only environment that is supported. The only distro that is used in our CI tests and that
116 is used in the [Community managed DockerHub image](https://hub.docker.com/p/apache/airflow) is
117 `Debian Bullseye`.
118
119 ## Getting started
120
121 Visit the official Airflow website documentation (latest **stable** release) for help with
122 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html),
123 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking
124 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html).
125
126 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/).
127
128 For more information on Airflow Improvement Proposals (AIPs), visit
129 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals).
130
131 Documentation for dependent projects like provider packages, Docker image, Helm Chart, you'll find it in [the documentation index](https://airflow.apache.org/docs/).
132
133 ## Installing from PyPI
134
135 We publish Apache Airflow as the `apache-airflow` package on PyPI. Installing it, however, can sometimes be tricky
136 because Airflow is a bit of both a library and an application. Libraries usually keep their dependencies open, and
137 applications usually pin them, but we should do neither and both simultaneously. We decided to keep
138 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries
139 if needed. This means that `pip install apache-airflow` will not work from time to time or will
140 produce unusable Airflow installation.
141
142 To have a repeatable installation, however, we keep a set of "known-to-be-working" constraint
143 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working"
144 constraints files separately per major/minor Python version.
145 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
146 correct Airflow tag/version/branch and Python versions in the URL.
147
148
149 1. Installing just Airflow:
150
151 > Note: Only `pip` installation is currently officially supported.
152
153 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or
154 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as
155 `pip` - especially when it comes to constraint vs. requirements management.
156 Installing via `Poetry` or `pip-tools` is not currently supported.
157
158 If you wish to install Airflow using those tools, you should use the constraint files and convert
159 them to the appropriate format and workflow that your tool requires.
160
161
162 ```bash
163 pip install 'apache-airflow==2.3.1' \
164 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.1/constraints-3.7.txt"
165 ```
166
167 2. Installing with extras (i.e., postgres, google)
168
169 ```bash
170 pip install 'apache-airflow[postgres,google]==2.3.1' \
171 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.1/constraints-3.7.txt"
172 ```
173
174 For information on installing provider packages, check
175 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html).
176
177 ## Official source code
178
179 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project,
180 and our official source code releases:
181
182 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html)
183 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow)
184 - Are cryptographically signed by the release manager
185 - Are officially voted on by the PMC members during the
186 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval)
187
188 Following the ASF rules, the source packages released must be sufficient for a user to build and test the
189 release provided they have access to the appropriate platform and tools.
190
191 ## Convenience packages
192
193 There are other ways of installing and using Airflow. Those are "convenience" methods - they are
194 not "official releases" as stated by the `ASF Release Policy`, but they can be used by users
195 who do not want to build the software themselves.
196
197 Those are - in the order of most common ways people install Airflow:
198
199 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool
200 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via
201 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can
202 read more about using, customising, and extending the images in the
203 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and
204 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document.
205 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that
206 were used to generate official source packages via git
207
208 All those artifacts are not official releases, but they are prepared using officially released sources.
209 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such
210 following the ASF Policy.
211
212 ## User Interface
213
214 - **DAGs**: Overview of all DAGs in your environment.
215
216 
217
218 - **Grid**: Grid representation of a DAG that spans across time.
219
220 
221
222 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run.
223
224 
225
226 - **Task Duration**: Total time spent on different tasks over time.
227
228 
229
230 - **Gantt**: Duration and overlap of a DAG.
231
232 
233
234 - **Code**: Quick way to view source code of a DAG.
235
236 
237
238 ## Semantic versioning
239
240 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released.
241
242 There are few specific rules that we agreed to that define details of versioning of the different
243 packages:
244
245 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers).
246 Changing limits for versions of Airflow dependencies is not a breaking change on its own.
247 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only.
248 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version.
249 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed
250 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages,
251 they are present in providers as `install_requires` limitations. We aim to keep backwards
252 compatibility of providers with all previously released Airflow 2 versions but
253 there will sometimes be breaking changes that might make some, or all
254 providers, have a minimum Airflow version specified. Change of that minimum supported Airflow version
255 is a breaking change for a provider, because installing the new provider might automatically
256 upgrade Airflow (which might be an undesired side effect of upgrading the provider).
257 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR
258 versions for the chart are independent from the Airflow version. We aim to keep backwards
259 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might
260   only work starting from specific Airflow releases. We might, however, limit the Helm
261   Chart to depend on a minimal Airflow version.
262 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow.
263 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of
264 all clients. The clients then can release their own PATCH releases with bugfixes,
265 independently of Airflow PATCH releases.
266
267 ## Version Life Cycle
268
269 Apache Airflow version life cycle:
270
271 <!-- This table is automatically updated by pre-commit scripts/ci/pre_commit/pre_commit_supported_versions.py -->
272 <!-- Beginning of auto-generated table -->
273
274 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated |
275 |-----------|-----------------------|-----------|-----------------|-------------------|------------------|
276 | 2 | 2.3.1 | Supported | Dec 17, 2020 | TBD | TBD |
277 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 |
278 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 |
279 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 |
280 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 |
281
282 <!-- End of auto-generated table -->
283
284 Limited support versions will be supported with security and critical bug fixes only.
285 EOL versions will not get any fixes nor support.
286 We always recommend that all users run the latest available minor release for whatever major version is in use.
287 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date.
288
289 ## Support for Python and Kubernetes versions
290
291 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support.
292 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the
293 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and
294 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/).
295
296 1. We drop support for Python and Kubernetes versions when they reach EOL. Except for Kubernetes, a
297    version stays supported by Airflow if two major cloud providers still provide support for it. We drop
298    support for those EOL versions in main right after the EOL date, and it is effectively removed when we
299    release the first new MINOR (or MAJOR if there is no new MINOR version) of Airflow. For example, for
300    Python 3.7 this means that we will drop support in main right after 27.06.2023, and the first MAJOR or
301    MINOR version of Airflow released after that date will not include it.
302
303 2. The "oldest" supported version of Python/Kubernetes is the default one until we decide to switch to
304    a later version. "Default" is only meaningful in terms of "smoke tests" in CI PRs, which are run using
305    this default version and the default reference image available. Currently `apache/airflow:latest`
306    and `apache/airflow:2.3.1` images are Python 3.7 images. This means that the default reference image
307    will switch to a later Python version when we start preparing for dropping 3.7 support, which is a few
308    months before the end of life for Python 3.7.
309
310 3. We support a new version of Python/Kubernetes in main after they are officially released. As soon as we
311    make them work in our CI pipeline (which might not be immediate, mostly due to dependencies catching up
312    with new versions of Python), we release new images/support in Airflow based on the working CI setup.
313
314 ## Base OS support for reference Airflow images
315
316 The Airflow Community provides conveniently packaged container images that are published whenever
317 we publish an Apache Airflow release. Those images contain:
318
319 * Base OS with necessary packages to install Airflow (stable Debian OS)
320 * Base Python installation in versions supported at the time of release for the MINOR version of
321   Airflow released (so there could be different versions for the 2.3 and 2.2 lines, for example)
322 * Libraries required to connect to supported databases (again, the set of databases supported depends
323   on the MINOR version of Airflow)
324 * Predefined set of popular providers (for details see the [Dockerfile](Dockerfile)).
325 * Possibility of building your own, custom image where the user can choose their own set of providers
326 and libraries (see [Building the image](https://airflow.apache.org/docs/docker-stack/build.html))
327 * In the future Airflow might also support a "slim" version without providers or database clients installed
328
329 The version of the base OS image is the stable version of Debian. Airflow supports using all currently active
330 stable versions - as soon as all Airflow dependencies support building, and we set up the CI pipeline for
331 building and testing the OS version. Approximately 6 months before the end-of-life of a previous stable
332 version of the OS, Airflow switches the images released to use the latest supported version of the OS.
333 For example since Debian Buster end-of-life is August 2022, Airflow switches the images in `main` branch
334 to use Debian Bullseye in February/March 2022. The version will be used in the next MINOR release after
335 the switch happens. In case of the Bullseye switch - 2.3.0 version will use Bullseye. The images released
336 in the previous MINOR version continue to use the version that all other releases for the MINOR version
337 used.
338
339 Users will continue to be able to build their images using stable Debian releases until the end of life;
340 building and verifying of the images happens in our CI, but no unit tests are executed using this image in
341 the `main` branch.
342
343 ## Approach to dependencies of Airflow
344
345 Airflow has a lot of dependencies - direct and transitive - and Airflow is both a library and an application,
346 therefore our dependency policies have to cover both: stability of installation of the application,
347 but also the ability to install newer versions of dependencies for those users who develop DAGs. We developed
348 the approach where `constraints` are used to make sure Airflow can be installed in a repeatable way, while
349 we do not limit our users from upgrading most of the dependencies. As a result we decided not to upper-bound
350 versions of Airflow dependencies by default, unless we have good reasons to believe upper-bounding them is
351 needed because of the importance of the dependency as well as the risk involved in upgrading a specific
352 dependency. We also upper-bound the dependencies that we know cause problems.
353
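The practical consequence is that installing a released Airflow version together with the published
constraints file yields a repeatable set of dependency versions. A minimal sketch of such an install is
shown below; the version numbers and the constraints URL layout are illustrative assumptions, not values
taken from this document:

```python
# Illustrative sketch: install a pinned Airflow using the published constraints file.
# The versions below are placeholders - substitute the ones matching your environment.
import subprocess
import sys

AIRFLOW_VERSION = "2.3.1"   # assumed target Airflow version
PYTHON_VERSION = "3.7"      # assumed Python version used for the install

constraints_url = (
    "https://raw.githubusercontent.com/apache/airflow/"
    f"constraints-{AIRFLOW_VERSION}/constraints-{PYTHON_VERSION}.txt"
)

subprocess.run(
    [
        sys.executable, "-m", "pip", "install",
        f"apache-airflow=={AIRFLOW_VERSION}",
        "--constraint", constraints_url,
    ],
    check=True,
)
```
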
354 Our constraint mechanism takes care of finding and upgrading all the non-upper-bound dependencies
355 automatically (provided that all the tests pass). Failures of our `main` build indicate when there
356 are versions of dependencies that break our tests - meaning that we should either upper-bound them or
357 fix our code/tests to account for the upstream changes from those dependencies.
358
359 Whenever we upper-bound such a dependency, we should always comment why we are doing it - i.e. we should have
360 a good reason why the dependency is upper-bound, and we should also mention the condition under which the
361 binding can be removed.
362
363 ### Approach for dependencies for Airflow Core
364
365 The core Airflow dependencies described in this section are maintained in `setup.cfg`.
366
367 There are a few dependencies that we decided are important enough to upper-bound by default, as they are
368 known to follow a predictable versioning scheme, and we know that new versions of those are very likely to
369 bring breaking changes. We commit to regularly review and attempt to upgrade to the newer versions of
370 the dependencies as they are released, but this is a manual process.
371
372 The important dependencies are:
373
374 * `SQLAlchemy`: upper-bound to a specific MINOR version (SQLAlchemy is known to remove deprecations and
375   introduce breaking changes, especially since support for different databases varies and changes at
376   various speeds; for example, SQLAlchemy 1.4 broke the MSSQL integration for Airflow)
377 * `Alembic`: it is important to handle our migrations in a predictable and performant way. It is developed
378   together with SQLAlchemy. Our experience with Alembic is that it is very stable within a MINOR version
379 * `Flask`: We are using Flask as the back-bone of our web UI and API. We know major versions of Flask
380   are very likely to introduce breaking changes across those, so limiting it to a MAJOR version makes sense
381 * `werkzeug`: the library is known to cause problems in new versions. It is tightly coupled with Flask
382   libraries, and we should update them together
383 * `celery`: Celery is a crucial component of Airflow as it is used for the CeleryExecutor (and similar). Celery
384   [follows SemVer](https://docs.celeryq.dev/en/stable/contributing.html?highlight=semver#versions), so
385   we should upper-bound it to the next MAJOR version. Also when we bump the upper version of the library,
386   we should make sure the Celery Provider's minimum Airflow version is updated.
387 * `kubernetes`: Kubernetes is a crucial component of Airflow as it is used for the KubernetesExecutor
388 (and similar). Kubernetes Python library [follows SemVer](https://github.com/kubernetes-client/python#compatibility),
389 so we should upper-bound it to the next MAJOR version. Also when we bump the upper version of the library,
390 we should make sure Kubernetes Provider minimum Airflow version is updated.
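
A purely hypothetical sketch of the shape of these bounds is shown below; the real specifiers live in
`setup.cfg` and change over time, so the exact version numbers here are illustrative assumptions rather
than the actual pins:

```python
# Hypothetical illustration of the upper-bound styles described in the list above.
# These specifiers are placeholders, not the real entries from setup.cfg.
core_upper_bounds = {
    "sqlalchemy": ">=1.4.0,<1.5.0",    # bound to a specific MINOR series
    "alembic": ">=1.6.3,<2.0",         # developed together with SQLAlchemy
    "flask": ">=2.0,<3.0",             # bound below the next MAJOR version
    "werkzeug": ">=2.0,<3.0",          # kept in lock-step with Flask
    "celery": ">=5.2.3,<6.0.0",        # SemVer library, next MAJOR excluded
    "kubernetes": ">=21.7.0,<24.0.0",  # SemVer library, future MAJORs excluded
}
```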
391
392 ### Approach for dependencies in Airflow Providers and extras
393
394 Those `extras` and `providers` dependencies are maintained in `setup.py`.
395
396 By default, we should not upper-bound dependencies for providers; however, each provider's maintainer
397 might decide to add additional limits (and justify them with a comment).
398
399 ## Support for providers
400
401 Providers released by the community have a limitation of the minimum supported version of Airflow. The minimum
402 version of Airflow is the `MINOR` version (2.1, 2.2, etc.) indicating that the providers might use features
403 that appeared in this release. The default support timespan for the minimum version of Airflow
404 (there could be justified exceptions) is that we increase the minimum Airflow version when 12 months have
405 passed since the first release for that MINOR version of Airflow.
406
407 For example, this means that by default we upgrade the minimum version of Airflow supported by providers
408 to 2.2.0 in the first Provider's release after 21st of May 2022 (21st of May 2021 is the date when the
409 first `PATCHLEVEL` of 2.1 (2.1.0) was released).
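
Expressed as simple date arithmetic, this rule looks roughly like the sketch below (illustrative only,
using the dates already quoted above):

```python
# Illustrative only: the 12-month rule above, using the dates quoted in the text.
from datetime import date

first_2_1_release = date(2021, 5, 21)  # first PATCHLEVEL of 2.1 (2.1.0)
earliest_bump = first_2_1_release.replace(year=first_2_1_release.year + 1)
print(earliest_bump)  # 2022-05-21: provider releases after this date may require Airflow 2.2.0
```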
410
411 ## Contributing
412
413 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst).
414
415 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst).
416
417 ## Who uses Apache Airflow?
418
419 More than 400 organizations are using Apache Airflow
420 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md).
421
422 ## Who Maintains Apache Airflow?
423
424 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
425 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
426 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests.
427 If you would like to become a maintainer, please review the Apache Airflow
428 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer).
429
430 ## Can I use the Apache Airflow logo in my presentation?
431
432 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
433
434 ## Airflow merchandise
435
436 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out
437 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5).
438
439 ## Links
440
441 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/)
442 - [Chat](https://s.apache.org/airflow-slack)
443
444 ## Sponsors
445
446 The CI infrastructure for Apache Airflow has been sponsored by:
447
448 <!-- Ordered by most recently "funded" -->
449
450 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a>
451 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a>
452
[end of README.md]
[start of dev/breeze/src/airflow_breeze/utils/run_utils.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 """Useful tools for running commands."""
18 import contextlib
19 import os
20 import shlex
21 import stat
22 import subprocess
23 import sys
24 from distutils.version import StrictVersion
25 from functools import lru_cache
26 from pathlib import Path
27 from re import match
28 from typing import Dict, List, Mapping, Optional, Union
29
30 from airflow_breeze.params._common_build_params import _CommonBuildParams
31 from airflow_breeze.utils.ci_group import ci_group
32 from airflow_breeze.utils.console import get_console
33 from airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT
34
35 RunCommandResult = Union[subprocess.CompletedProcess, subprocess.CalledProcessError]
36
37
38 def run_command(
39 cmd: List[str],
40 title: Optional[str] = None,
41 *,
42 check: bool = True,
43 verbose: bool = False,
44 dry_run: bool = False,
45 no_output_dump_on_exception: bool = False,
46 env: Optional[Mapping[str, str]] = None,
47 cwd: Optional[Path] = None,
48 input: Optional[str] = None,
49 **kwargs,
50 ) -> RunCommandResult:
51 """
52     Runs a command passed as a list of strings with some extra functionality over Popen (kwargs from Popen
53     can be used in this command even if not explicitly specified).
54
55     It prints diagnostics when requested, and also allows you to "dry_run" the commands rather than actually
56     executing them.
57
58 An important factor for having this command running tool is to be able (in verbose mode) to directly
59 copy&paste the verbose output and run the command manually - including all the environment variables
60 needed to run the command.
61
62 :param cmd: command to run
63 :param title: optional title for the command (otherwise likely title is automatically determined)
64     :param check: whether to check the status value and raise an exception on failure (same as in subprocess.run)
65 :param verbose: print commands when running
66 :param dry_run: do not execute "the" command - just print what would happen
67 :param no_output_dump_on_exception: whether to suppress printing logs from output when command fails
68 :param env: mapping of environment variables to set for the run command
69 :param cwd: working directory to set for the command
70 :param input: input string to pass to stdin of the process
71     :param kwargs: kwargs passed to Popen
72 """
73 workdir: str = str(cwd) if cwd else os.getcwd()
74 if verbose or dry_run:
75 command_to_print = ' '.join(shlex.quote(c) for c in cmd)
76 if not title:
77 # Heuristics to get a short but explanatory title showing what the command does
78 # If title is not provided explicitly
79 title = ' '.join(
80 shlex.quote(c)
81 for c in cmd
82 if not c.startswith('-') # exclude options
83 and len(c) > 0
84 and (c[0] != "/" or c.endswith(".sh")) # exclude volumes
85 and not c == "never" # exclude --pull never
86 and not match(r"^[A-Z_]*=.*$", c)
87 )
88 env_to_print = get_environments_to_print(env)
89 with ci_group(title=f"Running {title}"):
90 get_console().print(f"\n[info]Working directory {workdir} [/]\n")
91 # Soft wrap allows to copy&paste and run resulting output as it has no hard EOL
92 get_console().print(f"\n[info]{env_to_print}{command_to_print}[/]\n", soft_wrap=True)
93 if dry_run:
94 return subprocess.CompletedProcess(cmd, returncode=0)
95 try:
96 cmd_env = os.environ.copy()
97 if env:
98 cmd_env.update(env)
99 return subprocess.run(cmd, input=input, check=check, env=cmd_env, cwd=workdir, **kwargs)
100 except subprocess.CalledProcessError as ex:
101 if not no_output_dump_on_exception:
102 if ex.stdout:
103 get_console().print(
104 "[info]========================= OUTPUT start ============================[/]"
105 )
106 get_console().print(ex.stdout)
107 get_console().print(
108 "[info]========================= OUTPUT end ==============================[/]"
109 )
110 if ex.stderr:
111 get_console().print(
112 "[error]========================= STDERR start ============================[/]"
113 )
114 get_console().print(ex.stderr)
115 get_console().print(
116 "[error]========================= STDERR end ==============================[/]"
117 )
118 if check:
119 raise
120 return ex
121
122
123 def get_environments_to_print(env: Optional[Mapping[str, str]]):
124 if not env:
125 return ""
126 system_env: Dict[str, str] = {}
127 my_env: Dict[str, str] = {}
128 for key, val in env.items():
129 if os.environ.get(key) == val:
130 system_env[key] = val
131 else:
132 my_env[key] = val
133 env_to_print = ''.join(f'{key}="{val}" \\\n' for (key, val) in sorted(system_env.items()))
134 env_to_print += r"""\
135 """
136 env_to_print += ''.join(f'{key}="{val}" \\\n' for (key, val) in sorted(my_env.items()))
137 return env_to_print
138
139
140 def assert_pre_commit_installed(verbose: bool):
141 """
142 Check if pre-commit is installed in the right version.
143 :param verbose: print commands when running
144     :return: True if pre-commit is installed in the right version.
145 """
146 # Local import to make autocomplete work
147 import yaml
148
149 pre_commit_config = yaml.safe_load((AIRFLOW_SOURCES_ROOT / ".pre-commit-config.yaml").read_text())
150 min_pre_commit_version = pre_commit_config["minimum_pre_commit_version"]
151
152 python_executable = sys.executable
153 get_console().print(f"[info]Checking pre-commit installed for {python_executable}[/]")
154 command_result = run_command(
155 [python_executable, "-m", "pre_commit", "--version"],
156 verbose=verbose,
157 capture_output=True,
158 text=True,
159 check=False,
160 )
161 if command_result.returncode == 0:
162 if command_result.stdout:
163 pre_commit_version = command_result.stdout.split(" ")[-1].strip()
164 if StrictVersion(pre_commit_version) >= StrictVersion(min_pre_commit_version):
165 get_console().print(
166 f"\n[success]Package pre_commit is installed. "
167 f"Good version {pre_commit_version} (>= {min_pre_commit_version})[/]\n"
168 )
169 else:
170 get_console().print(
171                     f"\n[error]Package pre_commit version is wrong. It should be "
172                     f"at least {min_pre_commit_version} and is {pre_commit_version}.[/]\n\n"
173 )
174 sys.exit(1)
175 else:
176 get_console().print(
177 "\n[warning]Could not determine version of pre-commit. " "You might need to update it![/]\n"
178 )
179 else:
180 get_console().print("\n[error]Error checking for pre-commit-installation:[/]\n")
181 get_console().print(command_result.stderr)
182 get_console().print("\nMake sure to run:\n breeze self-upgrade\n\n")
183 sys.exit(1)
184
185
186 def get_filesystem_type(filepath):
187 """
188 Determine the type of filesystem used - we might want to use different parameters if tmpfs is used.
189 :param filepath: path to check
190 :return: type of filesystem
191 """
192 # We import it locally so that click autocomplete works
193 import psutil
194
195 root_type = "unknown"
196 for part in psutil.disk_partitions():
197 if part.mountpoint == '/':
198 root_type = part.fstype
199 continue
200 if filepath.startswith(part.mountpoint):
201 return part.fstype
202
203 return root_type
204
205
206 def instruct_build_image(python: str):
207 """Print instructions to the user that they should build the image"""
208 get_console().print(f'[warning]\nThe CI image for Python version {python} may be outdated[/]\n')
209 get_console().print(
210 f"\n[info]Please run at the earliest convenience:[/]\n\nbreeze build-image --python {python}\n\n"
211 )
212
213
214 @contextlib.contextmanager
215 def working_directory(source_path: Path):
216 """
217 # Equivalent of pushd and popd in bash script.
218 # https://stackoverflow.com/a/42441759/3101838
219 :param source_path:
220 :return:
221 """
222 prev_cwd = Path.cwd()
223 os.chdir(source_path)
224 try:
225 yield
226 finally:
227 os.chdir(prev_cwd)
228
229
230 def change_file_permission(file_to_fix: Path):
231 """Update file permissions to not be group-writeable. Needed to solve cache invalidation problems."""
232 if file_to_fix.exists():
233 current = stat.S_IMODE(os.stat(file_to_fix).st_mode)
234 new = current & ~stat.S_IWGRP & ~stat.S_IWOTH # Removes group/other write permission
235 os.chmod(file_to_fix, new)
236
237
238 def change_directory_permission(directory_to_fix: Path):
239 """Update directory permissions to not be group-writeable. Needed to solve cache invalidation problems."""
240 if directory_to_fix.exists():
241 current = stat.S_IMODE(os.stat(directory_to_fix).st_mode)
242 new = current & ~stat.S_IWGRP & ~stat.S_IWOTH # Removes group/other write permission
243 new = (
244 new | stat.S_IXGRP | stat.S_IXOTH
245 ) # Add group/other execute permission (to be able to list directories)
246 os.chmod(directory_to_fix, new)
247
248
249 @working_directory(AIRFLOW_SOURCES_ROOT)
250 def fix_group_permissions(verbose: bool):
251 """Fixes permissions of all the files and directories that have group-write access."""
252 if verbose:
253 get_console().print("[info]Fixing group permissions[/]")
254 files_to_fix_result = run_command(['git', 'ls-files', './'], capture_output=True, text=True)
255 if files_to_fix_result.returncode == 0:
256 files_to_fix = files_to_fix_result.stdout.strip().split('\n')
257 for file_to_fix in files_to_fix:
258 change_file_permission(Path(file_to_fix))
259 directories_to_fix_result = run_command(
260 ['git', 'ls-tree', '-r', '-d', '--name-only', 'HEAD'], capture_output=True, text=True
261 )
262 if directories_to_fix_result.returncode == 0:
263 directories_to_fix = directories_to_fix_result.stdout.strip().split('\n')
264 for directory_to_fix in directories_to_fix:
265 change_directory_permission(Path(directory_to_fix))
266
267
268 def is_repo_rebased(repo: str, branch: str):
269 """Returns True if the local branch contains latest remote SHA (i.e. if it is rebased)"""
270 # We import it locally so that click autocomplete works
271 import requests
272
273 gh_url = f"https://api.github.com/repos/{repo}/commits/{branch}"
274 headers_dict = {"Accept": "application/vnd.github.VERSION.sha"}
275 latest_sha = requests.get(gh_url, headers=headers_dict).text.strip()
276 rebased = False
277 command_result = run_command(['git', 'log', '--format=format:%H'], capture_output=True, text=True)
278 output = command_result.stdout.strip().splitlines() if command_result is not None else "missing"
279 if latest_sha in output:
280 rebased = True
281 return rebased
282
283
284 def check_if_buildx_plugin_installed(verbose: bool) -> bool:
285 """
286 Checks if buildx plugin is locally available.
287 :param verbose: print commands when running
288     :return: True if the buildx plugin is installed.
289 """
290 is_buildx_available = False
291 check_buildx = ['docker', 'buildx', 'version']
292 docker_buildx_version_result = run_command(
293 check_buildx,
294 verbose=verbose,
295 no_output_dump_on_exception=True,
296 capture_output=True,
297 text=True,
298 )
299 if (
300 docker_buildx_version_result
301 and docker_buildx_version_result.returncode == 0
302 and docker_buildx_version_result.stdout != ''
303 ):
304 is_buildx_available = True
305 return is_buildx_available
306
307
308 def prepare_base_build_command(image_params: _CommonBuildParams, verbose: bool) -> List[str]:
309 """
310 Prepare build command for docker build. Depending on whether we have buildx plugin installed or not,
311 and whether we run cache preparation, there might be different results:
312
313 * if buildx plugin is installed - `docker buildx` command is returned - using regular or cache builder
314 depending on whether we build regular image or cache
315 * if no buildx plugin is installed, and we do not prepare cache, regular docker `build` command is used.
316 * if no buildx plugin is installed, and we prepare cache - we fail. Cache can only be done with buildx
317 :param image_params: parameters of the image
318 :param verbose: print commands when running
319 :return: command to use as docker build command
320 """
321 build_command_param = []
322 is_buildx_available = check_if_buildx_plugin_installed(verbose=verbose)
323 if is_buildx_available:
324 if image_params.prepare_buildx_cache:
325 build_command_param.extend(
326 ["buildx", "build", "--builder", "airflow_cache", "--progress=tty", "--push"]
327 )
328 else:
329 build_command_param.extend(
330 [
331 "buildx",
332 "build",
333 "--builder",
334 "default",
335 "--progress=tty",
336 "--push" if image_params.push_image else "--load",
337 ]
338 )
339 else:
340 if image_params.prepare_buildx_cache or image_params.push_image:
341 get_console().print(
342 '\n[error] Buildx cli plugin is not available and you need it to prepare'
343 ' buildx cache or push image after build. \n'
344 )
345 get_console().print(
346 '[error] Please install it following https://docs.docker.com/buildx/working-with-buildx/ \n'
347 )
348 sys.exit(1)
349 build_command_param.append("build")
350 return build_command_param
351
352
353 def prepare_build_cache_command() -> List[str]:
354 """
355 Prepare build cache command for docker build. We need to have buildx for that command.
356 This command is needed separately from the build image command because of the bug in multiplatform
357 support for buildx plugin https://github.com/docker/buildx/issues/1044 where when you run multiple
358 platform build, cache from one platform overrides cache for the other platform.
359
360 :param verbose: print commands when running
361 :return: command to use as docker build command
362 """
363 return ["buildx", "build", "--builder", "airflow_cache", "--progress=tty"]
364
365
366 @lru_cache(maxsize=None)
367 def commit_sha():
368 """Returns commit SHA of current repo. Cached for various usages."""
369 command_result = run_command(['git', 'rev-parse', 'HEAD'], capture_output=True, text=True, check=False)
370 if command_result.stdout:
371 return command_result.stdout.strip()
372 else:
373 return "COMMIT_SHA_NOT_FOUND"
374
375
376 def filter_out_none(**kwargs) -> dict:
377 """Filters out all None values from parameters passed."""
378 for key in list(kwargs):
379 if kwargs[key] is None:
380 kwargs.pop(key)
381 return kwargs
382
[end of dev/breeze/src/airflow_breeze/utils/run_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
9214018153dd193be6b1147629f73b23d8195cce
|
UI Doesn't use all of Bootstrap theme css, Airflow 2.0
**Apache Airflow version**: 2.0.0
**Environment**: Ubuntu
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Picked Cyborg.css in web_server config but background is still default

<!-- (please include exact error messages if you can) -->
**What you expected to happen**:
<!-- What do you think went wrong? -->
**How to reproduce it**:
<!---
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If you are using kubernetes, please attempt to recreate the issue using minikube or kind.
## Install minikube/kind
- Minikube https://minikube.sigs.k8s.io/docs/start/
- Kind https://kind.sigs.k8s.io/docs/user/quick-start/
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images using the .md style of

To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.
--->
**Anything else we need to know**:
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here in side a detail tag:
<details><summary>x.log</summary> lots of stuff </details>
-->
|
@hedrickw TIL that APP_THEME was a thing, so it certainly wasn't considered in the 2.0 refresh. I'm curious, is there a specific reason why you want to use an alternative theme?
Because we are stuck on an old version of Bootstrap (via secondary dependency through FAB), we had to move away from some of the previous Bootstrap theme patterns. I propose that instead of trying to accommodate this, that we instead remove the feature. It appears to only add maintenance overhead. If differentiation between Airflow deployments is the goal, there is the option to set a different nav bar color by setting a color value in the `navbar_color` config.
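A minimal sketch of that suggestion is shown below, assuming the standard `AIRFLOW__{SECTION}__{KEY}` environment-variable override of `airflow.cfg` options; the colour value is an arbitrary example, not a recommended default:

```python
# Sketch only: differentiate a deployment via the navbar colour instead of a Bootstrap theme.
# Assumes the AIRFLOW__{SECTION}__{KEY} environment-variable override of airflow.cfg options.
import os

os.environ["AIRFLOW__WEBSERVER__NAVBAR_COLOR"] = "#6b2e5f"  # arbitrary example colour

# The equivalent airflow.cfg entry would be:
# [webserver]
# navbar_color = #6b2e5f
```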
|
2021-04-08T14:17:48Z
|
<patch>
diff --git a/airflow/config_templates/default_webserver_config.py b/airflow/config_templates/default_webserver_config.py
--- a/airflow/config_templates/default_webserver_config.py
+++ b/airflow/config_templates/default_webserver_config.py
@@ -98,31 +98,3 @@
# { 'name': 'AOL', 'url': 'http://openid.aol.com/<username>' },
# { 'name': 'Flickr', 'url': 'http://www.flickr.com/<username>' },
# { 'name': 'MyOpenID', 'url': 'https://www.myopenid.com' }]
-
-# ----------------------------------------------------
-# Theme CONFIG
-# ----------------------------------------------------
-# Flask App Builder comes up with a number of predefined themes
-# that you can use for Apache Airflow.
-# http://flask-appbuilder.readthedocs.io/en/latest/customizing.html#changing-themes
-# Please make sure to remove "navbar_color" configuration from airflow.cfg
-# in order to fully utilize the theme. (or use that property in conjunction with theme)
-# APP_THEME = "bootstrap-theme.css" # default bootstrap
-# APP_THEME = "amelia.css"
-# APP_THEME = "cerulean.css"
-# APP_THEME = "cosmo.css"
-# APP_THEME = "cyborg.css"
-# APP_THEME = "darkly.css"
-# APP_THEME = "flatly.css"
-# APP_THEME = "journal.css"
-# APP_THEME = "lumen.css"
-# APP_THEME = "paper.css"
-# APP_THEME = "readable.css"
-# APP_THEME = "sandstone.css"
-# APP_THEME = "simplex.css"
-# APP_THEME = "slate.css"
-# APP_THEME = "solar.css"
-# APP_THEME = "spacelab.css"
-# APP_THEME = "superhero.css"
-# APP_THEME = "united.css"
-# APP_THEME = "yeti.css"
</patch>
|
[]
|
[]
| |||
apache__airflow-1431
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
depend_on_past strange behavior
Dear Airflow Maintainers,
my environment is
- Airflow version: 1.7.0
- Airflow components: webserver and scheduler with a postgres database and LocalExecutor
- Airflow config
> parallelism = 32
> dag_concurrency = 16
> max_active_runs_per_dag = 16
- Python Version: 2.7.6
- Operating System: Linux ubuntu 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
- Python packages:
iabel==1.3, Flask==0.10.1, Flask-Admin==1.4.0, Flask-Cache==0.13.1, Flask-Login==0.2.11, Flask-WTF==0.12, Jinja2==2.8, Landscape-Client==14.12, Mako==1.0.3, Markdown==2.6.5, MarkupSafe==0.23, PAM==0.4.2, Pygments==2.0.2, SQLAlchemy==1.0.9, Twisted-Core==13.2.0, WTForms==2.0.2, Werkzeug==0.11.2, airflow==1.7.0, alembic==0.8.3, apt-xapian-index==0.45, argparse==1.2.1, cffi==1.3.1, chardet==2.0.1, chartkick==0.4.2, colorama==0.2.5, configobj==4.7.2, croniter==0.3.10, cryptography==1.1.2, dill==0.2.4, enum34==1.1.1, future==0.15.2, gunicorn==19.3.0, html5lib==0.999, idna==2.0, ipaddress==1.0.15, itsdangerous==0.24, numpy==1.10.1, pandas==0.17.1, psycopg2==2.6.1, pyOpenSSL==0.13, pyasn1==0.1.9, pycparser==2.14, pyserial==2.6, python-apt==0.9.3.5ubuntu1, python-dateutil==2.4.2, python-debian==0.1.21-nmu2ubuntu2, python-editor==0.5, python-telegram-bot==3.4, pytz==2015.7, requests==2.2.1, setproctitle==1.1.9, six==1.5.2, ssh-import-id==3.21, thrift==0.9.3, urllib3==1.7.1, vertica-python==0.5.5, wheel==0.24.0, wsgiref==0.1.2, zope.interface==4.0.5
I have the following DAG:
``` python
from airflow import DAG
from airflow.operators import PythonOperator
from datetime import datetime
import logging
import time
default_args = {
'owner': 'airflow',
'depends_on_past': True,
'start_date': datetime(2016, 4, 24),
}
dag_name = 'dp_test'
dag = DAG(
dag_name,
default_args=default_args,
schedule_interval='10 1 * * *')
def cb(**kw):
time.sleep(10)
logging.info('Done %s' % kw['ds'])
d = PythonOperator(task_id="delay", provide_context=True, python_callable=cb, dag=dag)
```
It is run by the scheduler the following way:
the first run scheduled__2016-04-24T00:00:00 completes successfully
the second run scheduled__2016-04-24T01:10:00 is marked as running but the task is not run and it keeps hanging in this state.
I tried the same dag with the SequentialExecutor and also with the latest Airflow version from git. The behavior doesn't change.
Another strange thing that bothers me is the run scheduled at 2016-04-24T00:00:00. The dag schedule interval doesn't suggest such a run.
</issue>
<code>
[start of README.md]
1 # Airflow
2
3 [](https://badge.fury.io/py/airflow)
4 [](https://travis-ci.org/airbnb/airflow)
5 [](https://coveralls.io/github/airbnb/airflow)
6 [](https://landscape.io/github/airbnb/airflow/master)
7 [](https://requires.io/github/airbnb/airflow/requirements/?branch=master)
8 [](https://pypi.python.org/pypi/airflow/)
9 [](http://pythonhosted.org/airflow/)
10 [](https://gitter.im/airbnb/airflow?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
11
12 Airflow is a platform to programmatically author, schedule and monitor
13 workflows.
14
15 When workflows are defined as code, they become more maintainable,
16 versionable, testable, and collaborative.
17
18 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks.
19 The Airflow scheduler executes your tasks on an array of workers while
20 following the specified dependencies. Rich command line utilities make
21 performing complex surgeries on DAGs a snap. The rich user interface
22 makes it easy to visualize pipelines running in production,
23 monitor progress, and troubleshoot issues when needed.
24
25 ## Getting started
26 Please visit the Airflow Platform documentation for help with [installing Airflow](http://pythonhosted.org/airflow/installation.html), getting a [quick start](http://pythonhosted.org/airflow/start.html), or a more complete [tutorial](http://pythonhosted.org/airflow/tutorial.html).
27
28 For further information, please visit the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Home).
29
30 ## Beyond the Horizon
31
32 Airflow **is not** a data streaming solution. Tasks do not move data from
33 one to the other (though tasks can exchange metadata!). Airflow is not
34 in the [Spark Streaming](http://spark.apache.org/streaming/)
35 or [Storm](https://storm.apache.org/) space, it is more comparable to
36 [Oozie](http://oozie.apache.org/) or
37 [Azkaban](https://azkaban.github.io/).
38
39 Workflows are expected to be mostly static or slowly changing. You can think
40 of the structure of the tasks in your workflow as slightly more dynamic
41 than a database structure would be. Airflow workflows are expected to look
42 similar from one run to the next; this allows for clarity around
43 the unit of work and continuity.
44
45 ## Principles
46
47 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
48 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
49 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
50 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. Airflow is ready to scale to infinity.
51
52 ## User Interface
53
54 - **DAGs**: Overview of all DAGs in your environment.
55 
56
57 - **Tree View**: Tree representation of a DAG that spans across time.
58 
59
60 - **Graph View**: Visualization of a DAG's dependencies and their current status for a specific run.
61 
62
63 - **Task Duration**: Total time spent on different tasks over time.
64 
65
66 - **Gantt View**: Duration and overlap of a DAG.
67 
68
69 - **Code View**: Quick way to view source code of a DAG.
70 
71
72 ## Who uses Airflow?
73
74 As the Airflow community grows, we'd like to keep track of who is using
75 the platform. Please send a PR with your company name and @githubhandle
76 if you may.
77
78 Committers:
79
80 * Refer to [Committers](https://cwiki.apache.org/confluence/display/AIRFLOW/Committers)
81
82 Currently **officially** using Airflow:
83
84 * [Airbnb](http://airbnb.io/) [[@mistercrunch](https://github.com/mistercrunch), [@artwr](https://github.com/artwr)]
85 * [Agari](https://github.com/agaridata) [[@r39132](https://github.com/r39132)]
86 * [allegro.pl](http://allegro.tech/) [[@kretes](https://github.com/kretes)]
87 * [Bellhops](https://github.com/bellhops)
88 * BlueApron [[@jasonjho](https://github.com/jasonjho) & [@matthewdavidhauser](https://github.com/matthewdavidhauser)]
89 * [Clover Health](https://www.cloverhealth.com) [[@gwax](https://github.com/gwax) & [@vansivallab](https://github.com/vansivallab)]
90 * Chartboost [[@cgelman](https://github.com/cgelman) & [@dclubb](https://github.com/dclubb)]
91 * [Cotap](https://github.com/cotap/) [[@maraca](https://github.com/maraca) & [@richardchew](https://github.com/richardchew)]
92 * Easy Taxi [[@caique-lima](https://github.com/caique-lima)]
93 * [FreshBooks](https://github.com/freshbooks) [[@DinoCow](https://github.com/DinoCow)]
94 * [Gentner Lab](http://github.com/gentnerlab) [[@neuromusic](https://github.com/neuromusic)]
95 * [Glassdoor](https://github.com/Glassdoor) [[@syvineckruyk](https://github.com/syvineckruyk)]
96 * [Handy](http://www.handy.com/careers/73115?gh_jid=73115&gh_src=o5qcxn) [[@marcintustin](https://github.com/marcintustin) / [@mtustin-handy](https://github.com/mtustin-handy)]
97 * [Holimetrix](http://holimetrix.com/) [[@thibault-ketterer](https://github.com/thibault-ketterer)]
98 * [Hootsuite](https://github.com/hootsuite)
99 * [ING](http://www.ing.com/)
100 * [Jampp](https://github.com/jampp)
101 * [Kogan.com](https://github.com/kogan) [[@geeknam](https://github.com/geeknam)]
102 * [LendUp](https://www.lendup.com/) [[@lendup](https://github.com/lendup)]
103 * [LingoChamp](http://www.liulishuo.com/) [[@haitaoyao](https://github.com/haitaoyao)]
104 * [Lucid](http://luc.id) [[@jbrownlucid](https://github.com/jbrownlucid) & [@kkourtchikov](https://github.com/kkourtchikov)]
105 * [Lyft](https://www.lyft.com/)[[@SaurabhBajaj](https://github.com/SaurabhBajaj)]
106 * [Sense360](https://github.com/Sense360) [[@kamilmroczek](https://github.com/KamilMroczek)]
107 * [Sidecar](https://hello.getsidecar.com/) [[@getsidecar](https://github.com/getsidecar)]
108 * [SimilarWeb](https://www.similarweb.com/) [[@similarweb](https://github.com/similarweb)]
109 * [SmartNews](https://www.smartnews.com/) [[@takus](https://github.com/takus)]
110 * Stripe [@jbalogh]
111 * [Thumbtack](https://www.thumbtack.com/) [[@natekupp](https://github.com/natekupp)]
112 * [WeTransfer](https://github.com/WeTransfer) [[@jochem](https://github.com/jochem)]
113 * Wooga
114 * Xoom [[@gepser](https://github.com/gepser) & [@omarvides](https://github.com/omarvides)]
115 * [WePay](http://www.wepay.com) [[@criccomini](https://github.com/criccomini) & [@mtagle](https://github.com/mtagle)]
116 * Yahoo!
117 * [Zendesk](https://www.github.com/zendesk)
118
119 ## Links
120
121 * [Full documentation on pythonhosted.org](http://pythonhosted.org/airflow/)
122 * [Airflow Google Group (mailing list / forum)](https://groups.google.com/forum/#!forum/airbnb_airflow)
123 * [Airbnb Blog Post about Airflow](http://nerds.airbnb.com/airflow/)
124 * [Airflow Common Pitfalls](https://cwiki.apache.org/confluence/display/AIRFLOW/Common+Pitfalls)
125 * [Hadoop Summit Airflow Video](https://www.youtube.com/watch?v=oYp49mBwH60)
126 * [Airflow at Agari Blog Post](http://agari.com/blog/airflow-agari)
127 * [Best practices with Airflow (Max) nov 2015](https://youtu.be/dgaoqOZlvEA)
128 * [Airflow (Lesson 1) : TriggerDagRunOperator](https://www.linkedin.com/pulse/airflow-lesson-1-triggerdagrunoperator-siddharth-anand?published=t)
129 * [Docker Airflow (externally maintained)](https://github.com/puckel/docker-airflow)
130 * [Airflow: Tips, Tricks, and Pitfalls @ Handy](https://medium.com/handy-tech/airflow-tips-tricks-and-pitfalls-9ba53fba14eb#.o2snqeoz7)
131 * Airflow Chef recipe (community contributed) [github](https://github.com/bahchis/airflow-cookbook) [chef](https://supermarket.chef.io/cookbooks/airflow)
132 * Airflow Puppet Module (community contributed) [github](https://github.com/similarweb/puppet-airflow) [puppet forge](https://forge.puppetlabs.com/similarweb/airflow)
133 * [Gitter (live chat) Channel](https://gitter.im/airbnb/airflow)
134
[end of README.md]
[start of airflow/configuration.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4 from __future__ import unicode_literals
5
6 import copy
7 import errno
8 import logging
9 import os
10 import subprocess
11 import warnings
12
13 from future import standard_library
14 standard_library.install_aliases()
15
16 from builtins import str
17 from collections import OrderedDict
18 from configparser import ConfigParser
19
20 # show Airflow's deprecation warnings
21 warnings.filterwarnings(
22 action='default', category=DeprecationWarning, module='airflow')
23 warnings.filterwarnings(
24 action='default', category=PendingDeprecationWarning, module='airflow')
25
26 class AirflowConfigException(Exception):
27 pass
28
29 try:
30 from cryptography.fernet import Fernet
31 except ImportError:
32 pass
33
34
35 def generate_fernet_key():
36 try:
37 FERNET_KEY = Fernet.generate_key().decode()
38 except NameError:
39 FERNET_KEY = "cryptography_not_found_storing_passwords_in_plain_text"
40 return FERNET_KEY
41
42
43 def expand_env_var(env_var):
44 """
45 Expands (potentially nested) env vars by repeatedly applying
46 `expandvars` and `expanduser` until interpolation stops having
47 any effect.
48 """
49 if not env_var:
50 return env_var
51 while True:
52 interpolated = os.path.expanduser(os.path.expandvars(str(env_var)))
53 if interpolated == env_var:
54 return interpolated
55 else:
56 env_var = interpolated
57
58
59 def run_command(command):
60 """
61 Runs command and returns stdout
62 """
63 process = subprocess.Popen(
64 command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
65 output, stderr = process.communicate()
66
67 if process.returncode != 0:
68 raise AirflowConfigException(
69 "Cannot execute {}. Error code is: {}. Output: {}, Stderr: {}"
70 .format(command, process.returncode, output, stderr)
71 )
72
73 return output
74
75
76 defaults = {
77 'core': {
78 'unit_test_mode': False,
79 'parallelism': 32,
80 'load_examples': True,
81 'plugins_folder': None,
82 'security': None,
83 'donot_pickle': False,
84 'remote_base_log_folder': '',
85 'remote_log_conn_id': '',
86 'encrypt_s3_logs': False,
87 's3_log_folder': '', # deprecated!
88 'dag_concurrency': 16,
89 'max_active_runs_per_dag': 16,
90 'executor': 'SequentialExecutor',
91 'dags_are_paused_at_creation': True,
92 'sql_alchemy_pool_size': 5,
93 'sql_alchemy_pool_recycle': 3600,
94 'dagbag_import_timeout': 30,
95 'non_pooled_task_slot_count': 128,
96 },
97 'operators': {
98 'default_owner': 'airflow'
99 },
100 'webserver': {
101 'base_url': 'http://localhost:8080',
102 'web_server_host': '0.0.0.0',
103 'web_server_port': '8080',
104 'web_server_worker_timeout': 120,
105 'authenticate': False,
106 'filter_by_owner': False,
107 'demo_mode': False,
108 'secret_key': 'airflowified',
109 'expose_config': False,
110 'workers': 4,
111 'worker_class': 'sync',
112 },
113 'scheduler': {
114 'statsd_on': False,
115 'statsd_host': 'localhost',
116 'statsd_port': 8125,
117 'statsd_prefix': 'airflow',
118 'job_heartbeat_sec': 5,
119 'scheduler_heartbeat_sec': 60,
120 'authenticate': False,
121 'max_threads': 2,
122 },
123 'celery': {
124 'default_queue': 'default',
125 'flower_port': '5555'
126 },
127 'email': {
128 'email_backend': 'airflow.utils.email.send_email_smtp',
129 },
130 'smtp': {
131 'smtp_starttls': True,
132 'smtp_ssl': False,
133 'smtp_user': '',
134 'smtp_password': '',
135 },
136 'kerberos': {
137 'ccache': '/tmp/airflow_krb5_ccache',
138 'principal': 'airflow', # gets augmented with fqdn
139 'reinit_frequency': '3600',
140 'kinit_path': 'kinit',
141 'keytab': 'airflow.keytab',
142 },
143 'github_enterprise': {
144 'api_rev': 'v3'
145 }
146 }
147
148 DEFAULT_CONFIG = """\
149 [core]
150 # The home folder for airflow, default is ~/airflow
151 airflow_home = {AIRFLOW_HOME}
152
153 # The folder where your airflow pipelines live, most likely a
154 # subfolder in a code repository
155 dags_folder = {AIRFLOW_HOME}/dags
156
157 # The folder where airflow should store its log files. This location
158 base_log_folder = {AIRFLOW_HOME}/logs
159
160 # Airflow can store logs remotely in AWS S3 or Google Cloud Storage. Users
161 # must supply a remote location URL (starting with either 's3://...' or
162 # 'gs://...') and an Airflow connection id that provides access to the storage
163 # location.
164 remote_base_log_folder =
165 remote_log_conn_id =
166 # Use server-side encryption for logs stored in S3
167 encrypt_s3_logs = False
168 # deprecated option for remote log storage, use remote_base_log_folder instead!
169 # s3_log_folder =
170
171 # The executor class that airflow should use. Choices include
172 # SequentialExecutor, LocalExecutor, CeleryExecutor
173 executor = SequentialExecutor
174
175 # The SqlAlchemy connection string to the metadata database.
176 # SqlAlchemy supports many different database engines, more information
177 # on their website
178 sql_alchemy_conn = sqlite:///{AIRFLOW_HOME}/airflow.db
179
180 # The SqlAlchemy pool size is the maximum number of database connections
181 # in the pool.
182 sql_alchemy_pool_size = 5
183
184 # The SqlAlchemy pool recycle is the number of seconds a connection
185 # can be idle in the pool before it is invalidated. This config does
186 # not apply to sqlite.
187 sql_alchemy_pool_recycle = 3600
188
189 # The amount of parallelism as a setting to the executor. This defines
190 # the max number of task instances that should run simultaneously
191 # on this airflow installation
192 parallelism = 32
193
194 # The number of task instances allowed to run concurrently by the scheduler
195 dag_concurrency = 16
196
197 # Are DAGs paused by default at creation
198 dags_are_paused_at_creation = True
199
200 # When not using pools, tasks are run in the "default pool",
201 # whose size is guided by this config element
202 non_pooled_task_slot_count = 128
203
204 # The maximum number of active DAG runs per DAG
205 max_active_runs_per_dag = 16
206
207 # Whether to load the examples that ship with Airflow. It's good to
208 # get started, but you probably want to set this to False in a production
209 # environment
210 load_examples = True
211
212 # Where your Airflow plugins are stored
213 plugins_folder = {AIRFLOW_HOME}/plugins
214
215 # Secret key to save connection passwords in the db
216 fernet_key = {FERNET_KEY}
217
218 # Whether to disable pickling dags
219 donot_pickle = False
220
221 # How long before timing out a python file import while filling the DagBag
222 dagbag_import_timeout = 30
223
224
225 [operators]
226 # The default owner assigned to each new operator, unless
227 # provided explicitly or passed via `default_args`
228 default_owner = Airflow
229
230
231 [webserver]
232 # The base url of your website as airflow cannot guess what domain or
233 # cname you are using. This is used in automated emails that
234 # airflow sends to point links to the right web server
235 base_url = http://localhost:8080
236
237 # The ip specified when starting the web server
238 web_server_host = 0.0.0.0
239
240 # The port on which to run the web server
241 web_server_port = 8080
242
243 # The time the gunicorn webserver waits before timing out on a worker
244 web_server_worker_timeout = 120
245
246 # Secret key used to run your flask app
247 secret_key = temporary_key
248
249 # Number of workers to run the Gunicorn web server
250 workers = 4
251
252 # The worker class gunicorn should use. Choices include
253 # sync (default), eventlet, gevent
254 worker_class = sync
255
256 # Expose the configuration file in the web server
257 expose_config = true
258
259 # Set to true to turn on authentication:
260 # http://pythonhosted.org/airflow/installation.html#web-authentication
261 authenticate = False
262
263 # Filter the list of dags by owner name (requires authentication to be enabled)
264 filter_by_owner = False
265
266 [email]
267 email_backend = airflow.utils.email.send_email_smtp
268
269 [smtp]
270 # If you want airflow to send emails on retries, failure, and you want to use
271 # the airflow.utils.email.send_email_smtp function, you have to configure an smtp
272 # server here
273 smtp_host = localhost
274 smtp_starttls = True
275 smtp_ssl = False
276 smtp_user = airflow
277 smtp_port = 25
278 smtp_password = airflow
279 smtp_mail_from = [email protected]
280
281 [celery]
282 # This section only applies if you are using the CeleryExecutor in
283 # [core] section above
284
285 # The app name that will be used by celery
286 celery_app_name = airflow.executors.celery_executor
287
288 # The concurrency that will be used when starting workers with the
289 # "airflow worker" command. This defines the number of task instances that
290 # a worker will take, so size up your workers based on the resources on
291 # your worker box and the nature of your tasks
292 celeryd_concurrency = 16
293
294 # When you start an airflow worker, airflow starts a tiny web server
295 # subprocess to serve the workers local log files to the airflow main
296 # web server, who then builds pages and sends them to users. This defines
297 # the port on which the logs are served. It needs to be unused, and open
298 # visible from the main web server to connect into the workers.
299 worker_log_server_port = 8793
300
301 # The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
302 # a sqlalchemy database. Refer to the Celery documentation for more
303 # information.
304 broker_url = sqla+mysql://airflow:airflow@localhost:3306/airflow
305
306 # Another key Celery setting
307 celery_result_backend = db+mysql://airflow:airflow@localhost:3306/airflow
308
309 # Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
310 # it `airflow flower`. This defines the port that Celery Flower runs on
311 flower_port = 5555
312
313 # Default queue that tasks get assigned to and that worker listen on.
314 default_queue = default
315
316 [scheduler]
317 # Task instances listen for external kill signal (when you clear tasks
318 # from the CLI or the UI), this defines the frequency at which they should
319 # listen (in seconds).
320 job_heartbeat_sec = 5
321
322 # The scheduler constantly tries to trigger new tasks (look at the
323 # scheduler section in the docs for more information). This defines
324 # how often the scheduler should run (in seconds).
325 scheduler_heartbeat_sec = 5
326
327 # Statsd (https://github.com/etsy/statsd) integration settings
328 # statsd_on = False
329 # statsd_host = localhost
330 # statsd_port = 8125
331 # statsd_prefix = airflow
332
333 # The scheduler can run multiple threads in parallel to schedule dags.
334 # This defines how many threads will run. However airflow will never
335 # use more threads than the amount of cpu cores available.
336 max_threads = 2
337
338 [mesos]
339 # Mesos master address which MesosExecutor will connect to.
340 master = localhost:5050
341
342 # The framework name which Airflow scheduler will register itself as on mesos
343 framework_name = Airflow
344
345 # Number of cpu cores required for running one task instance using
346 # 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
347 # command on a mesos slave
348 task_cpu = 1
349
350 # Memory in MB required for running one task instance using
351 # 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
352 # command on a mesos slave
353 task_memory = 256
354
355 # Enable framework checkpointing for mesos
356 # See http://mesos.apache.org/documentation/latest/slave-recovery/
357 checkpoint = False
358
359 # Failover timeout in milliseconds.
360 # When checkpointing is enabled and this option is set, Mesos waits
361 # until the configured timeout for
362 # the MesosExecutor framework to re-register after a failover. Mesos
363 # shuts down running tasks if the
364 # MesosExecutor framework fails to re-register within this timeframe.
365 # failover_timeout = 604800
366
367 # Enable framework authentication for mesos
368 # See http://mesos.apache.org/documentation/latest/configuration/
369 authenticate = False
370
371 # Mesos credentials, if authentication is enabled
372 # default_principal = admin
373 # default_secret = admin
374
375 """
376
377 TEST_CONFIG = """\
378 [core]
379 airflow_home = {AIRFLOW_HOME}
380 dags_folder = {TEST_DAGS_FOLDER}
381 base_log_folder = {AIRFLOW_HOME}/logs
382 executor = SequentialExecutor
383 sql_alchemy_conn = sqlite:///{AIRFLOW_HOME}/unittests.db
384 unit_test_mode = True
385 load_examples = True
386 donot_pickle = False
387 dag_concurrency = 16
388 dags_are_paused_at_creation = False
389 fernet_key = {FERNET_KEY}
390 non_pooled_task_slot_count = 128
391
392 [operators]
393 default_owner = airflow
394
395 [webserver]
396 base_url = http://localhost:8080
397 web_server_host = 0.0.0.0
398 web_server_port = 8080
399
400 [email]
401 email_backend = airflow.utils.email.send_email_smtp
402
403 [smtp]
404 smtp_host = localhost
405 smtp_user = airflow
406 smtp_port = 25
407 smtp_password = airflow
408 smtp_mail_from = [email protected]
409
410 [celery]
411 celery_app_name = airflow.executors.celery_executor
412 celeryd_concurrency = 16
413 worker_log_server_port = 8793
414 broker_url = sqla+mysql://airflow:airflow@localhost:3306/airflow
415 celery_result_backend = db+mysql://airflow:airflow@localhost:3306/airflow
416 flower_port = 5555
417 default_queue = default
418
419 [scheduler]
420 job_heartbeat_sec = 1
421 scheduler_heartbeat_sec = 5
422 authenticate = true
423 max_threads = 2
424 """
425
426
427 class ConfigParserWithDefaults(ConfigParser):
428
429 # These configuration elements can be fetched as the stdout of commands
430 # following the "{section}__{name}__cmd" pattern, the idea behind this is to not
431 # store password on boxes in text files.
432 as_command_stdout = {
433 ('core', 'sql_alchemy_conn'),
434 ('core', 'fernet_key'),
435 ('celery', 'broker_url'),
436 ('celery', 'celery_result_backend')
437 }
438
439 def __init__(self, defaults, *args, **kwargs):
440 self.defaults = defaults
441 ConfigParser.__init__(self, *args, **kwargs)
442 self.is_validated = False
443
444 def _validate(self):
445 if (
446 self.get("core", "executor") != 'SequentialExecutor' and
447 "sqlite" in self.get('core', 'sql_alchemy_conn')):
448 raise AirflowConfigException("error: cannot use sqlite with the {}".
449 format(self.get('core', 'executor')))
450
451 self.is_validated = True
452
453 def _get_env_var_option(self, section, key):
454 # must have format AIRFLOW__{SECTION}__{KEY} (note double underscore)
455 env_var = 'AIRFLOW__{S}__{K}'.format(S=section.upper(), K=key.upper())
456 if env_var in os.environ:
457 return expand_env_var(os.environ[env_var])
458
459 def _get_cmd_option(self, section, key):
460 fallback_key = key + '_cmd'
461 if (
462 (section, key) in ConfigParserWithDefaults.as_command_stdout and
463 self.has_option(section, fallback_key)):
464 command = self.get(section, fallback_key)
465 return run_command(command)
466
467 def get(self, section, key, **kwargs):
468 section = str(section).lower()
469 key = str(key).lower()
470
471 d = self.defaults
472
473 # first check environment variables
474 option = self._get_env_var_option(section, key)
475 if option:
476 return option
477
478 # ...then the config file
479 if self.has_option(section, key):
480 return expand_env_var(
481 ConfigParser.get(self, section, key, **kwargs))
482
483 # ...then commands
484 option = self._get_cmd_option(section, key)
485 if option:
486 return option
487
488 # ...then the defaults
489 if section in d and key in d[section]:
490 return expand_env_var(d[section][key])
491
492 else:
493 logging.warn("section/key [{section}/{key}] not found "
494 "in config".format(**locals()))
495
496 raise AirflowConfigException(
497 "section/key [{section}/{key}] not found "
498 "in config".format(**locals()))
499
500 def getboolean(self, section, key):
501 val = str(self.get(section, key)).lower().strip()
502 if '#' in val:
503 val = val.split('#')[0].strip()
504 if val == "true":
505 return True
506 elif val == "false":
507 return False
508 else:
509 raise AirflowConfigException("Not a boolean.")
510
511 def getint(self, section, key):
512 return int(self.get(section, key))
513
514 def getfloat(self, section, key):
515 return float(self.get(section, key))
516
517 def read(self, filenames):
518 ConfigParser.read(self, filenames)
519 self._validate()
520
521 def as_dict(self, display_source=False, display_sensitive=False):
522 """
523 Returns the current configuration as an OrderedDict of OrderedDicts.
524 :param display_source: If False, the option value is returned. If True,
525 a tuple of (option_value, source) is returned. Source is either
526 'airflow.cfg' or 'default'.
527 :type display_source: bool
528 :param display_sensitive: If True, the values of options set by env
529 vars and bash commands will be displayed. If False, those options
530 are shown as '< hidden >'
531 :type display_sensitive: bool
532 """
533 cfg = copy.deepcopy(self._sections)
534
535 # remove __name__ (affects Python 2 only)
536 for options in cfg.values():
537 options.pop('__name__', None)
538
539 # add source
540 if display_source:
541 for section in cfg:
542 for k, v in cfg[section].items():
543 cfg[section][k] = (v, 'airflow.cfg')
544
545 # add env vars and overwrite because they have priority
546 for ev in [ev for ev in os.environ if ev.startswith('AIRFLOW__')]:
547 try:
548 _, section, key = ev.split('__')
549 opt = self._get_env_var_option(section, key)
550 except ValueError:
551 opt = None
552 if opt:
553 if not display_sensitive:
554 opt = '< hidden >'
555 if display_source:
556 opt = (opt, 'env var')
557 cfg.setdefault(section.lower(), OrderedDict()).update(
558 {key.lower(): opt})
559
560 # add bash commands
561 for (section, key) in ConfigParserWithDefaults.as_command_stdout:
562 opt = self._get_cmd_option(section, key)
563 if opt:
564 if not display_sensitive:
565 opt = '< hidden >'
566 if display_source:
567 opt = (opt, 'bash cmd')
568 cfg.setdefault(section, OrderedDict()).update({key: opt})
569
570 # add defaults
571 for section in sorted(self.defaults):
572 for key in sorted(self.defaults[section].keys()):
573 if key not in cfg.setdefault(section, OrderedDict()):
574 opt = str(self.defaults[section][key])
575 if display_source:
576 cfg[section][key] = (opt, 'default')
577 else:
578 cfg[section][key] = opt
579
580 return cfg
581
582
583 def mkdir_p(path):
584 try:
585 os.makedirs(path)
586 except OSError as exc: # Python >2.5
587 if exc.errno == errno.EEXIST and os.path.isdir(path):
588 pass
589 else:
590 raise AirflowConfigException('Had trouble creating a directory')
591
592 """
593 Setting AIRFLOW_HOME and AIRFLOW_CONFIG from environment variables, using
594 "~/airflow" and "~/airflow/airflow.cfg" respectively as defaults.
595 """
596
597 if 'AIRFLOW_HOME' not in os.environ:
598 AIRFLOW_HOME = expand_env_var('~/airflow')
599 else:
600 AIRFLOW_HOME = expand_env_var(os.environ['AIRFLOW_HOME'])
601
602 mkdir_p(AIRFLOW_HOME)
603
604 if 'AIRFLOW_CONFIG' not in os.environ:
605 if os.path.isfile(expand_env_var('~/airflow.cfg')):
606 AIRFLOW_CONFIG = expand_env_var('~/airflow.cfg')
607 else:
608 AIRFLOW_CONFIG = AIRFLOW_HOME + '/airflow.cfg'
609 else:
610 AIRFLOW_CONFIG = expand_env_var(os.environ['AIRFLOW_CONFIG'])
611
612 # Set up dags folder for unit tests
613 # this directory won't exist if users install via pip
614 _TEST_DAGS_FOLDER = os.path.join(
615 os.path.dirname(os.path.dirname(os.path.realpath(__file__))),
616 'tests',
617 'dags')
618 if os.path.exists(_TEST_DAGS_FOLDER):
619 TEST_DAGS_FOLDER = _TEST_DAGS_FOLDER
620 else:
621 TEST_DAGS_FOLDER = os.path.join(AIRFLOW_HOME, 'dags')
622
623
624 def parameterized_config(template):
625 """
626 Generates a configuration from the provided template + variables defined in
627 current scope
628 :param template: a config content templated with {{variables}}
629 """
630 FERNET_KEY = generate_fernet_key()
631 all_vars = {k: v for d in [globals(), locals()] for k, v in d.items()}
632 return template.format(**all_vars)
633
634 TEST_CONFIG_FILE = AIRFLOW_HOME + '/unittests.cfg'
635 if not os.path.isfile(TEST_CONFIG_FILE):
636 logging.info("Creating new airflow config file for unit tests in: " +
637 TEST_CONFIG_FILE)
638 with open(TEST_CONFIG_FILE, 'w') as f:
639 f.write(parameterized_config(TEST_CONFIG))
640
641 if not os.path.isfile(AIRFLOW_CONFIG):
642 # These configuration options are used to generate a default configuration
643 # when it is missing. The right way to change your configuration is to alter
644 # your configuration file, not this code.
645 logging.info("Creating new airflow config file in: " + AIRFLOW_CONFIG)
646 with open(AIRFLOW_CONFIG, 'w') as f:
647 f.write(parameterized_config(DEFAULT_CONFIG))
648
649 logging.info("Reading the config from " + AIRFLOW_CONFIG)
650
651
652 def test_mode():
653 conf = ConfigParserWithDefaults(defaults)
654 conf.read(TEST_CONFIG)
655
656 conf = ConfigParserWithDefaults(defaults)
657 conf.read(AIRFLOW_CONFIG)
658
659
660 def get(section, key, **kwargs):
661 return conf.get(section, key, **kwargs)
662
663
664 def getboolean(section, key):
665 return conf.getboolean(section, key)
666
667
668 def getfloat(section, key):
669 return conf.getfloat(section, key)
670
671
672 def getint(section, key):
673 return conf.getint(section, key)
674
675
676 def has_option(section, key):
677 return conf.has_option(section, key)
678
679
680 def remove_option(section, option):
681 return conf.remove_option(section, option)
682
683
684 def as_dict(display_source=False, display_sensitive=False):
685 return conf.as_dict(
686 display_source=display_source, display_sensitive=display_sensitive)
687 as_dict.__doc__ = conf.as_dict.__doc__
688
689
690 def set(section, option, value): # noqa
691 return conf.set(section, option, value)
692
693 ########################
694 # convenience method to access config entries
695
696
697 def get_dags_folder():
698 return os.path.expanduser(get('core', 'DAGS_FOLDER'))
699
[end of airflow/configuration.py]
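A brief usage note on the lookup order implemented in `ConfigParserWithDefaults.get` above (environment variable, then `airflow.cfg`, then a `<key>_cmd` command for the keys in `as_command_stdout`, then the built-in defaults). The sketch below is illustrative only; the option values and the secrets path are made up:

```python
import os
import subprocess

# 1. Highest precedence: an AIRFLOW__{SECTION}__{KEY} environment variable
#    (note the double underscores) overrides everything else.
os.environ["AIRFLOW__CORE__SQL_ALCHEMY_CONN"] = "postgresql://airflow@db/airflow"
env_var = "AIRFLOW__{S}__{K}".format(S="CORE", K="SQL_ALCHEMY_CONN")
print(os.environ[env_var])

# 2. Otherwise the plain option from airflow.cfg is used.
# 3. Otherwise, for keys listed in as_command_stdout, a "<key>_cmd" option is
#    executed and its stdout becomes the value, which keeps passwords out of
#    the config file, e.g. in airflow.cfg:
#
#        [core]
#        sql_alchemy_conn_cmd = cat /run/secrets/sql_alchemy_conn
#
# 4. Finally the hard-coded defaults are consulted; a missing section/key
#    raises AirflowConfigException.
print(subprocess.check_output("echo value-read-from-a-command", shell=True).strip())
```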
[start of airflow/example_dags/example_python_operator.py]
1 from __future__ import print_function
2 from builtins import range
3 from airflow.operators import PythonOperator
4 from airflow.models import DAG
5 from datetime import datetime, timedelta
6
7 import time
8 from pprint import pprint
9
10 seven_days_ago = datetime.combine(
11 datetime.today() - timedelta(7), datetime.min.time())
12
13 args = {
14 'owner': 'airflow',
15 'start_date': seven_days_ago,
16 }
17
18 dag = DAG(
19 dag_id='example_python_operator', default_args=args,
20 schedule_interval=None)
21
22
23 def my_sleeping_function(random_base):
24 '''This is a function that will run within the DAG execution'''
25 time.sleep(random_base)
26
27
28 def print_context(ds, **kwargs):
29 pprint(kwargs)
30 print(ds)
31 return 'Whatever you return gets printed in the logs'
32
33 run_this = PythonOperator(
34 task_id='print_the_context',
35 provide_context=True,
36 python_callable=print_context,
37 dag=dag)
38
39 for i in range(10):
40 '''
41 Generating 10 sleeping task, sleeping from 0 to 9 seconds
42 respectively
43 '''
44 task = PythonOperator(
45 task_id='sleep_for_'+str(i),
46 python_callable=my_sleeping_function,
47 op_kwargs={'random_base': float(i)/10},
48 dag=dag)
49
50 task.set_upstream(run_this)
51
[end of airflow/example_dags/example_python_operator.py]
[start of airflow/example_dags/example_trigger_target_dag.py]
1 from airflow.operators import BashOperator, PythonOperator
2 from airflow.models import DAG
3 from datetime import datetime
4
5 import pprint
6 pp = pprint.PrettyPrinter(indent=4)
7
8 # This example illustrates the use of the TriggerDagRunOperator. There are 2
9 # entities at work in this scenario:
10 # 1. The Controller DAG - the DAG that conditionally executes the trigger
11 # (in example_trigger_controller.py)
12 # 2. The Target DAG - DAG being triggered
13 #
14 # This example illustrates the following features :
15 # 1. A TriggerDagRunOperator that takes:
16 # a. A python callable that decides whether or not to trigger the Target DAG
17 # b. An optional params dict passed to the python callable to help in
18 # evaluating whether or not to trigger the Target DAG
19 # c. The id (name) of the Target DAG
20 # d. The python callable can add contextual info to the DagRun created by
21 # way of adding a Pickleable payload (e.g. dictionary of primitives). This
22 # state is then made available to the TargetDag
23 # 2. A Target DAG : c.f. example_trigger_target_dag.py
24
25 args = {
26 'start_date': datetime.now(),
27 'owner': 'airflow',
28 }
29
30 dag = DAG(
31 dag_id='example_trigger_target_dag',
32 default_args=args,
33 schedule_interval=None)
34
35
36 def run_this_func(ds, **kwargs):
37 print("Remotely received value of {} for key=message".format(kwargs['dag_run'].conf['message']))
38
39 run_this = PythonOperator(
40 task_id='run_this',
41 provide_context=True,
42 python_callable=run_this_func,
43 dag=dag)
44
45 # You can also access the DagRun object in templates
46 bash_task = BashOperator(
47 task_id="bash_task",
48 bash_command='echo "Here is the message: {{ dag_run.conf["message"] if dag_run else "" }}" ',
49 dag=dag)
50
[end of airflow/example_dags/example_trigger_target_dag.py]
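The comment block at the top of this file refers to a controller DAG (example_trigger_controller.py) that is not part of this listing. Below is a hedged, from-memory sketch of what such a controller could look like; the exact `TriggerDagRunOperator` signature in this Airflow version may differ, so treat the parameter names as assumptions:

```python
from airflow.operators import TriggerDagRunOperator
from airflow.models import DAG
from datetime import datetime

args = {
    'start_date': datetime.now(),
    'owner': 'airflow',
}

dag = DAG(
    dag_id='example_trigger_controller_dag',
    default_args=args,
    schedule_interval='@once')


def conditionally_trigger(context, dag_run_obj):
    """Decide whether to trigger the target DAG and attach a payload."""
    if context['params']['condition_param']:
        dag_run_obj.payload = {'message': context['params']['message']}
        return dag_run_obj
    # Returning None skips the trigger.


trigger = TriggerDagRunOperator(
    task_id='test_trigger_dagrun',
    trigger_dag_id='example_trigger_target_dag',
    python_callable=conditionally_trigger,
    params={'condition_param': True, 'message': 'Hello World'},
    dag=dag)
```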
[start of setup.py]
1 from setuptools import setup, find_packages, Command
2 from setuptools.command.test import test as TestCommand
3
4 import os
5 import sys
6
7 # Kept manually in sync with airflow.__version__
8 version = '1.7.0'
9
10
11 class Tox(TestCommand):
12 user_options = [('tox-args=', None, "Arguments to pass to tox")]
13 def initialize_options(self):
14 TestCommand.initialize_options(self)
15 self.tox_args = ''
16 def finalize_options(self):
17 TestCommand.finalize_options(self)
18 self.test_args = []
19 self.test_suite = True
20 def run_tests(self):
21 #import here, cause outside the eggs aren't loaded
22 import tox
23 errno = tox.cmdline(args=self.tox_args.split())
24 sys.exit(errno)
25
26
27 class CleanCommand(Command):
28 """Custom clean command to tidy up the project root."""
29 user_options = []
30 def initialize_options(self):
31 pass
32 def finalize_options(self):
33 pass
34 def run(self):
35 os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info')
36
37
38 async = [
39 'greenlet>=0.4.9',
40 'eventlet>= 0.9.7',
41 'gevent>=0.13'
42 ]
43 celery = [
44 'celery>=3.1.17',
45 'flower>=0.7.3'
46 ]
47 crypto = ['cryptography>=0.9.3']
48 doc = [
49 'sphinx>=1.2.3',
50 'sphinx-argparse>=0.1.13',
51 'sphinx-rtd-theme>=0.1.6',
52 'Sphinx-PyPI-upload>=0.2.1'
53 ]
54 docker = ['docker-py>=1.6.0']
55 druid = ['pydruid>=0.2.1']
56 gcp_api = [
57 'httplib2',
58 'google-api-python-client>=1.5.0, <1.6.0',
59 'oauth2client>=2.0.2, <2.1.0',
60 'PyOpenSSL',
61 ]
62 hdfs = ['snakebite>=2.7.8']
63 webhdfs = ['hdfs[dataframe,avro,kerberos]>=2.0.4']
64 hive = [
65 'hive-thrift-py>=0.0.1',
66 'pyhive>=0.1.3',
67 'impyla>=0.13.3',
68 'unicodecsv>=0.14.1'
69 ]
70 jdbc = ['jaydebeapi>=0.2.0']
71 mssql = ['pymssql>=2.1.1', 'unicodecsv>=0.14.1']
72 mysql = ['mysqlclient>=1.3.6']
73 rabbitmq = ['librabbitmq>=1.6.1']
74 oracle = ['cx_Oracle>=5.1.2']
75 postgres = ['psycopg2>=2.6']
76 s3 = [
77 'boto>=2.36.0',
78 'filechunkio>=1.6',
79 ]
80 samba = ['pysmbclient>=0.1.3']
81 slack = ['slackclient>=1.0.0']
82 statsd = ['statsd>=3.0.1, <4.0']
83 vertica = ['vertica-python>=0.5.1']
84 ldap = ['ldap3>=0.9.9.1']
85 kerberos = ['pykerberos>=1.1.8',
86 'thrift_sasl>=0.2.0',
87 'snakebite[kerberos]>=2.7.8']
88 password = [
89 'bcrypt>=2.0.0',
90 'flask-bcrypt>=0.7.1',
91 ]
92 github_enterprise = ['Flask-OAuthlib>=0.9.1']
93 qds = ['qds-sdk>=1.9.0']
94 cloudant = ['cloudant>=0.5.9,<2.0'] # major update coming soon, clamp to 0.x
95
96
97 all_dbs = postgres + mysql + hive + mssql + hdfs + vertica + cloudant
98 devel = ['lxml>=3.3.4', 'nose', 'nose-parameterized', 'mock']
99 devel_minreq = devel + mysql + doc + password + s3
100 devel_hadoop = devel_minreq + hive + hdfs + webhdfs + kerberos
101 devel_all = devel + all_dbs + doc + samba + s3 + slack + crypto + oracle + docker
102
103 setup(
104 name='airflow',
105 description='Programmatically author, schedule and monitor data pipelines',
106 license='Apache License 2.0',
107 version=version,
108 packages=find_packages(),
109 package_data={'': ['airflow/alembic.ini']},
110 include_package_data=True,
111 zip_safe=False,
112 scripts=['airflow/bin/airflow'],
113 install_requires=[
114 'alembic>=0.8.3, <0.9',
115 'babel>=1.3, <2.0',
116 'chartkick>=0.4.2, < 0.5',
117 'croniter>=0.3.8, <0.4',
118 'dill>=0.2.2, <0.3',
119 'python-daemon>=2.1.1, <2.2',
120 'flask>=0.10.1, <0.11',
121 'flask-admin>=1.4.0, <2.0.0',
122 'flask-cache>=0.13.1, <0.14',
123 'flask-login==0.2.11',
124 'future>=0.15.0, <0.16',
125 'funcsigs>=0.4, <1',
126 'gunicorn>=19.3.0, <19.4.0', # 19.4.? seemed to have issues
127 'jinja2>=2.7.3, <3.0',
128 'markdown>=2.5.2, <3.0',
129 'pandas>=0.15.2, <1.0.0',
130 'pygments>=2.0.1, <3.0',
131 'python-dateutil>=2.3, <3',
132 'requests>=2.5.1, <3',
133 'setproctitle>=1.1.8, <2',
134 'sqlalchemy>=0.9.8',
135 'thrift>=0.9.2, <0.10',
136 'Flask-WTF==0.12'
137 ],
138 extras_require={
139 'all': devel_all,
140 'all_dbs': all_dbs,
141 'async': async,
142 'celery': celery,
143 'crypto': crypto,
144 'devel': devel_minreq,
145 'devel_hadoop': devel_hadoop,
146 'doc': doc,
147 'docker': docker,
148 'druid': druid,
149 'gcp_api': gcp_api,
150 'hdfs': hdfs,
151 'hive': hive,
152 'jdbc': jdbc,
153 'mssql': mssql,
154 'mysql': mysql,
155 'oracle': oracle,
156 'postgres': postgres,
157 'rabbitmq': rabbitmq,
158 's3': s3,
159 'samba': samba,
160 'slack': slack,
161 'statsd': statsd,
162 'vertica': vertica,
163 'ldap': ldap,
164 'webhdfs': webhdfs,
165 'kerberos': kerberos,
166 'password': password,
167 'github_enterprise': github_enterprise,
168 'qds': qds,
169 'cloudant': cloudant
170 },
171 classifiers={
172 'Development Status :: 5 - Production/Stable',
173 'Environment :: Console',
174 'Environment :: Web Environment',
175 'Intended Audience :: Developers',
176 'Intended Audience :: System Administrators',
177 'License :: OSI Approved :: Apache Software License',
178 'Programming Language :: Python :: 2.7',
179 'Programming Language :: Python :: 3.4',
180 'Topic :: System :: Monitoring',
181 },
182 author='Maxime Beauchemin',
183 author_email='[email protected]',
184 url='https://github.com/airbnb/airflow',
185 download_url=(
186 'https://github.com/airbnb/airflow/tarball/' + version),
187 cmdclass={'test': Tox,
188 'extra_clean': CleanCommand,
189 },
190 )
191
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
40b3fffa07134afe9c87608e6439589db133edb0
|
depend_on_past strange behavior
Dear Airflow Maintainers,
my environment is
- Airflow version: 1.7.0
- Airflow components: webserver and scheduler with a postgres database and LocalExecutor
- Airflow config
> parallelism = 32
> dag_concurrency = 16
> max_active_runs_per_dag = 16
- Python Version: 2.7.6
- Operating System: Linux ubuntu 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
- Python packages:
Babel==1.3, Flask==0.10.1, Flask-Admin==1.4.0, Flask-Cache==0.13.1, Flask-Login==0.2.11, Flask-WTF==0.12, Jinja2==2.8, Landscape-Client==14.12, Mako==1.0.3, Markdown==2.6.5, MarkupSafe==0.23, PAM==0.4.2, Pygments==2.0.2, SQLAlchemy==1.0.9, Twisted-Core==13.2.0, WTForms==2.0.2, Werkzeug==0.11.2, airflow==1.7.0, alembic==0.8.3, apt-xapian-index==0.45, argparse==1.2.1, cffi==1.3.1, chardet==2.0.1, chartkick==0.4.2, colorama==0.2.5, configobj==4.7.2, croniter==0.3.10, cryptography==1.1.2, dill==0.2.4, enum34==1.1.1, future==0.15.2, gunicorn==19.3.0, html5lib==0.999, idna==2.0, ipaddress==1.0.15, itsdangerous==0.24, numpy==1.10.1, pandas==0.17.1, psycopg2==2.6.1, pyOpenSSL==0.13, pyasn1==0.1.9, pycparser==2.14, pyserial==2.6, python-apt==0.9.3.5ubuntu1, python-dateutil==2.4.2, python-debian==0.1.21-nmu2ubuntu2, python-editor==0.5, python-telegram-bot==3.4, pytz==2015.7, requests==2.2.1, setproctitle==1.1.9, six==1.5.2, ssh-import-id==3.21, thrift==0.9.3, urllib3==1.7.1, vertica-python==0.5.5, wheel==0.24.0, wsgiref==0.1.2, zope.interface==4.0.5
I have the following DAG:
``` python
from airflow import DAG
from airflow.operators import PythonOperator
from datetime import datetime
import logging
import time
default_args = {
'owner': 'airflow',
'depends_on_past': True,
'start_date': datetime(2016, 4, 24),
}
dag_name = 'dp_test'
dag = DAG(
dag_name,
default_args=default_args,
schedule_interval='10 1 * * *')
def cb(**kw):
time.sleep(10)
logging.info('Done %s' % kw['ds'])
d = PythonOperator(task_id="delay", provide_context=True, python_callable=cb, dag=dag)
```
It is run by the scheduler in the following way:
the first run, scheduled__2016-04-24T00:00:00, completes successfully;
the second run, scheduled__2016-04-24T01:10:00, is marked as running, but the task never runs and the run keeps hanging in this state.
I tried the same DAG with the SequentialExecutor and also with the latest Airflow version from git. The behavior doesn't change.
Another strange thing that bothers me is the run scheduled at 2016-04-24T00:00:00: the DAG's schedule interval doesn't suggest such a run.
|
Hi @aadim, thanks for being so thorough with your description. It allowed me to dive in properly.
On why the scheduler starts the run at 2016-04-24T00:00:00 (I am _not_ saying this is correct; I don't think it is): the tasks in the DAG also get "2016-04-24" as their start date, and this is used to set next_run_date in models.py:
```
# set start_date based on tasks
task_start_dates = [t.start_date for t in dag.tasks]
if task_start_dates:
next_run_date = min(task_start_dates)
else:
next_run_date = None
logging.info("Set next run date to {} based on task_start_date".format(next_run_date))
```
At the moment I, personally, think that next_run_date should be set to min(task_start_dates) + schedule_interval. What do you think @mistercrunch @jlowin ?
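For illustration, a minimal sketch of that proposal (a sketch only, not a tested patch). It assumes `dag.following_schedule(dttm)` returns the next cron tick strictly after `dttm` (for the `10 1 * * *` interval in this report, 2016-04-24 00:00:00 -> 2016-04-24 01:10:00):
```python
import logging


def first_scheduled_run_date(dag):
    """Sketch of the proposal: schedule the first run one interval past
    the earliest task start_date instead of at the start_date itself."""
    task_start_dates = [t.start_date for t in dag.tasks]
    if not task_start_dates:
        return None
    # Assumed helper: dag.following_schedule(dttm) -> next cron tick after dttm.
    next_run_date = dag.following_schedule(min(task_start_dates))
    logging.info("Set next run date to {} based on task_start_date".format(next_run_date))
    return next_run_date
```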
On the first issue it gets interesting here:
```
[2016-04-25 12:32:58,252] {models.py:2661} INFO - Marking run <DagRun dp_test @ 2016-04-24 00:00:00: scheduled__2016-04-24T00:00:00, externally triggered: False> successful
[2016-04-25 12:32:58,254] {jobs.py:520} INFO - Getting list of tasks to skip for active runs.
[2016-04-25 12:32:58,254] {jobs.py:536} INFO - Checking dependencies on 0 tasks instances, minus 0 skippable ones
[2016-04-25 12:32:58,266] {jobs.py:785} INFO - Done queuing tasks, calling the executor's heartbeat
[2016-04-25 12:32:58,266] {jobs.py:788} INFO - Loop took: 0.261704 seconds
[2016-04-25 12:32:58,269] {models.py:305} INFO - Finding 'running' jobs without a recent heartbeat
[2016-04-25 12:32:58,269] {models.py:311} INFO - Failing jobs without heartbeat after 2016-04-25 12:30:43.269621
/Users/bolke/Documents/dev/airflow_env/lib/python2.7/site-packages/airflow-1.7.0-py2.7.egg/airflow/bin/cli.py:287: DeprecationWarning: The S3_LOG_FOLDER conf key has been replaced by REMOTE_BASE_LOG_FOLDER. Your conf still works but please update airflow.cfg to ensure future compatibility.
DeprecationWarning)
[2016-04-25 12:33:03,010] {jobs.py:618} INFO - Prioritizing 0 queued jobs
[2016-04-25 12:33:03,019] {jobs.py:772} INFO - Starting 1 scheduler jobs
[2016-04-25 12:33:03,031] {jobs.py:387} INFO - Scheduling dag: dp_test Schedule interval:10 1 * * *
[2016-04-25 12:33:03,036] {jobs.py:420} INFO - last scheduled run: 2016-04-24 00:00:00
[2016-04-25 12:33:03,036] {models.py:2479} INFO - Shedule interval:10 1 * * * ddtm:2016-04-24 00:00:00
[2016-04-25 12:33:03,037] {models.py:2482} INFO - Cron Next: 2016-04-24 01:10:00
[2016-04-25 12:33:03,038] {jobs.py:455} INFO - next_run_date: 2016-04-25 01:10:00
[2016-04-25 12:33:03,038] {jobs.py:458} INFO - (after dag.check_start) next_run_date: 2016-04-25 01:10:00
[2016-04-25 12:33:03,038] {models.py:2479} INFO - Shedule interval:10 1 * * * ddtm:2016-04-25 01:10:00
[2016-04-25 12:33:03,038] {models.py:2482} INFO - Cron Next: 2016-04-26 01:10:00
[2016-04-25 12:33:03,038] {jobs.py:470} INFO - final next_run_date: 2016-04-25 01:10:00
```
So, on my machine, the next run date gets set to 2016-04-25 01:10:00, which won't happen as it is already in the past. It is also not marked as running. I will now dive in a little further with dates in the future.
Closing in favor of the Jira issue. Please keep an eye on that one and/or add a vote.
|
2016-04-26T10:01:55Z
|
<patch>
diff --git a/airflow/models.py b/airflow/models.py
--- a/airflow/models.py
+++ b/airflow/models.py
@@ -868,6 +868,7 @@ def set_state(self, state, session):
self.start_date = datetime.now()
self.end_date = datetime.now()
session.merge(self)
+ session.commit()
def is_queueable(
self,
@@ -1106,7 +1107,6 @@ def are_dependencies_met(
session=session, successes=successes, skipped=skipped,
failed=failed, upstream_failed=upstream_failed, done=done,
flag_upstream_failed=flag_upstream_failed)
- session.commit()
if verbose and not satisfied:
logging.warning("Trigger rule `{}` not satisfied".format(task.trigger_rule))
return satisfied
</patch>
|
[]
|
[]
| |||
ray-project__ray-6941
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[rllib]compute_advantages doesn't support a critic without GAE
Looks like this was a [planned feature](https://github.com/ray-project/ray/blob/7edc677304f90bcab0a01e9722e59e36dd9afcb0/python/ray/rllib/evaluation/postprocessing.py#L47) that got overlooked. Seems like a simple fix but maybe I'm missing something.
</issue>
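Before the code listing, a hedged sketch (an illustration, not the repository's actual fix) of what advantages with a critic but without GAE could look like: the discounted return minus the critic's value prediction. The array names loosely mirror RLlib's SampleBatch columns, but everything here is plain NumPy:

```python
import numpy as np
import scipy.signal


def discount(x, gamma):
    # Discounted cumulative sum of x (reverse, filter, reverse).
    return scipy.signal.lfilter([1], [1, -gamma], x[::-1], axis=0)[::-1]


def advantages_without_gae(rewards, vf_preds, last_value, gamma=0.99):
    """Advantage = discounted return - value baseline (no GAE smoothing)."""
    # Bootstrap the trajectory with the value estimate of the final state.
    rewards_plus_v = np.concatenate([rewards, np.array([last_value])])
    value_targets = discount(rewards_plus_v, gamma)[:-1]
    advantages = value_targets - vf_preds
    return advantages.astype(np.float32), value_targets.astype(np.float32)
```

With `last_value = 0.0` for a terminated episode this reduces to a plain Monte Carlo return baseline; for a truncated rollout, `last_value` would be the critic's estimate of the final state.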
<code>
[start of README.rst]
1 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png
2
3 .. image:: https://travis-ci.com/ray-project/ray.svg?branch=master
4 :target: https://travis-ci.com/ray-project/ray
5
6 .. image:: https://readthedocs.org/projects/ray/badge/?version=latest
7 :target: http://ray.readthedocs.io/en/latest/?badge=latest
8
9 |
10
11
12 **Ray is a fast and simple framework for building and running distributed applications.**
13
14 Ray is packaged with the following libraries for accelerating machine learning workloads:
15
16 - `Tune`_: Scalable Hyperparameter Tuning
17 - `RLlib`_: Scalable Reinforcement Learning
18 - `Distributed Training <https://ray.readthedocs.io/en/latest/distributed_training.html>`__
19
20 Install Ray with: ``pip install ray``. For nightly wheels, see the
21 `Installation page <https://ray.readthedocs.io/en/latest/installation.html>`__.
22
23 **NOTE:** `We are deprecating Python 2 support soon.`_
24
25 .. _`We are deprecating Python 2 support soon.`: https://github.com/ray-project/ray/issues/6580
26
27 Quick Start
28 -----------
29
30 Execute Python functions in parallel.
31
32 .. code-block:: python
33
34 import ray
35 ray.init()
36
37 @ray.remote
38 def f(x):
39 return x * x
40
41 futures = [f.remote(i) for i in range(4)]
42 print(ray.get(futures))
43
44 To use Ray's actor model:
45
46 .. code-block:: python
47
48
49 import ray
50 ray.init()
51
52 @ray.remote
53 class Counter(object):
54 def __init__(self):
55 self.n = 0
56
57 def increment(self):
58 self.n += 1
59
60 def read(self):
61 return self.n
62
63 counters = [Counter.remote() for i in range(4)]
64 [c.increment.remote() for c in counters]
65 futures = [c.read.remote() for c in counters]
66 print(ray.get(futures))
67
68
69 Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download `this configuration file <https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-full.yaml>`__, and run:
70
71 ``ray submit [CLUSTER.YAML] example.py --start``
72
73 Read more about `launching clusters <https://ray.readthedocs.io/en/latest/autoscaling.html>`_.
74
75 Tune Quick Start
76 ----------------
77
78 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png
79
80 `Tune`_ is a library for hyperparameter tuning at any scale.
81
82 - Launch a multi-node distributed hyperparameter sweep in less than 10 lines of code.
83 - Supports any deep learning framework, including PyTorch, TensorFlow, and Keras.
84 - Visualize results with `TensorBoard <https://www.tensorflow.org/get_started/summaries_and_tensorboard>`__.
85 - Choose among scalable SOTA algorithms such as `Population Based Training (PBT)`_, `Vizier's Median Stopping Rule`_, `HyperBand/ASHA`_.
86 - Tune integrates with many optimization libraries such as `Facebook Ax <http://ax.dev>`_, `HyperOpt <https://github.com/hyperopt/hyperopt>`_, and `Bayesian Optimization <https://github.com/fmfn/BayesianOptimization>`_ and enables you to scale them transparently.
87
88 To run this example, you will need to install the following:
89
90 .. code-block:: bash
91
92 $ pip install ray[tune] torch torchvision filelock
93
94
95 This example runs a parallel grid search to train a Convolutional Neural Network using PyTorch.
96
97 .. code-block:: python
98
99
100 import torch.optim as optim
101 from ray import tune
102 from ray.tune.examples.mnist_pytorch import (
103 get_data_loaders, ConvNet, train, test)
104
105
106 def train_mnist(config):
107 train_loader, test_loader = get_data_loaders()
108 model = ConvNet()
109 optimizer = optim.SGD(model.parameters(), lr=config["lr"])
110 for i in range(10):
111 train(model, optimizer, train_loader)
112 acc = test(model, test_loader)
113 tune.track.log(mean_accuracy=acc)
114
115
116 analysis = tune.run(
117 train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})
118
119 print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))
120
121 # Get a dataframe for analyzing trial results.
122 df = analysis.dataframe()
123
124 If TensorBoard is installed, automatically visualize all trial results:
125
126 .. code-block:: bash
127
128 tensorboard --logdir ~/ray_results
129
130 .. _`Tune`: https://ray.readthedocs.io/en/latest/tune.html
131 .. _`Population Based Training (PBT)`: https://ray.readthedocs.io/en/latest/tune-schedulers.html#population-based-training-pbt
132 .. _`Vizier's Median Stopping Rule`: https://ray.readthedocs.io/en/latest/tune-schedulers.html#median-stopping-rule
133 .. _`HyperBand/ASHA`: https://ray.readthedocs.io/en/latest/tune-schedulers.html#asynchronous-hyperband
134
135 RLlib Quick Start
136 -----------------
137
138 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/rllib-wide.jpg
139
140 `RLlib`_ is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.
141
142 .. code-block:: bash
143
144 pip install tensorflow # or tensorflow-gpu
145 pip install ray[rllib] # also recommended: ray[debug]
146
147 .. code-block:: python
148
149 import gym
150 from gym.spaces import Discrete, Box
151 from ray import tune
152
153 class SimpleCorridor(gym.Env):
154 def __init__(self, config):
155 self.end_pos = config["corridor_length"]
156 self.cur_pos = 0
157 self.action_space = Discrete(2)
158 self.observation_space = Box(0.0, self.end_pos, shape=(1, ))
159
160 def reset(self):
161 self.cur_pos = 0
162 return [self.cur_pos]
163
164 def step(self, action):
165 if action == 0 and self.cur_pos > 0:
166 self.cur_pos -= 1
167 elif action == 1:
168 self.cur_pos += 1
169 done = self.cur_pos >= self.end_pos
170 return [self.cur_pos], 1 if done else 0, done, {}
171
172 tune.run(
173 "PPO",
174 config={
175 "env": SimpleCorridor,
176 "num_workers": 4,
177 "env_config": {"corridor_length": 5}})
178
179 .. _`RLlib`: https://ray.readthedocs.io/en/latest/rllib.html
180
181
182 More Information
183 ----------------
184
185 - `Documentation`_
186 - `Tutorial`_
187 - `Blog`_
188 - `Ray paper`_
189 - `Ray HotOS paper`_
190 - `RLlib paper`_
191 - `Tune paper`_
192
193 .. _`Documentation`: http://ray.readthedocs.io/en/latest/index.html
194 .. _`Tutorial`: https://github.com/ray-project/tutorial
195 .. _`Blog`: https://ray-project.github.io/
196 .. _`Ray paper`: https://arxiv.org/abs/1712.05889
197 .. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
198 .. _`RLlib paper`: https://arxiv.org/abs/1712.09381
199 .. _`Tune paper`: https://arxiv.org/abs/1807.05118
200
201 Getting Involved
202 ----------------
203
204 - `[email protected]`_: For discussions about development or any general
205 questions.
206 - `StackOverflow`_: For questions about how to use Ray.
207 - `GitHub Issues`_: For reporting bugs and feature requests.
208 - `Pull Requests`_: For submitting code contributions.
209 - `Meetup Group`_: Join our meetup group.
210 - `Community Slack`_: Join our Slack workspace.
211 - `Twitter`_: Follow updates on Twitter.
212
213 .. _`[email protected]`: https://groups.google.com/forum/#!forum/ray-dev
214 .. _`GitHub Issues`: https://github.com/ray-project/ray/issues
215 .. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
216 .. _`Pull Requests`: https://github.com/ray-project/ray/pulls
217 .. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
218 .. _`Community Slack`: https://forms.gle/9TSdDYUgxYs8SA9e8
219 .. _`Twitter`: https://twitter.com/raydistributed
220
[end of README.rst]
[start of doc/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Ray documentation build configuration file, created by
4 # sphinx-quickstart on Fri Jul 1 13:19:58 2016.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import glob
16 import shutil
17 import sys
18 import os
19 import urllib
20 sys.path.insert(0, os.path.abspath('.'))
21 from custom_directives import CustomGalleryItemDirective
22
23 # These lines added to enable Sphinx to work without installing Ray.
24 import mock
25 MOCK_MODULES = [
26 "blist",
27 "gym",
28 "gym.spaces",
29 "ray._raylet",
30 "ray.core.generated",
31 "ray.core.generated.gcs_pb2",
32 "ray.core.generated.ray.protocol.Task",
33 "scipy",
34 "scipy.signal",
35 "scipy.stats",
36 "tensorflow_probability",
37 "tensorflow",
38 "tensorflow.contrib",
39 "tensorflow.contrib.all_reduce",
40 "tensorflow.contrib.all_reduce.python",
41 "tensorflow.contrib.layers",
42 "tensorflow.contrib.rnn",
43 "tensorflow.contrib.slim",
44 "tensorflow.core",
45 "tensorflow.core.util",
46 "tensorflow.python",
47 "tensorflow.python.client",
48 "tensorflow.python.util",
49 "torch",
50 "torch.distributed",
51 "torch.nn",
52 "torch.nn.parallel",
53 "torch.utils.data",
54 ]
55 for mod_name in MOCK_MODULES:
56 sys.modules[mod_name] = mock.Mock()
57 # ray.rllib.models.action_dist.py and
58 # ray.rllib.models.lstm.py will use tf.VERSION
59 sys.modules["tensorflow"].VERSION = "9.9.9"
60
61 # If extensions (or modules to document with autodoc) are in another directory,
62 # add these directories to sys.path here. If the directory is relative to the
63 # documentation root, use os.path.abspath to make it absolute, like shown here.
64 sys.path.insert(0, os.path.abspath("../../python/"))
65
66 # -- General configuration ------------------------------------------------
67
68 # If your documentation needs a minimal Sphinx version, state it here.
69 #needs_sphinx = '1.0'
70
71 # Add any Sphinx extension module names here, as strings. They can be
72 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
73 # ones.
74 extensions = [
75 'sphinx.ext.autodoc',
76 'sphinx.ext.viewcode',
77 'sphinx.ext.napoleon',
78 'sphinx_click.ext',
79 'sphinx-jsonschema',
80 'sphinx_gallery.gen_gallery',
81 'sphinx_copybutton',
82 ]
83
84 sphinx_gallery_conf = {
85 "examples_dirs": ["../examples"], # path to example scripts
86 "gallery_dirs": ["auto_examples"], # path where to save generated examples
87 "ignore_pattern": "../examples/doc_code/",
88 "plot_gallery": "False",
89 # "filename_pattern": "tutorial.py",
90 "backreferences_dir": False
91 # "show_memory': False,
92 # 'min_reported_time': False
93 }
94
95 for i in range(len(sphinx_gallery_conf["examples_dirs"])):
96 gallery_dir = sphinx_gallery_conf["gallery_dirs"][i]
97 source_dir = sphinx_gallery_conf["examples_dirs"][i]
98 try:
99 os.mkdir(gallery_dir)
100 except OSError:
101 pass
102
103 # Copy rst files from source dir to gallery dir.
104 for f in glob.glob(os.path.join(source_dir, '*.rst')):
105 shutil.copy(f, gallery_dir)
106
107 # Add any paths that contain templates here, relative to this directory.
108 templates_path = ['_templates']
109
110 # The suffix of source filenames.
111 from recommonmark.parser import CommonMarkParser
112
113 # The suffix of source filenames.
114 source_suffix = ['.rst', '.md']
115
116 source_parsers = {
117 '.md': CommonMarkParser,
118 }
119
120 # The encoding of source files.
121 #source_encoding = 'utf-8-sig'
122
123 # The master toctree document.
124 master_doc = 'index'
125
126 # General information about the project.
127 project = u'Ray'
128 copyright = u'2019, The Ray Team'
129 author = u'The Ray Team'
130
131 # The version info for the project you're documenting, acts as replacement for
132 # |version| and |release|, also used in various other places throughout the
133 # built documents.
134 #
135 # The short X.Y version.
136 from ray import __version__ as version
137 # The full version, including alpha/beta/rc tags.
138 release = version
139
140 # The language for content autogenerated by Sphinx. Refer to documentation
141 # for a list of supported languages.
142 #
143 # This is also used if you do content translation via gettext catalogs.
144 # Usually you set "language" from the command line for these cases.
145 language = None
146
147 # There are two options for replacing |today|: either, you set today to some
148 # non-false value, then it is used:
149 #today = ''
150 # Else, today_fmt is used as the format for a strftime call.
151 #today_fmt = '%B %d, %Y'
152
153 # List of patterns, relative to source directory, that match files and
154 # directories to ignore when looking for source files.
155 exclude_patterns = ['_build']
156 exclude_patterns += sphinx_gallery_conf['examples_dirs']
157 exclude_patterns += ["*/README.rst"]
158
159 # The reST default role (used for this markup: `text`) to use for all
160 # documents.
161 #default_role = None
162
163 # If true, '()' will be appended to :func: etc. cross-reference text.
164 #add_function_parentheses = True
165
166 # If true, the current module name will be prepended to all description
167 # unit titles (such as .. function::).
168 #add_module_names = True
169
170 # If true, sectionauthor and moduleauthor directives will be shown in the
171 # output. They are ignored by default.
172 #show_authors = False
173
174 # The name of the Pygments (syntax highlighting) style to use.
175 pygments_style = 'sphinx'
176
177 # A list of ignored prefixes for module index sorting.
178 #modindex_common_prefix = []
179
180 # If true, keep warnings as "system message" paragraphs in the built documents.
181 #keep_warnings = False
182
183 # If true, `todo` and `todoList` produce output, else they produce nothing.
184 todo_include_todos = False
185
186 # -- Options for HTML output ----------------------------------------------
187
188 # The theme to use for HTML and HTML Help pages. See the documentation for
189 # a list of builtin themes.
190 import sphinx_rtd_theme
191 html_theme = 'sphinx_rtd_theme'
192 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
193
194 # Theme options are theme-specific and customize the look and feel of a theme
195 # further. For a list of options available for each theme, see the
196 # documentation.
197 #html_theme_options = {}
198
199 # Add any paths that contain custom themes here, relative to this directory.
200 #html_theme_path = []
201
202 # The name for this set of Sphinx documents. If None, it defaults to
203 # "<project> v<release> documentation".
204 #html_title = None
205
206 # A shorter title for the navigation bar. Default is the same as html_title.
207 #html_short_title = None
208
209 # The name of an image file (relative to this directory) to place at the top
210 # of the sidebar.
211 #html_logo = None
212
213 # The name of an image file (within the static path) to use as favicon of the
214 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
215 # pixels large.
216 #html_favicon = None
217
218 # Add any paths that contain custom static files (such as style sheets) here,
219 # relative to this directory. They are copied after the builtin static files,
220 # so a file named "default.css" will overwrite the builtin "default.css".
221 html_static_path = ['_static']
222
223 # Add any extra paths that contain custom files (such as robots.txt or
224 # .htaccess) here, relative to this directory. These files are copied
225 # directly to the root of the documentation.
226 #html_extra_path = []
227
228 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
229 # using the given strftime format.
230 #html_last_updated_fmt = '%b %d, %Y'
231
232 # If true, SmartyPants will be used to convert quotes and dashes to
233 # typographically correct entities.
234 #html_use_smartypants = True
235
236 # Custom sidebar templates, maps document names to template names.
237 html_sidebars = {'**': ['index.html']}
238
239 # Additional templates that should be rendered to pages, maps page names to
240 # template names.
241 #html_additional_pages = {}
242
243 # If false, no module index is generated.
244 #html_domain_indices = True
245
246 # If false, no index is generated.
247 #html_use_index = True
248
249 # If true, the index is split into individual pages for each letter.
250 #html_split_index = False
251
252 # If true, links to the reST sources are added to the pages.
253 #html_show_sourcelink = True
254
255 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
256 #html_show_sphinx = True
257
258 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
259 #html_show_copyright = True
260
261 # If true, an OpenSearch description file will be output, and all pages will
262 # contain a <link> tag referring to it. The value of this option must be the
263 # base URL from which the finished HTML is served.
264 #html_use_opensearch = ''
265
266 # This is the file name suffix for HTML files (e.g. ".xhtml").
267 #html_file_suffix = None
268
269 # Language to be used for generating the HTML full-text search index.
270 # Sphinx supports the following languages:
271 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
272 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
273 #html_search_language = 'en'
274
275 # A dictionary with options for the search language support, empty by default.
276 # Now only 'ja' uses this config value
277 #html_search_options = {'type': 'default'}
278
279 # The name of a javascript file (relative to the configuration directory) that
280 # implements a search results scorer. If empty, the default will be used.
281 #html_search_scorer = 'scorer.js'
282
283 # Output file base name for HTML help builder.
284 htmlhelp_basename = 'Raydoc'
285
286 # -- Options for LaTeX output ---------------------------------------------
287
288 latex_elements = {
289 # The paper size ('letterpaper' or 'a4paper').
290 #'papersize': 'letterpaper',
291
292 # The font size ('10pt', '11pt' or '12pt').
293 #'pointsize': '10pt',
294
295 # Additional stuff for the LaTeX preamble.
296 #'preamble': '',
297
298 # Latex figure (float) alignment
299 #'figure_align': 'htbp',
300 }
301
302 # Grouping the document tree into LaTeX files. List of tuples
303 # (source start file, target name, title,
304 # author, documentclass [howto, manual, or own class]).
305 latex_documents = [
306 (master_doc, 'Ray.tex', u'Ray Documentation', u'The Ray Team', 'manual'),
307 ]
308
309 # The name of an image file (relative to this directory) to place at the top of
310 # the title page.
311 #latex_logo = None
312
313 # For "manual" documents, if this is true, then toplevel headings are parts,
314 # not chapters.
315 #latex_use_parts = False
316
317 # If true, show page references after internal links.
318 #latex_show_pagerefs = False
319
320 # If true, show URL addresses after external links.
321 #latex_show_urls = False
322
323 # Documents to append as an appendix to all manuals.
324 #latex_appendices = []
325
326 # If false, no module index is generated.
327 #latex_domain_indices = True
328
329 # -- Options for manual page output ---------------------------------------
330
331 # One entry per manual page. List of tuples
332 # (source start file, name, description, authors, manual section).
333 man_pages = [(master_doc, 'ray', u'Ray Documentation', [author], 1)]
334
335 # If true, show URL addresses after external links.
336 #man_show_urls = False
337
338 # -- Options for Texinfo output -------------------------------------------
339
340 # Grouping the document tree into Texinfo files. List of tuples
341 # (source start file, target name, title, author,
342 # dir menu entry, description, category)
343 texinfo_documents = [
344 (master_doc, 'Ray', u'Ray Documentation', author, 'Ray',
345 'One line description of project.', 'Miscellaneous'),
346 ]
347
348 # Documents to append as an appendix to all manuals.
349 #texinfo_appendices = []
350
351 # If false, no module index is generated.
352 #texinfo_domain_indices = True
353
354 # How to display URL addresses: 'footnote', 'no', or 'inline'.
355 #texinfo_show_urls = 'footnote'
356
357 # If true, do not generate a @detailmenu in the "Top" node's menu.
358 #texinfo_no_detailmenu = False
359
360 # pcmoritz: To make the following work, you have to run
361 # sudo pip install recommonmark
362
363 # Python methods should be presented in source code order
364 autodoc_member_order = 'bysource'
365
366 # Taken from https://github.com/edx/edx-documentation
367 FEEDBACK_FORM_FMT = "https://github.com/ray-project/ray/issues/new?title={title}&labels=docs&body={body}"
368
369
370 def feedback_form_url(project, page):
371 """Create a URL for feedback on a particular page in a project."""
372 return FEEDBACK_FORM_FMT.format(
373 title=urllib.parse.quote(
374 "[docs] Issue on `{page}.rst`".format(page=page)),
375 body=urllib.parse.quote(
376 "# Documentation Problem/Question/Comment\n"
377 "<!-- Describe your issue/question/comment below. -->\n"
378 "<!-- If there are typos or errors in the docs, feel free to create a pull-request. -->\n"
379 "\n\n\n\n"
380 "(Created directly from the docs)\n"))
381
382
383 def update_context(app, pagename, templatename, context, doctree):
384 """Update the page rendering context to include ``feedback_form_url``."""
385 context['feedback_form_url'] = feedback_form_url(app.config.project,
386 pagename)
387
388
389 # see also http://searchvoidstar.tumblr.com/post/125486358368/making-pdfs-from-markdown-on-readthedocsorg-using
390
391
392 def setup(app):
393 app.connect('html-page-context', update_context)
394 # Custom directives
395 app.add_directive('customgalleryitem', CustomGalleryItemDirective)
396
[end of doc/source/conf.py]
[start of python/ray/__init__.py]
1 import os
2 from os.path import dirname
3 import sys
4
5 # MUST add pickle5 to the import path because it will be imported by some
6 # raylet modules.
7
8 if "pickle5" in sys.modules:
9 raise ImportError("Ray must be imported before pickle5 because Ray "
10 "requires a specific version of pickle5 (which is "
11 "packaged along with Ray).")
12
13 # Add the directory containing pickle5 to the Python path so that we find the
14 # pickle5 version packaged with ray and not a pre-existing pickle5.
15 pickle5_path = os.path.join(
16 os.path.abspath(os.path.dirname(__file__)), "pickle5_files")
17 sys.path.insert(0, pickle5_path)
18
19 # Expose ray ABI symbols which may be dependent by other shared
20 # libraries such as _streaming.so. See BUILD.bazel:_raylet
21 so_path = os.path.join(dirname(__file__), "_raylet.so")
22 if os.path.exists(so_path):
23 import ctypes
24 from ctypes import CDLL
25 CDLL(so_path, ctypes.RTLD_GLOBAL)
26
27 # MUST import ray._raylet before pyarrow to initialize some global variables.
28 # It seems the library related to memory allocation in pyarrow will destroy the
29 # initialization of grpc if we import pyarrow at first.
30 # NOTE(JoeyJiang): See https://github.com/ray-project/ray/issues/5219 for more
31 # details.
32 import ray._raylet # noqa: E402
33
34 if "pyarrow" in sys.modules:
35 raise ImportError("Ray must be imported before pyarrow because Ray "
36 "requires a specific version of pyarrow (which is "
37 "packaged along with Ray).")
38
39 # Add the directory containing pyarrow to the Python path so that we find the
40 # pyarrow version packaged with ray and not a pre-existing pyarrow.
41 pyarrow_path = os.path.join(
42 os.path.abspath(os.path.dirname(__file__)), "pyarrow_files")
43 sys.path.insert(0, pyarrow_path)
44
45 # See https://github.com/ray-project/ray/issues/131.
46 helpful_message = """
47
48 If you are using Anaconda, try fixing this problem by running:
49
50 conda install libgcc
51 """
52
53 try:
54 import pyarrow # noqa: F401
55
56 # pyarrow is not imported inside of _raylet because of the issue described
57 # above. In order for Cython to compile _raylet, pyarrow is set to None
58 # in _raylet instead, so we give _raylet a real reference to it here.
59 # We first do the attribute checks here so that building the documentation
60 # succeeds without fully installing ray..
61 # TODO(edoakes): Fix this.
62 if hasattr(ray, "_raylet") and hasattr(ray._raylet, "pyarrow"):
63 ray._raylet.pyarrow = pyarrow
64 except ImportError as e:
65 if ((hasattr(e, "msg") and isinstance(e.msg, str)
66 and ("libstdc++" in e.msg or "CXX" in e.msg))):
67 # This code path should be taken with Python 3.
68 e.msg += helpful_message
69 elif (hasattr(e, "message") and isinstance(e.message, str)
70 and ("libstdc++" in e.message or "CXX" in e.message)):
71 # This code path should be taken with Python 2.
72 condition = (hasattr(e, "args") and isinstance(e.args, tuple)
73 and len(e.args) == 1 and isinstance(e.args[0], str))
74 if condition:
75 e.args = (e.args[0] + helpful_message, )
76 else:
77 if not hasattr(e, "args"):
78 e.args = ()
79 elif not isinstance(e.args, tuple):
80 e.args = (e.args, )
81 e.args += (helpful_message, )
82 raise
83
84 from ray._raylet import (
85 ActorCheckpointID,
86 ActorClassID,
87 ActorID,
88 ClientID,
89 Config as _Config,
90 JobID,
91 WorkerID,
92 FunctionID,
93 ObjectID,
94 TaskID,
95 UniqueID,
96 ) # noqa: E402
97
98 _config = _Config()
99
100 from ray.profiling import profile # noqa: E402
101 from ray.state import (jobs, nodes, actors, tasks, objects, timeline,
102 object_transfer_timeline, cluster_resources,
103 available_resources, errors) # noqa: E402
104 from ray.worker import (
105 LOCAL_MODE,
106 SCRIPT_MODE,
107 WORKER_MODE,
108 connect,
109 disconnect,
110 get,
111 get_gpu_ids,
112 get_resource_ids,
113 get_webui_url,
114 init,
115 is_initialized,
116 put,
117 register_custom_serializer,
118 remote,
119 shutdown,
120 show_in_webui,
121 wait,
122 ) # noqa: E402
123 import ray.internal # noqa: E402
124 import ray.projects # noqa: E402
125 # We import ray.actor because some code is run in actor.py which initializes
126 # some functions in the worker.
127 import ray.actor # noqa: F401
128 from ray.actor import method # noqa: E402
129 from ray.runtime_context import _get_runtime_context # noqa: E402
130
131 # Ray version string.
132 __version__ = "0.9.0.dev0"
133
134 __all__ = [
135 "jobs",
136 "nodes",
137 "actors",
138 "tasks",
139 "objects",
140 "timeline",
141 "object_transfer_timeline",
142 "cluster_resources",
143 "available_resources",
144 "errors",
145 "LOCAL_MODE",
146 "PYTHON_MODE",
147 "SCRIPT_MODE",
148 "WORKER_MODE",
149 "__version__",
150 "_config",
151 "_get_runtime_context",
152 "actor",
153 "connect",
154 "disconnect",
155 "get",
156 "get_gpu_ids",
157 "get_resource_ids",
158 "get_webui_url",
159 "init",
160 "internal",
161 "is_initialized",
162 "method",
163 "profile",
164 "projects",
165 "put",
166 "register_custom_serializer",
167 "remote",
168 "shutdown",
169 "show_in_webui",
170 "wait",
171 ]
172
173 # ID types
174 __all__ += [
175 "ActorCheckpointID",
176 "ActorClassID",
177 "ActorID",
178 "ClientID",
179 "JobID",
180 "WorkerID",
181 "FunctionID",
182 "ObjectID",
183 "TaskID",
184 "UniqueID",
185 ]
186
187 import ctypes # noqa: E402
188 # Windows only
189 if hasattr(ctypes, "windll"):
190 # Makes sure that all child processes die when we die. Also makes sure that
191 # fatal crashes result in process termination rather than an error dialog
192 # (the latter is annoying since we have a lot of processes). This is done
193 # by associating all child processes with a "job" object that imposes this
194 # behavior.
195 (lambda kernel32: (lambda job: (lambda n: kernel32.SetInformationJobObject(job, 9, "\0" * 17 + chr(0x8 | 0x4 | 0x20) + "\0" * (n - 18), n))(0x90 if ctypes.sizeof(ctypes.c_void_p) > ctypes.sizeof(ctypes.c_int) else 0x70) and kernel32.AssignProcessToJobObject(job, ctypes.c_void_p(kernel32.GetCurrentProcess())))(ctypes.c_void_p(kernel32.CreateJobObjectW(None, None))) if kernel32 is not None else None)(ctypes.windll.kernel32) # noqa: E501
196
[end of python/ray/__init__.py]
[start of python/setup.py]
1 from itertools import chain
2 import os
3 import re
4 import shutil
5 import subprocess
6 import sys
7
8 from setuptools import setup, find_packages, Distribution
9 import setuptools.command.build_ext as _build_ext
10
11 # Ideally, we could include these files by putting them in a
12 # MANIFEST.in or using the package_data argument to setup, but the
13 # MANIFEST.in gets applied at the very beginning when setup.py runs
14 # before these files have been created, so we have to move the files
15 # manually.
16
17 # NOTE: The lists below must be kept in sync with ray/BUILD.bazel.
18
19 ray_files = [
20 "ray/core/src/ray/thirdparty/redis/src/redis-server",
21 "ray/core/src/ray/gcs/redis_module/libray_redis_module.so",
22 "ray/core/src/plasma/plasma_store_server",
23 "ray/_raylet.so",
24 "ray/core/src/ray/raylet/raylet_monitor",
25 "ray/core/src/ray/raylet/raylet",
26 "ray/dashboard/dashboard.py",
27 "ray/streaming/_streaming.so",
28 ]
29
30 build_java = os.getenv("RAY_INSTALL_JAVA") == "1"
31 if build_java:
32 ray_files.append("ray/jars/ray_dist.jar")
33
34 # These are the directories where automatically generated Python protobuf
35 # bindings are created.
36 generated_python_directories = [
37 "ray/core/generated",
38 "ray/streaming/generated",
39 ]
40
41 optional_ray_files = []
42
43 ray_autoscaler_files = [
44 "ray/autoscaler/aws/example-full.yaml",
45 "ray/autoscaler/gcp/example-full.yaml",
46 "ray/autoscaler/local/example-full.yaml",
47 "ray/autoscaler/kubernetes/example-full.yaml",
48 "ray/autoscaler/kubernetes/kubectl-rsync.sh",
49 ]
50
51 ray_project_files = [
52 "ray/projects/schema.json", "ray/projects/templates/cluster_template.yaml",
53 "ray/projects/templates/project_template.yaml",
54 "ray/projects/templates/requirements.txt"
55 ]
56
57 ray_dashboard_files = [
58 os.path.join(dirpath, filename)
59 for dirpath, dirnames, filenames in os.walk("ray/dashboard/client/build")
60 for filename in filenames
61 ]
62
63 optional_ray_files += ray_autoscaler_files
64 optional_ray_files += ray_project_files
65 optional_ray_files += ray_dashboard_files
66
67 if "RAY_USE_NEW_GCS" in os.environ and os.environ["RAY_USE_NEW_GCS"] == "on":
68 ray_files += [
69 "ray/core/src/credis/build/src/libmember.so",
70 "ray/core/src/credis/build/src/libmaster.so",
71 "ray/core/src/credis/redis/src/redis-server"
72 ]
73
74 extras = {
75 "debug": ["psutil", "setproctitle", "py-spy >= 0.2.0"],
76 "dashboard": ["aiohttp", "google", "grpcio", "psutil", "setproctitle"],
77 "serve": ["uvicorn", "pygments", "werkzeug", "flask", "pandas", "blist"],
78 "tune": ["tabulate", "tensorboardX"],
79 }
80
81 extras["rllib"] = extras["tune"] + [
82 "pyyaml",
83 "gym[atari]",
84 "opencv-python-headless",
85 "lz4",
86 "scipy",
87 ]
88
89 extras["all"] = list(set(chain.from_iterable(extras.values())))
90
91
92 class build_ext(_build_ext.build_ext):
93 def run(self):
94 # Note: We are passing in sys.executable so that we use the same
95 # version of Python to build pyarrow inside the build.sh script. Note
96 # that certain flags will not be passed along such as --user or sudo.
97 # TODO(rkn): Fix this.
98 command = ["../build.sh", "-p", sys.executable]
99 if build_java:
100 # Also build binaries for Java if the above env variable exists.
101 command += ["-l", "python,java"]
102 subprocess.check_call(command)
103
104 # We also need to install pyarrow along with Ray, so make sure that the
105 # relevant non-Python pyarrow files get copied.
106 pyarrow_files = []
107 for (root, dirs, filenames) in os.walk("./ray/pyarrow_files/pyarrow"):
108 for name in filenames:
109 pyarrow_files.append(os.path.join(root, name))
110
111 # We also need to install pickle5 along with Ray, so make sure that the
112 # relevant non-Python pickle5 files get copied.
113 pickle5_files = []
114 for (root, dirs, filenames) in os.walk("./ray/pickle5_files/pickle5"):
115 for name in filenames:
116 pickle5_files.append(os.path.join(root, name))
117
118 files_to_include = ray_files + pyarrow_files + pickle5_files
119
120 # Copy over the autogenerated protobuf Python bindings.
121 for directory in generated_python_directories:
122 for filename in os.listdir(directory):
123 if filename[-3:] == ".py":
124 files_to_include.append(os.path.join(directory, filename))
125
126 for filename in files_to_include:
127 self.move_file(filename)
128
129 # Try to copy over the optional files.
130 for filename in optional_ray_files:
131 try:
132 self.move_file(filename)
133 except Exception:
134 print("Failed to copy optional file {}. This is ok."
135 .format(filename))
136
137 def move_file(self, filename):
138 # TODO(rkn): This feels very brittle. It may not handle all cases. See
139 # https://github.com/apache/arrow/blob/master/python/setup.py for an
140 # example.
141 source = filename
142 destination = os.path.join(self.build_lib, filename)
143 # Create the target directory if it doesn't already exist.
144 parent_directory = os.path.dirname(destination)
145 if not os.path.exists(parent_directory):
146 os.makedirs(parent_directory)
147 if not os.path.exists(destination):
148 print("Copying {} to {}.".format(source, destination))
149 shutil.copy(source, destination, follow_symlinks=True)
150
151
152 class BinaryDistribution(Distribution):
153 def has_ext_modules(self):
154 return True
155
156
157 def find_version(*filepath):
158 # Extract version information from filepath
159 here = os.path.abspath(os.path.dirname(__file__))
160 with open(os.path.join(here, *filepath)) as fp:
161 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
162 fp.read(), re.M)
163 if version_match:
164 return version_match.group(1)
165 raise RuntimeError("Unable to find version string.")
166
167
168 requires = [
169 "numpy >= 1.16",
170 "filelock",
171 "jsonschema",
172 "funcsigs",
173 "click",
174 "colorama",
175 "packaging",
176 "pytest",
177 "pyyaml",
178 "redis>=3.3.2",
179 # NOTE: Don't upgrade the version of six! Doing so causes installation
180 # problems. See https://github.com/ray-project/ray/issues/4169.
181 "six >= 1.0.0",
182 "faulthandler;python_version<'3.3'",
183 "protobuf >= 3.8.0",
184 "cloudpickle",
185 ]
186
187 setup(
188 name="ray",
189 version=find_version("ray", "__init__.py"),
190 author="Ray Team",
191 author_email="[email protected]",
192 description=("A system for parallel and distributed Python that unifies "
193 "the ML ecosystem."),
194 long_description=open("../README.rst").read(),
195 url="https://github.com/ray-project/ray",
196 keywords=("ray distributed parallel machine-learning "
197 "reinforcement-learning deep-learning python"),
198 packages=find_packages(),
199 cmdclass={"build_ext": build_ext},
200 # The BinaryDistribution argument triggers build_ext.
201 distclass=BinaryDistribution,
202 install_requires=requires,
203 setup_requires=["cython >= 0.29"],
204 extras_require=extras,
205 entry_points={
206 "console_scripts": [
207 "ray=ray.scripts.scripts:main",
208 "rllib=ray.rllib.scripts:cli [rllib]", "tune=ray.tune.scripts:cli"
209 ]
210 },
211 include_package_data=True,
212 zip_safe=False,
213 license="Apache 2.0")
214
[end of python/setup.py]
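As a quick illustration of the `find_version` helper above, here is a minimal, self-contained sketch of the same regex lookup; the module text is a made-up stand-in rather than the real `ray/__init__.py`:

```python
import re

# Hypothetical stand-in for ray/__init__.py; only the __version__ line matters here.
module_text = '__version__ = "0.9.0.dev0"\n'

match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", module_text, re.M)
if match:
    print(match.group(1))  # prints: 0.9.0.dev0
else:
    raise RuntimeError("Unable to find version string.")
```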
[start of rllib/contrib/maddpg/maddpg.py]
1 """Contributed port of MADDPG from OpenAI baselines.
2
3 The implementation has a couple assumptions:
4 - The number of agents is fixed and known upfront.
5 - Each agent is bound to a policy of the same name.
6 - Discrete actions are sent as logits (pre-softmax).
7
8 For a minimal example, see twostep_game.py, and the README for how to run
9 with the multi-agent particle envs.
10 """
11
12 import logging
13
14 from ray.rllib.agents.trainer import with_common_config
15 from ray.rllib.agents.dqn.dqn import GenericOffPolicyTrainer
16 from ray.rllib.contrib.maddpg.maddpg_policy import MADDPGTFPolicy
17 from ray.rllib.optimizers import SyncReplayOptimizer
18 from ray.rllib.policy.sample_batch import SampleBatch, MultiAgentBatch
19
20 logger = logging.getLogger(__name__)
21 logger.setLevel(logging.INFO)
22
23 # yapf: disable
24 # __sphinx_doc_begin__
25 DEFAULT_CONFIG = with_common_config({
26 # === Settings for each individual policy ===
27 # ID of the agent controlled by this policy
28 "agent_id": None,
29 # Use a local critic for this policy.
30 "use_local_critic": False,
31
32 # === Evaluation ===
33 # Evaluation interval
34 "evaluation_interval": None,
35 # Number of episodes to run per evaluation period.
36 "evaluation_num_episodes": 10,
37
38 # === Model ===
39 # Apply a state preprocessor with spec given by the "model" config option
40 # (like other RL algorithms). This is mostly useful if you have a weird
41 # observation shape, like an image. Disabled by default.
42 "use_state_preprocessor": False,
43 # Postprocess the policy network model output with these hidden layers. If
44 # use_state_preprocessor is False, then these will be the *only* hidden
45 # layers in the network.
46 "actor_hiddens": [64, 64],
47 # Hidden layers activation of the postprocessing stage of the policy
48 # network
49 "actor_hidden_activation": "relu",
50 # Postprocess the critic network model output with these hidden layers;
51 # again, if use_state_preprocessor is True, then the state will be
52 # preprocessed by the model specified with the "model" config option first.
53 "critic_hiddens": [64, 64],
54 # Hidden layers activation of the postprocessing state of the critic.
55 "critic_hidden_activation": "relu",
56 # N-step Q learning
57 "n_step": 1,
58 # Algorithm for good policies
59 "good_policy": "maddpg",
60 # Algorithm for adversary policies
61 "adv_policy": "maddpg",
62
63 # === Replay buffer ===
64 # Size of the replay buffer. Note that if async_updates is set, then
65 # each worker will have a replay buffer of this size.
66 "buffer_size": int(1e6),
67 # Observation compression. Note that compression makes simulation slow in
68 # MPE.
69 "compress_observations": False,
70
71 # === Optimization ===
72 # Learning rate for the critic (Q-function) optimizer.
73 "critic_lr": 1e-2,
74 # Learning rate for the actor (policy) optimizer.
75 "actor_lr": 1e-2,
76 # Update the target network every `target_network_update_freq` steps.
77 "target_network_update_freq": 0,
78 # Update the target by \tau * policy + (1-\tau) * target_policy
79 "tau": 0.01,
80 # Weights for feature regularization for the actor
81 "actor_feature_reg": 0.001,
82 # If not None, clip gradients during optimization at this value
83 "grad_norm_clipping": 0.5,
84 # How many steps of the model to sample before learning starts.
85 "learning_starts": 1024 * 25,
86 # Update the replay buffer with this many samples at once. Note that this
87 # setting applies per-worker if num_workers > 1.
88 "sample_batch_size": 100,
89 # Size of a batched sampled from replay buffer for training. Note that
90 # if async_updates is set, then each worker returns gradients for a
91 # batch of this size.
92 "train_batch_size": 1024,
93 # Number of env steps to optimize for before returning
94 "timesteps_per_iteration": 0,
95
96 # === Parallelism ===
97 # Number of workers for collecting samples with. This only makes sense
98 # to increase if your environment is particularly slow to sample, or if
99 # you're using the Async or Ape-X optimizers.
100 "num_workers": 1,
101 # Prevent iterations from going lower than this time span
102 "min_iter_time_s": 0,
103 })
104 # __sphinx_doc_end__
105 # yapf: enable
106
107
108 def set_global_timestep(trainer):
109 global_timestep = trainer.optimizer.num_steps_sampled
110 trainer.train_start_timestep = global_timestep
111
112
113 def before_learn_on_batch(multi_agent_batch, policies, train_batch_size):
114 samples = {}
115
116 # Modify keys.
117 for pid, p in policies.items():
118 i = p.config["agent_id"]
119 keys = multi_agent_batch.policy_batches[pid].data.keys()
120 keys = ["_".join([k, str(i)]) for k in keys]
121 samples.update(
122 dict(
123 zip(keys,
124 multi_agent_batch.policy_batches[pid].data.values())))
125
126 # Make ops and feed_dict to get "new_obs" from target action sampler.
127 new_obs_ph_n = [p.new_obs_ph for p in policies.values()]
128 new_obs_n = list()
129 for k, v in samples.items():
130 if "new_obs" in k:
131 new_obs_n.append(v)
132
133 target_act_sampler_n = [p.target_act_sampler for p in policies.values()]
134 feed_dict = dict(zip(new_obs_ph_n, new_obs_n))
135
136 new_act_n = p.sess.run(target_act_sampler_n, feed_dict)
137 samples.update(
138 {"new_actions_%d" % i: new_act
139 for i, new_act in enumerate(new_act_n)})
140
141 # Share samples among agents.
142 policy_batches = {pid: SampleBatch(samples) for pid in policies.keys()}
143 return MultiAgentBatch(policy_batches, train_batch_size)
144
145
146 def make_optimizer(workers, config):
147 return SyncReplayOptimizer(
148 workers,
149 learning_starts=config["learning_starts"],
150 buffer_size=config["buffer_size"],
151 train_batch_size=config["train_batch_size"],
152 before_learn_on_batch=before_learn_on_batch,
153 synchronize_sampling=True,
154 prioritized_replay=False)
155
156
157 def add_trainer_metrics(trainer, result):
158 global_timestep = trainer.optimizer.num_steps_sampled
159 result.update(
160 timesteps_this_iter=global_timestep - trainer.train_start_timestep,
161 info=dict({
162 "num_target_updates": trainer.state["num_target_updates"],
163 }, **trainer.optimizer.stats()))
164
165
166 def collect_metrics(trainer):
167 result = trainer.collect_metrics()
168 return result
169
170
171 MADDPGTrainer = GenericOffPolicyTrainer.with_updates(
172 name="MADDPG",
173 default_config=DEFAULT_CONFIG,
174 default_policy=MADDPGTFPolicy,
175 before_init=None,
176 before_train_step=set_global_timestep,
177 make_policy_optimizer=make_optimizer,
178 after_train_result=add_trainer_metrics,
179 collect_metrics_fn=collect_metrics,
180 before_evaluate_fn=None)
181
[end of rllib/contrib/maddpg/maddpg.py]
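To make the key-renaming step in `before_learn_on_batch` easier to follow, here is a minimal sketch using plain dicts in place of RLlib's `SampleBatch`; the policy ids, agent ids, and field names are invented for illustration:

```python
# Hypothetical per-policy batches; in RLlib these would be SampleBatch objects.
policy_agent_ids = {"pol_0": 0, "pol_1": 1}
policy_batches = {
    "pol_0": {"obs": [0.1, 0.2], "new_obs": [0.2, 0.3]},
    "pol_1": {"obs": [1.1, 1.2], "new_obs": [1.2, 1.3]},
}

samples = {}
for pid, batch in policy_batches.items():
    agent_id = policy_agent_ids[pid]
    # Suffix every key with the agent id, e.g. "obs" -> "obs_0", so that all
    # agents' columns can live side by side in one flat dict.
    samples.update({"{}_{}".format(key, agent_id): value
                    for key, value in batch.items()})

print(sorted(samples))  # ['new_obs_0', 'new_obs_1', 'obs_0', 'obs_1']
```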
[start of rllib/evaluation/postprocessing.py]
1 import numpy as np
2 import scipy.signal
3 from ray.rllib.policy.sample_batch import SampleBatch
4 from ray.rllib.utils.annotations import DeveloperAPI
5
6
7 def discount(x, gamma):
8 return scipy.signal.lfilter([1], [1, -gamma], x[::-1], axis=0)[::-1]
9
10
11 class Postprocessing:
12 """Constant definitions for postprocessing."""
13
14 ADVANTAGES = "advantages"
15 VALUE_TARGETS = "value_targets"
16
17
18 @DeveloperAPI
19 def compute_advantages(rollout, last_r, gamma=0.9, lambda_=1.0, use_gae=True):
20 """
21 Given a rollout, compute its value targets and the advantage.
22
23 Args:
24 rollout (SampleBatch): SampleBatch of a single trajectory
25 last_r (float): Value estimation for last observation
26 gamma (float): Discount factor.
27 lambda_ (float): Parameter for GAE
28 use_gae (bool): Using Generalized Advantage Estimation
29
30 Returns:
31 SampleBatch (SampleBatch): Object with experience from rollout and
32 processed rewards.
33 """
34
35 traj = {}
36 trajsize = len(rollout[SampleBatch.ACTIONS])
37 for key in rollout:
38 traj[key] = np.stack(rollout[key])
39
40 if use_gae:
41 assert SampleBatch.VF_PREDS in rollout, "Values not found!"
42 vpred_t = np.concatenate(
43 [rollout[SampleBatch.VF_PREDS],
44 np.array([last_r])])
45 delta_t = (
46 traj[SampleBatch.REWARDS] + gamma * vpred_t[1:] - vpred_t[:-1])
47 # This formula for the advantage comes from:
48 # "Generalized Advantage Estimation": https://arxiv.org/abs/1506.02438
49 traj[Postprocessing.ADVANTAGES] = discount(delta_t, gamma * lambda_)
50 traj[Postprocessing.VALUE_TARGETS] = (
51 traj[Postprocessing.ADVANTAGES] +
52 traj[SampleBatch.VF_PREDS]).copy().astype(np.float32)
53 else:
54 rewards_plus_v = np.concatenate(
55 [rollout[SampleBatch.REWARDS],
56 np.array([last_r])])
57 traj[Postprocessing.ADVANTAGES] = discount(rewards_plus_v, gamma)[:-1]
58 # TODO(ekl): support using a critic without GAE
59 traj[Postprocessing.VALUE_TARGETS] = np.zeros_like(
60 traj[Postprocessing.ADVANTAGES])
61
62 traj[Postprocessing.ADVANTAGES] = traj[
63 Postprocessing.ADVANTAGES].copy().astype(np.float32)
64
65 assert all(val.shape[0] == trajsize for val in traj.values()), \
66 "Rollout stacked incorrectly!"
67 return SampleBatch(traj)
68
[end of rllib/evaluation/postprocessing.py]
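For readers new to GAE, here is a minimal numeric sketch of the `use_gae=True` branch above, with no RLlib imports and arbitrary reward/value numbers; it is illustrative only, not a drop-in replacement for `compute_advantages`:

```python
import numpy as np
import scipy.signal


def discount(x, gamma):
    # Same reversed-lfilter trick used in postprocessing.py.
    return scipy.signal.lfilter([1], [1, -gamma], x[::-1], axis=0)[::-1]


gamma, lambda_ = 0.99, 0.95
rewards = np.array([1.0, 0.0, 1.0])     # made-up 3-step trajectory
vf_preds = np.array([0.5, 0.4, 0.6])    # made-up value predictions
last_r = 0.0                            # value estimate after the last step

vpred_t = np.concatenate([vf_preds, np.array([last_r])])
delta_t = rewards + gamma * vpred_t[1:] - vpred_t[:-1]   # TD residuals
advantages = discount(delta_t, gamma * lambda_)          # GAE(lambda)
value_targets = advantages + vf_preds                    # value regression targets

print(advantages)
print(value_targets)
```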
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>

Repository: ray-project/ray
Base commit: 67319bc887ad952d619cb5e02b843f64dc3f8874

Problem statement:
[rllib]compute_advantages doesn't support a critic without GAE
Looks like this was a [planned feature](https://github.com/ray-project/ray/blob/7edc677304f90bcab0a01e9722e59e36dd9afcb0/python/ray/rllib/evaluation/postprocessing.py#L47) that got overlooked. Seems like a simple fix but maybe I'm missing something.

Hints:
Yeah I think this is basically a couple line fix.
Hey Neil, would you be interested in opening a PR for the fix? We'll help you clean it up into a good state and get it merged. Thanks! Richard
[Done.](https://github.com/ray-project/ray/pull/3778)

Created at: 2020-01-28T13:44:50Z

Patch:
<patch>
diff --git a/rllib/agents/a3c/a3c.py b/rllib/agents/a3c/a3c.py
--- a/rllib/agents/a3c/a3c.py
+++ b/rllib/agents/a3c/a3c.py
@@ -6,6 +6,13 @@
# yapf: disable
# __sphinx_doc_begin__
DEFAULT_CONFIG = with_common_config({
+ # Should use a critic as a baseline (otherwise don't use value baseline;
+ # required for using GAE).
+ "use_critic": True,
+ # If true, use the Generalized Advantage Estimator (GAE)
+ # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.
+ "use_gae": True,
+
# Size of rollout batch
"sample_batch_size": 10,
# GAE(gamma) parameter
diff --git a/rllib/agents/a3c/a3c_tf_policy.py b/rllib/agents/a3c/a3c_tf_policy.py
--- a/rllib/agents/a3c/a3c_tf_policy.py
+++ b/rllib/agents/a3c/a3c_tf_policy.py
@@ -61,8 +61,9 @@ def postprocess_advantages(policy,
sample_batch[SampleBatch.ACTIONS][-1],
sample_batch[SampleBatch.REWARDS][-1],
*next_state)
- return compute_advantages(sample_batch, last_r, policy.config["gamma"],
- policy.config["lambda"])
+ return compute_advantages(
+ sample_batch, last_r, policy.config["gamma"], policy.config["lambda"],
+ policy.config["use_gae"], policy.config["use_critic"])
def add_value_function_fetch(policy):
diff --git a/rllib/agents/a3c/a3c_torch_policy.py b/rllib/agents/a3c/a3c_torch_policy.py
--- a/rllib/agents/a3c/a3c_torch_policy.py
+++ b/rllib/agents/a3c/a3c_torch_policy.py
@@ -43,8 +43,9 @@ def add_advantages(policy,
last_r = 0.0
else:
last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1])
- return compute_advantages(sample_batch, last_r, policy.config["gamma"],
- policy.config["lambda"])
+ return compute_advantages(
+ sample_batch, last_r, policy.config["gamma"], policy.config["lambda"],
+ policy.config["use_gae"], policy.config["use_critic"])
def model_value_predictions(policy, input_dict, state_batches, model,
diff --git a/rllib/agents/marwil/marwil_policy.py b/rllib/agents/marwil/marwil_policy.py
--- a/rllib/agents/marwil/marwil_policy.py
+++ b/rllib/agents/marwil/marwil_policy.py
@@ -80,7 +80,11 @@ def postprocess_advantages(policy,
sample_batch[SampleBatch.REWARDS][-1],
*next_state)
return compute_advantages(
- sample_batch, last_r, policy.config["gamma"], use_gae=False)
+ sample_batch,
+ last_r,
+ policy.config["gamma"],
+ use_gae=False,
+ use_critic=False)
class MARWILLoss(object):
diff --git a/rllib/agents/pg/pg_tf_policy.py b/rllib/agents/pg/pg_tf_policy.py
--- a/rllib/agents/pg/pg_tf_policy.py
+++ b/rllib/agents/pg/pg_tf_policy.py
@@ -8,19 +8,26 @@
tf = try_import_tf()
-def post_process_advantages(policy, sample_batch, other_agent_batches=None,
+def post_process_advantages(policy,
+ sample_batch,
+ other_agent_batches=None,
episode=None):
"""This adds the "advantages" column to the sample train_batch."""
- return compute_advantages(sample_batch, 0.0, policy.config["gamma"],
- use_gae=False)
+ return compute_advantages(
+ sample_batch,
+ 0.0,
+ policy.config["gamma"],
+ use_gae=False,
+ use_critic=False)
def pg_tf_loss(policy, model, dist_class, train_batch):
"""The basic policy gradients loss."""
logits, _ = model.from_batch(train_batch)
action_dist = dist_class(logits, model)
- return -tf.reduce_mean(action_dist.logp(train_batch[SampleBatch.ACTIONS])
- * train_batch[Postprocessing.ADVANTAGES])
+ return -tf.reduce_mean(
+ action_dist.logp(train_batch[SampleBatch.ACTIONS]) *
+ train_batch[Postprocessing.ADVANTAGES])
PGTFPolicy = build_tf_policy(
diff --git a/rllib/agents/ppo/appo.py b/rllib/agents/ppo/appo.py
--- a/rllib/agents/ppo/appo.py
+++ b/rllib/agents/ppo/appo.py
@@ -11,6 +11,9 @@
"vtrace": False,
# == These two options only apply if vtrace: False ==
+ # Should use a critic as a baseline (otherwise don't use value baseline;
+ # required for using GAE).
+ "use_critic": True,
# If true, use the Generalized Advantage Estimator (GAE)
# with a value function, see https://arxiv.org/pdf/1506.02438.pdf.
"use_gae": True,
diff --git a/rllib/agents/ppo/appo_policy.py b/rllib/agents/ppo/appo_policy.py
--- a/rllib/agents/ppo/appo_policy.py
+++ b/rllib/agents/ppo/appo_policy.py
@@ -389,7 +389,8 @@ def postprocess_trajectory(policy,
last_r,
policy.config["gamma"],
policy.config["lambda"],
- use_gae=policy.config["use_gae"])
+ use_gae=policy.config["use_gae"],
+ use_critic=policy.config["use_critic"])
else:
batch = sample_batch
del batch.data["new_obs"] # not used, so save some bandwidth
diff --git a/rllib/agents/ppo/ppo.py b/rllib/agents/ppo/ppo.py
--- a/rllib/agents/ppo/ppo.py
+++ b/rllib/agents/ppo/ppo.py
@@ -14,9 +14,13 @@
# yapf: disable
# __sphinx_doc_begin__
DEFAULT_CONFIG = with_common_config({
+ # Should use a critic as a baseline (otherwise don't use value baseline;
+ # required for using GAE).
+ "use_critic": True,
# If true, use the Generalized Advantage Estimator (GAE)
# with a value function, see https://arxiv.org/pdf/1506.02438.pdf.
"use_gae": True,
+
# The GAE(lambda) parameter.
"lambda": 1.0,
# Initial coefficient for KL divergence.
diff --git a/rllib/evaluation/postprocessing.py b/rllib/evaluation/postprocessing.py
--- a/rllib/evaluation/postprocessing.py
+++ b/rllib/evaluation/postprocessing.py
@@ -16,7 +16,12 @@ class Postprocessing:
@DeveloperAPI
-def compute_advantages(rollout, last_r, gamma=0.9, lambda_=1.0, use_gae=True):
+def compute_advantages(rollout,
+ last_r,
+ gamma=0.9,
+ lambda_=1.0,
+ use_gae=True,
+ use_critic=True):
"""
Given a rollout, compute its value targets and the advantage.
@@ -26,6 +31,8 @@ def compute_advantages(rollout, last_r, gamma=0.9, lambda_=1.0, use_gae=True):
gamma (float): Discount factor.
lambda_ (float): Parameter for GAE
use_gae (bool): Using Generalized Advantage Estimation
+ use_critic (bool): Whether to use critic (value estimates). Setting
+ this to False will use 0 as baseline.
Returns:
SampleBatch (SampleBatch): Object with experience from rollout and
@@ -37,8 +44,12 @@ def compute_advantages(rollout, last_r, gamma=0.9, lambda_=1.0, use_gae=True):
for key in rollout:
traj[key] = np.stack(rollout[key])
+ assert SampleBatch.VF_PREDS in rollout or not use_critic, \
+ "use_critic=True but values not found"
+ assert use_critic or not use_gae, \
+ "Can't use gae without using a value function"
+
if use_gae:
- assert SampleBatch.VF_PREDS in rollout, "Values not found!"
vpred_t = np.concatenate(
[rollout[SampleBatch.VF_PREDS],
np.array([last_r])])
@@ -54,10 +65,18 @@ def compute_advantages(rollout, last_r, gamma=0.9, lambda_=1.0, use_gae=True):
rewards_plus_v = np.concatenate(
[rollout[SampleBatch.REWARDS],
np.array([last_r])])
- traj[Postprocessing.ADVANTAGES] = discount(rewards_plus_v, gamma)[:-1]
- # TODO(ekl): support using a critic without GAE
- traj[Postprocessing.VALUE_TARGETS] = np.zeros_like(
- traj[Postprocessing.ADVANTAGES])
+ discounted_returns = discount(rewards_plus_v,
+ gamma)[:-1].copy().astype(np.float32)
+
+ if use_critic:
+ traj[Postprocessing.
+ ADVANTAGES] = discounted_returns - rollout[SampleBatch.
+ VF_PREDS]
+ traj[Postprocessing.VALUE_TARGETS] = discounted_returns
+ else:
+ traj[Postprocessing.ADVANTAGES] = discounted_returns
+ traj[Postprocessing.VALUE_TARGETS] = np.zeros_like(
+ traj[Postprocessing.ADVANTAGES])
traj[Postprocessing.ADVANTAGES] = traj[
Postprocessing.ADVANTAGES].copy().astype(np.float32)
</patch>
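The behavioural change this patch introduces for the non-GAE path boils down to the following minimal sketch (plain NumPy, made-up numbers): with `use_critic=True` the discounted returns are baseline-subtracted, while with `use_critic=False` they are used as-is and the value targets stay zero.

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0, 1.0])    # made-up rewards
vf_preds = np.array([0.5, 0.4, 0.6])   # made-up value predictions
last_r = 0.0                           # bootstrap value after the last step

rewards_plus_v = np.concatenate([rewards, [last_r]])
# Discounted return at each step, dropping the bootstrap element itself.
discounted_returns = np.array([
    sum(gamma ** k * r for k, r in enumerate(rewards_plus_v[t:]))
    for t in range(len(rewards))
], dtype=np.float32)

adv_with_critic = discounted_returns - vf_preds   # use_gae=False, use_critic=True
adv_without_critic = discounted_returns           # use_critic=False

print(adv_with_critic)
print(adv_without_critic)
```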

Instance: pandas-dev__pandas-26698

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CI - Azure: print skipped tests
We have a `ci/print_skipped.py` script to print the skipped tests, which is currently used in Travis (see the end of the `.travis.yml` file). It would be good to also do this in Azure as an extra step.
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org): 1.13.3 or higher
168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
172 for recommended and optional dependencies.
173
174 ## Installation from sources
175 To install pandas from source you need Cython in addition to the normal
176 dependencies above. Cython can be installed from pypi:
177
178 ```sh
179 pip install cython
180 ```
181
182 In the `pandas` directory (same one where you found this file after
183 cloning the git repo), execute:
184
185 ```sh
186 python setup.py install
187 ```
188
189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
190
191 ```sh
192 python setup.py develop
193 ```
194
195 Alternatively, you can use `pip` if you want all the dependencies pulled
196 in automatically (the `-e` option is for installing it in [development
197 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
198
199 ```sh
200 pip install -e .
201 ```
202
203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
204
205 ## License
206 [BSD 3](LICENSE)
207
208 ## Documentation
209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
224
225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
226
227 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
228
229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
230
231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
232
233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
234
235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
236
[end of README.md]
[start of ci/print_skipped.py]
1 #!/usr/bin/env python
2
3 import sys
4 import math
5 import xml.etree.ElementTree as et
6
7
8 def parse_results(filename):
9 tree = et.parse(filename)
10 root = tree.getroot()
11 skipped = []
12
13 current_class = ''
14 i = 1
15 assert i - 1 == len(skipped)
16 for el in root.findall('testcase'):
17 cn = el.attrib['classname']
18 for sk in el.findall('skipped'):
19 old_class = current_class
20 current_class = cn
21 name = '{classname}.{name}'.format(classname=current_class,
22 name=el.attrib['name'])
23 msg = sk.attrib['message']
24 out = ''
25 if old_class != current_class:
26 ndigits = int(math.log(i, 10) + 1)
27
28 # 4 for : + space + # + space
29 out += ('-' * (len(name + msg) + 4 + ndigits) + '\n')
30 out += '#{i} {name}: {msg}'.format(i=i, name=name, msg=msg)
31 skipped.append(out)
32 i += 1
33 assert i - 1 == len(skipped)
34 assert i - 1 == len(skipped)
35 # assert len(skipped) == int(root.attrib['skip'])
36 return '\n'.join(skipped)
37
38
39 def main(args):
40 print('SKIPPED TESTS:')
41 for fn in args.filename:
42 print(parse_results(fn))
43 return 0
44
45
46 def parse_args():
47 import argparse
48 parser = argparse.ArgumentParser()
49 parser.add_argument('filename', nargs='+', help='XUnit file to parse')
50 return parser.parse_args()
51
52
53 if __name__ == '__main__':
54 sys.exit(main(parse_args()))
55
[end of ci/print_skipped.py]
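A small, self-contained sketch of what the script above does, assuming a hypothetical xunit report written to a temporary file; the class and test names are invented:

```python
import tempfile
import xml.etree.ElementTree as et

report = """<testsuite>
  <testcase classname="pandas.tests.test_foo" name="test_bar">
    <skipped message="requires xlrd"/>
  </testcase>
  <testcase classname="pandas.tests.test_foo" name="test_baz"/>
</testsuite>"""

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(report)
    path = f.name

root = et.parse(path).getroot()
for case in root.findall("testcase"):
    for sk in case.findall("skipped"):
        print("{}.{}: {}".format(case.attrib["classname"],
                                 case.attrib["name"],
                                 sk.attrib["message"]))
# prints: pandas.tests.test_foo.test_bar: requires xlrd
```

On CI the real step would simply invoke `ci/print_skipped.py` with the path of the test-results XML file that the job already produces.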
[start of scripts/validate_docstrings.py]
1 #!/usr/bin/env python
2 """
3 Analyze docstrings to detect errors.
4
5 If no argument is provided, it does a quick check of docstrings and returns
6 a csv with all API functions and results of basic checks.
7
8 If a function or method is provided in the form "pandas.function",
9 "pandas.module.class.method", etc. a list of all errors in the docstring for
10 the specified function or method.
11
12 Usage::
13 $ ./validate_docstrings.py
14 $ ./validate_docstrings.py pandas.DataFrame.head
15 """
16 import os
17 import sys
18 import json
19 import re
20 import glob
21 import functools
22 import collections
23 import argparse
24 import pydoc
25 import inspect
26 import importlib
27 import doctest
28 import tempfile
29 import ast
30 import textwrap
31
32 import flake8.main.application
33
34 try:
35 from io import StringIO
36 except ImportError:
37 from cStringIO import StringIO
38
39 # Template backend makes matplotlib to not plot anything. This is useful
40 # to avoid that plot windows are open from the doctests while running the
41 # script. Setting here before matplotlib is loaded.
42 # We don't warn for the number of open plots, as none is actually being opened
43 os.environ['MPLBACKEND'] = 'Template'
44 import matplotlib
45 matplotlib.rc('figure', max_open_warning=10000)
46
47 import numpy
48
49 BASE_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
50
51 sys.path.insert(0, os.path.join(BASE_PATH))
52 import pandas
53
54 sys.path.insert(1, os.path.join(BASE_PATH, 'doc', 'sphinxext'))
55 from numpydoc.docscrape import NumpyDocString
56 from pandas.io.formats.printing import pprint_thing
57
58
59 PRIVATE_CLASSES = ['NDFrame', 'IndexOpsMixin']
60 DIRECTIVES = ['versionadded', 'versionchanged', 'deprecated']
61 ALLOWED_SECTIONS = ['Parameters', 'Attributes', 'Methods', 'Returns', 'Yields',
62 'Other Parameters', 'Raises', 'Warns', 'See Also', 'Notes',
63 'References', 'Examples']
64 ERROR_MSGS = {
65 'GL01': 'Docstring text (summary) should start in the line immediately '
66 'after the opening quotes (not in the same line, or leaving a '
67 'blank line in between)',
68 'GL02': 'Closing quotes should be placed in the line after the last text '
69 'in the docstring (do not close the quotes in the same line as '
70 'the text, or leave a blank line between the last text and the '
71 'quotes)',
72 'GL03': 'Double line break found; please use only one blank line to '
73 'separate sections or paragraphs, and do not leave blank lines '
74 'at the end of docstrings',
75 'GL04': 'Private classes ({mentioned_private_classes}) should not be '
76 'mentioned in public docstrings',
77 'GL05': 'Tabs found at the start of line "{line_with_tabs}", please use '
78 'whitespace only',
79 'GL06': 'Found unknown section "{section}". Allowed sections are: '
80 '{allowed_sections}',
81 'GL07': 'Sections are in the wrong order. Correct order is: '
82 '{correct_sections}',
83 'GL08': 'The object does not have a docstring',
84 'GL09': 'Deprecation warning should precede extended summary',
85 'SS01': 'No summary found (a short summary in a single line should be '
86 'present at the beginning of the docstring)',
87 'SS02': 'Summary does not start with a capital letter',
88 'SS03': 'Summary does not end with a period',
89 'SS04': 'Summary contains heading whitespaces',
90 'SS05': 'Summary must start with infinitive verb, not third person '
91 '(e.g. use "Generate" instead of "Generates")',
92 'SS06': 'Summary should fit in a single line',
93 'ES01': 'No extended summary found',
94 'PR01': 'Parameters {missing_params} not documented',
95 'PR02': 'Unknown parameters {unknown_params}',
96 'PR03': 'Wrong parameters order. Actual: {actual_params}. '
97 'Documented: {documented_params}',
98 'PR04': 'Parameter "{param_name}" has no type',
99 'PR05': 'Parameter "{param_name}" type should not finish with "."',
100 'PR06': 'Parameter "{param_name}" type should use "{right_type}" instead '
101 'of "{wrong_type}"',
102 'PR07': 'Parameter "{param_name}" has no description',
103 'PR08': 'Parameter "{param_name}" description should start with a '
104 'capital letter',
105 'PR09': 'Parameter "{param_name}" description should finish with "."',
106 'PR10': 'Parameter "{param_name}" requires a space before the colon '
107 'separating the parameter name and type',
108 'RT01': 'No Returns section found',
109 'RT02': 'The first line of the Returns section should contain only the '
110 'type, unless multiple values are being returned',
111 'RT03': 'Return value has no description',
112 'RT04': 'Return value description should start with a capital letter',
113 'RT05': 'Return value description should finish with "."',
114 'YD01': 'No Yields section found',
115 'SA01': 'See Also section not found',
116 'SA02': 'Missing period at end of description for See Also '
117 '"{reference_name}" reference',
118 'SA03': 'Description should be capitalized for See Also '
119 '"{reference_name}" reference',
120 'SA04': 'Missing description for See Also "{reference_name}" reference',
121 'SA05': '{reference_name} in `See Also` section does not need `pandas` '
122 'prefix, use {right_reference} instead.',
123 'EX01': 'No examples section found',
124 'EX02': 'Examples do not pass tests:\n{doctest_log}',
125 'EX03': 'flake8 error: {error_code} {error_message}{times_happening}',
126 'EX04': 'Do not import {imported_library}, as it is imported '
127 'automatically for the examples (numpy as np, pandas as pd)',
128 }
129
130
131 def error(code, **kwargs):
132 """
133 Return a tuple with the error code and the message with variables replaced.
134
135 This is syntactic sugar so instead of:
136 - `('EX02', ERROR_MSGS['EX02'].format(doctest_log=log))`
137
138 We can simply use:
139 - `error('EX02', doctest_log=log)`
140
141 Parameters
142 ----------
143 code : str
144 Error code.
145 **kwargs
146 Values for the variables in the error messages
147
148 Returns
149 -------
150 code : str
151 Error code.
152 message : str
153         Error message with variables replaced.
154 """
155 return (code, ERROR_MSGS[code].format(**kwargs))
156
157
158 def get_api_items(api_doc_fd):
159 """
160 Yield information about all public API items.
161
162 Parse api.rst file from the documentation, and extract all the functions,
163 methods, classes, attributes... This should include all pandas public API.
164
165 Parameters
166 ----------
167 api_doc_fd : file descriptor
168 A file descriptor of the API documentation page, containing the table
169 of contents with all the public API.
170
171 Yields
172 ------
173 name : str
174 The name of the object (e.g. 'pandas.Series.str.upper).
175 func : function
176 The object itself. In most cases this will be a function or method,
177 but it can also be classes, properties, cython objects...
178 section : str
179 The name of the section in the API page where the object item is
180 located.
181 subsection : str
182 The name of the subsection in the API page where the object item is
183 located.
184 """
185 current_module = 'pandas'
186 previous_line = current_section = current_subsection = ''
187 position = None
188 for line in api_doc_fd:
189 line = line.strip()
190 if len(line) == len(previous_line):
191 if set(line) == set('-'):
192 current_section = previous_line
193 continue
194 if set(line) == set('~'):
195 current_subsection = previous_line
196 continue
197
198 if line.startswith('.. currentmodule::'):
199 current_module = line.replace('.. currentmodule::', '').strip()
200 continue
201
202 if line == '.. autosummary::':
203 position = 'autosummary'
204 continue
205
206 if position == 'autosummary':
207 if line == '':
208 position = 'items'
209 continue
210
211 if position == 'items':
212 if line == '':
213 position = None
214 continue
215 item = line.strip()
216 func = importlib.import_module(current_module)
217 for part in item.split('.'):
218 func = getattr(func, part)
219
220 yield ('.'.join([current_module, item]), func,
221 current_section, current_subsection)
222
223 previous_line = line
224
225
226 class Docstring:
227 def __init__(self, name):
228 self.name = name
229 obj = self._load_obj(name)
230 self.obj = obj
231 self.code_obj = self._to_original_callable(obj)
232 self.raw_doc = obj.__doc__ or ''
233 self.clean_doc = pydoc.getdoc(obj)
234 self.doc = NumpyDocString(self.clean_doc)
235
236 def __len__(self):
237 return len(self.raw_doc)
238
239 @staticmethod
240 def _load_obj(name):
241 """
242 Import Python object from its name as string.
243
244 Parameters
245 ----------
246 name : str
247 Object name to import (e.g. pandas.Series.str.upper)
248
249 Returns
250 -------
251 object
252 Python object that can be a class, method, function...
253
254 Examples
255 --------
256 >>> Docstring._load_obj('pandas.Series')
257 <class 'pandas.core.series.Series'>
258 """
259 for maxsplit in range(1, name.count('.') + 1):
260 # TODO when py3 only replace by: module, *func_parts = ...
261 func_name_split = name.rsplit('.', maxsplit)
262 module = func_name_split[0]
263 func_parts = func_name_split[1:]
264 try:
265 obj = importlib.import_module(module)
266 except ImportError:
267 pass
268 else:
269 continue
270
271 if 'obj' not in locals():
272 raise ImportError('No module can be imported '
273 'from "{}"'.format(name))
274
275 for part in func_parts:
276 obj = getattr(obj, part)
277 return obj
278
279 @staticmethod
280 def _to_original_callable(obj):
281 """
282 Find the Python object that contains the source code of the object.
283
284 This is useful to find the place in the source code (file and line
285 number) where a docstring is defined. It does not currently work for
286 all cases, but it should help find some (properties...).
287 """
288 while True:
289 if inspect.isfunction(obj) or inspect.isclass(obj):
290 f = inspect.getfile(obj)
291 if f.startswith('<') and f.endswith('>'):
292 return None
293 return obj
294 if inspect.ismethod(obj):
295 obj = obj.__func__
296 elif isinstance(obj, functools.partial):
297 obj = obj.func
298 elif isinstance(obj, property):
299 obj = obj.fget
300 else:
301 return None
302
303 @property
304 def type(self):
305 return type(self.obj).__name__
306
307 @property
308 def is_function_or_method(self):
309 # TODO(py27): remove ismethod
310 return (inspect.isfunction(self.obj)
311 or inspect.ismethod(self.obj))
312
313 @property
314 def source_file_name(self):
315 """
316 File name where the object is implemented (e.g. pandas/core/frame.py).
317 """
318 try:
319 fname = inspect.getsourcefile(self.code_obj)
320 except TypeError:
321 # In some cases the object is something complex like a cython
322         # object that can't be easily introspected. And it's better to
323         # return the source code file of the object as None than to crash
324 pass
325 else:
326 if fname:
327 fname = os.path.relpath(fname, BASE_PATH)
328 return fname
329
330 @property
331 def source_file_def_line(self):
332 """
333 Number of line where the object is defined in its file.
334 """
335 try:
336 return inspect.getsourcelines(self.code_obj)[-1]
337 except (OSError, TypeError):
338 # In some cases the object is something complex like a cython
339             # object that can't be easily introspected. And it's better to
340             # return the line number as None than to crash
341 pass
342
343 @property
344 def github_url(self):
345 url = 'https://github.com/pandas-dev/pandas/blob/master/'
346 url += '{}#L{}'.format(self.source_file_name,
347 self.source_file_def_line)
348 return url
349
350 @property
351 def start_blank_lines(self):
352 i = None
353 if self.raw_doc:
354 for i, row in enumerate(self.raw_doc.split('\n')):
355 if row.strip():
356 break
357 return i
358
359 @property
360 def end_blank_lines(self):
361 i = None
362 if self.raw_doc:
363 for i, row in enumerate(reversed(self.raw_doc.split('\n'))):
364 if row.strip():
365 break
366 return i
367
368 @property
369 def double_blank_lines(self):
370 prev = True
371 for row in self.raw_doc.split('\n'):
372 if not prev and not row.strip():
373 return True
374 prev = row.strip()
375 return False
376
377 @property
378 def section_titles(self):
379 sections = []
380 self.doc._doc.reset()
381 while not self.doc._doc.eof():
382 content = self.doc._read_to_next_section()
383 if (len(content) > 1
384 and len(content[0]) == len(content[1])
385 and set(content[1]) == {'-'}):
386 sections.append(content[0])
387 return sections
388
389 @property
390 def summary(self):
391 return ' '.join(self.doc['Summary'])
392
393 @property
394 def num_summary_lines(self):
395 return len(self.doc['Summary'])
396
397 @property
398 def extended_summary(self):
399 if not self.doc['Extended Summary'] and len(self.doc['Summary']) > 1:
400 return ' '.join(self.doc['Summary'])
401 return ' '.join(self.doc['Extended Summary'])
402
403 @property
404 def needs_summary(self):
405 return not (bool(self.summary) and bool(self.extended_summary))
406
407 @property
408 def doc_parameters(self):
409 return collections.OrderedDict((name, (type_, ''.join(desc)))
410 for name, type_, desc
411 in self.doc['Parameters'])
412
413 @property
414 def signature_parameters(self):
415 if inspect.isclass(self.obj):
416 if hasattr(self.obj, '_accessors') and (
417 self.name.split('.')[-1] in
418 self.obj._accessors):
419 # accessor classes have a signature but don't want to show this
420 return tuple()
421 try:
422 sig = inspect.getfullargspec(self.obj)
423 except (TypeError, ValueError):
424 # Some objects, mainly in C extensions do not support introspection
425 # of the signature
426 return tuple()
427 params = sig.args
428 if sig.varargs:
429 params.append("*" + sig.varargs)
430 if sig.varkw:
431 params.append("**" + sig.varkw)
432 params = tuple(params)
433 if params and params[0] in ('self', 'cls'):
434 return params[1:]
435 return params
436
437 @property
438 def parameter_mismatches(self):
439 errs = []
440 signature_params = self.signature_parameters
441 doc_params = tuple(self.doc_parameters)
442 missing = set(signature_params) - set(doc_params)
443 if missing:
444 errs.append(error('PR01', missing_params=pprint_thing(missing)))
445 extra = set(doc_params) - set(signature_params)
446 if extra:
447 errs.append(error('PR02', unknown_params=pprint_thing(extra)))
448 if (not missing and not extra and signature_params != doc_params
449 and not (not signature_params and not doc_params)):
450 errs.append(error('PR03',
451 actual_params=signature_params,
452 documented_params=doc_params))
453
454 return errs
455
456 @property
457 def correct_parameters(self):
458 return not bool(self.parameter_mismatches)
459
460 def parameter_type(self, param):
461 return self.doc_parameters[param][0]
462
463 def parameter_desc(self, param):
464 desc = self.doc_parameters[param][1]
465 # Find and strip out any sphinx directives
466 for directive in DIRECTIVES:
467 full_directive = '.. {}'.format(directive)
468 if full_directive in desc:
469 # Only retain any description before the directive
470 desc = desc[:desc.index(full_directive)]
471 return desc
472
473 @property
474 def see_also(self):
475 result = collections.OrderedDict()
476 for funcs, desc in self.doc['See Also']:
477 for func, _ in funcs:
478 result[func] = ''.join(desc)
479
480 return result
481
482 @property
483 def examples(self):
484 return self.doc['Examples']
485
486 @property
487 def returns(self):
488 return self.doc['Returns']
489
490 @property
491 def yields(self):
492 return self.doc['Yields']
493
494 @property
495 def method_source(self):
496 try:
497 source = inspect.getsource(self.obj)
498 except TypeError:
499 return ''
500 return textwrap.dedent(source)
501
502 @property
503 def method_returns_something(self):
504 '''
505 Check if the docstrings method can return something.
506
507         Bare returns, returns valued None, and returns from nested functions
508         are not considered.
509
510 Returns
511 -------
512 bool
513 Whether the docstrings method can return something.
514 '''
515
516 def get_returns_not_on_nested_functions(node):
517 returns = [node] if isinstance(node, ast.Return) else []
518 for child in ast.iter_child_nodes(node):
519 # Ignore nested functions and its subtrees.
520 if not isinstance(child, ast.FunctionDef):
521 child_returns = get_returns_not_on_nested_functions(child)
522 returns.extend(child_returns)
523 return returns
524
525 tree = ast.parse(self.method_source).body
526 if tree:
527 returns = get_returns_not_on_nested_functions(tree[0])
528 return_values = [r.value for r in returns]
529 # Replace NameConstant nodes valued None for None.
530 for i, v in enumerate(return_values):
531 if isinstance(v, ast.NameConstant) and v.value is None:
532 return_values[i] = None
533 return any(return_values)
534 else:
535 return False
536
537 @property
538 def first_line_ends_in_dot(self):
539 if self.doc:
540 return self.doc.split('\n')[0][-1] == '.'
541
542 @property
543 def deprecated(self):
544 return '.. deprecated:: ' in (self.summary + self.extended_summary)
545
546 @property
547 def mentioned_private_classes(self):
548 return [klass for klass in PRIVATE_CLASSES if klass in self.raw_doc]
549
550 @property
551 def examples_errors(self):
552 flags = doctest.NORMALIZE_WHITESPACE | doctest.IGNORE_EXCEPTION_DETAIL
553 finder = doctest.DocTestFinder()
554 runner = doctest.DocTestRunner(optionflags=flags)
555 context = {'np': numpy, 'pd': pandas}
556 error_msgs = ''
557 for test in finder.find(self.raw_doc, self.name, globs=context):
558 f = StringIO()
559 runner.run(test, out=f.write)
560 error_msgs += f.getvalue()
561 return error_msgs
562
563 @property
564 def examples_source_code(self):
565 lines = doctest.DocTestParser().get_examples(self.raw_doc)
566 return [line.source for line in lines]
567
568 def validate_pep8(self):
569 if not self.examples:
570 return
571
572 # F401 is needed to not generate flake8 errors in examples
573         # that do not use numpy or pandas
574 content = ''.join(('import numpy as np # noqa: F401\n',
575 'import pandas as pd # noqa: F401\n',
576 *self.examples_source_code))
577
578 application = flake8.main.application.Application()
579 application.initialize(["--quiet"])
580
581 with tempfile.NamedTemporaryFile(mode='w', encoding='utf-8') as file:
582 file.write(content)
583 file.flush()
584 application.run_checks([file.name])
585
586 # We need this to avoid flake8 printing the names of the files to
587 # the standard output
588 application.formatter.write = lambda line, source: None
589 application.report()
590
591 yield from application.guide.stats.statistics_for('')
592
593
594 def get_validation_data(doc):
595 """
596 Validate the docstring.
597
598 Parameters
599 ----------
600 doc : Docstring
601 A Docstring object with the given function name.
602
603 Returns
604 -------
605 tuple
606 errors : list of tuple
607 Errors occurred during validation.
608 warnings : list of tuple
609 Warnings occurred during validation.
610 examples_errs : str
611 Examples usage displayed along the error, otherwise empty string.
612
613 Notes
614 -----
615 The errors codes are defined as:
616 - First two characters: Section where the error happens:
617 * GL: Global (no section, like section ordering errors)
618 * SS: Short summary
619 * ES: Extended summary
620 * PR: Parameters
621 * RT: Returns
622 * YD: Yields
623 * RS: Raises
624 * WN: Warns
625 * SA: See Also
626 * NT: Notes
627 * RF: References
628 * EX: Examples
629 - Last two characters: Numeric error code inside the section
630
631 For example, EX02 is the second codified error in the Examples section
632 (which in this case is assigned to examples that do not pass the tests).
633
634 The error codes, their corresponding error messages, and the details on how
635 they are validated, are not documented more than in the source code of this
636 function.
637 """
638
639 errs = []
640 wrns = []
641 if not doc.raw_doc:
642 errs.append(error('GL08'))
643 return errs, wrns, ''
644
645 if doc.start_blank_lines != 1:
646 errs.append(error('GL01'))
647 if doc.end_blank_lines != 1:
648 errs.append(error('GL02'))
649 if doc.double_blank_lines:
650 errs.append(error('GL03'))
651 mentioned_errs = doc.mentioned_private_classes
652 if mentioned_errs:
653 errs.append(error('GL04',
654 mentioned_private_classes=', '.join(mentioned_errs)))
655 for line in doc.raw_doc.splitlines():
656 if re.match("^ *\t", line):
657 errs.append(error('GL05', line_with_tabs=line.lstrip()))
658
659 unexpected_sections = [section for section in doc.section_titles
660 if section not in ALLOWED_SECTIONS]
661 for section in unexpected_sections:
662 errs.append(error('GL06',
663 section=section,
664 allowed_sections=', '.join(ALLOWED_SECTIONS)))
665
666 correct_order = [section for section in ALLOWED_SECTIONS
667 if section in doc.section_titles]
668 if correct_order != doc.section_titles:
669 errs.append(error('GL07',
670 correct_sections=', '.join(correct_order)))
671
672 if (doc.deprecated
673 and not doc.extended_summary.startswith('.. deprecated:: ')):
674 errs.append(error('GL09'))
675
676 if not doc.summary:
677 errs.append(error('SS01'))
678 else:
679 if not doc.summary[0].isupper():
680 errs.append(error('SS02'))
681 if doc.summary[-1] != '.':
682 errs.append(error('SS03'))
683 if doc.summary != doc.summary.lstrip():
684 errs.append(error('SS04'))
685 elif (doc.is_function_or_method
686 and doc.summary.split(' ')[0][-1] == 's'):
687 errs.append(error('SS05'))
688 if doc.num_summary_lines > 1:
689 errs.append(error('SS06'))
690
691 if not doc.extended_summary:
692 wrns.append(('ES01', 'No extended summary found'))
693
694 # PR01: Parameters not documented
695 # PR02: Unknown parameters
696 # PR03: Wrong parameters order
697 errs += doc.parameter_mismatches
698
699 for param in doc.doc_parameters:
700 if not param.startswith("*"): # Check can ignore var / kwargs
701 if not doc.parameter_type(param):
702 if ':' in param:
703 errs.append(error('PR10',
704 param_name=param.split(':')[0]))
705 else:
706 errs.append(error('PR04', param_name=param))
707 else:
708 if doc.parameter_type(param)[-1] == '.':
709 errs.append(error('PR05', param_name=param))
710 common_type_errors = [('integer', 'int'),
711 ('boolean', 'bool'),
712 ('string', 'str')]
713 for wrong_type, right_type in common_type_errors:
714 if wrong_type in doc.parameter_type(param):
715 errs.append(error('PR06',
716 param_name=param,
717 right_type=right_type,
718 wrong_type=wrong_type))
719 if not doc.parameter_desc(param):
720 errs.append(error('PR07', param_name=param))
721 else:
722 if not doc.parameter_desc(param)[0].isupper():
723 errs.append(error('PR08', param_name=param))
724 if doc.parameter_desc(param)[-1] != '.':
725 errs.append(error('PR09', param_name=param))
726
727 if doc.is_function_or_method:
728 if not doc.returns:
729 if doc.method_returns_something:
730 errs.append(error('RT01'))
731 else:
732 if len(doc.returns) == 1 and doc.returns[0].name:
733 errs.append(error('RT02'))
734 for name_or_type, type_, desc in doc.returns:
735 if not desc:
736 errs.append(error('RT03'))
737 else:
738 desc = ' '.join(desc)
739 if not desc[0].isupper():
740 errs.append(error('RT04'))
741 if not desc.endswith('.'):
742 errs.append(error('RT05'))
743
744 if not doc.yields and 'yield' in doc.method_source:
745 errs.append(error('YD01'))
746
747 if not doc.see_also:
748 wrns.append(error('SA01'))
749 else:
750 for rel_name, rel_desc in doc.see_also.items():
751 if rel_desc:
752 if not rel_desc.endswith('.'):
753 errs.append(error('SA02', reference_name=rel_name))
754 if not rel_desc[0].isupper():
755 errs.append(error('SA03', reference_name=rel_name))
756 else:
757 errs.append(error('SA04', reference_name=rel_name))
758 if rel_name.startswith('pandas.'):
759 errs.append(error('SA05',
760 reference_name=rel_name,
761 right_reference=rel_name[len('pandas.'):]))
762
763 examples_errs = ''
764 if not doc.examples:
765 wrns.append(error('EX01'))
766 else:
767 examples_errs = doc.examples_errors
768 if examples_errs:
769 errs.append(error('EX02', doctest_log=examples_errs))
770 for err in doc.validate_pep8():
771 errs.append(error('EX03',
772 error_code=err.error_code,
773 error_message=err.message,
774 times_happening=' ({} times)'.format(err.count)
775 if err.count > 1 else ''))
776 examples_source_code = ''.join(doc.examples_source_code)
777 for wrong_import in ('numpy', 'pandas'):
778 if 'import {}'.format(wrong_import) in examples_source_code:
779 errs.append(error('EX04', imported_library=wrong_import))
780 return errs, wrns, examples_errs
781
782
783 def validate_one(func_name):
784 """
785 Validate the docstring for the given func_name
786
787 Parameters
788 ----------
789 func_name : function
790 Function whose docstring will be evaluated (e.g. pandas.read_csv).
791
792 Returns
793 -------
794 dict
795 A dictionary containing all the information obtained from validating
796 the docstring.
797 """
798 doc = Docstring(func_name)
799 errs, wrns, examples_errs = get_validation_data(doc)
800 return {'type': doc.type,
801 'docstring': doc.clean_doc,
802 'deprecated': doc.deprecated,
803 'file': doc.source_file_name,
804 'file_line': doc.source_file_def_line,
805 'github_link': doc.github_url,
806 'errors': errs,
807 'warnings': wrns,
808 'examples_errors': examples_errs}
809
810
811 def validate_all(prefix, ignore_deprecated=False):
812 """
813 Execute the validation of all docstrings, and return a dict with the
814 results.
815
816 Parameters
817 ----------
818 prefix : str or None
819 If provided, only the docstrings that start with this pattern will be
820 validated. If None, all docstrings will be validated.
821 ignore_deprecated: bool, default False
822 If True, deprecated objects are ignored when validating docstrings.
823
824 Returns
825 -------
826 dict
827 A dictionary with an item for every function/method... containing
828 all the validation information.
829 """
830 result = {}
831 seen = {}
832
833 # functions from the API docs
834 api_doc_fnames = os.path.join(
835 BASE_PATH, 'doc', 'source', 'reference', '*.rst')
836 api_items = []
837 for api_doc_fname in glob.glob(api_doc_fnames):
838 with open(api_doc_fname) as f:
839 api_items += list(get_api_items(f))
840 for func_name, func_obj, section, subsection in api_items:
841 if prefix and not func_name.startswith(prefix):
842 continue
843 doc_info = validate_one(func_name)
844 if ignore_deprecated and doc_info['deprecated']:
845 continue
846 result[func_name] = doc_info
847
848 shared_code_key = doc_info['file'], doc_info['file_line']
849 shared_code = seen.get(shared_code_key, '')
850 result[func_name].update({'in_api': True,
851 'section': section,
852 'subsection': subsection,
853 'shared_code_with': shared_code})
854
855 seen[shared_code_key] = func_name
856
857 # functions from introspecting Series and DataFrame
858 api_item_names = set(list(zip(*api_items))[0])
859 for class_ in (pandas.Series, pandas.DataFrame):
860 for member in inspect.getmembers(class_):
861 func_name = 'pandas.{}.{}'.format(class_.__name__, member[0])
862 if (not member[0].startswith('_')
863 and func_name not in api_item_names):
864 if prefix and not func_name.startswith(prefix):
865 continue
866 doc_info = validate_one(func_name)
867 if ignore_deprecated and doc_info['deprecated']:
868 continue
869 result[func_name] = doc_info
870 result[func_name]['in_api'] = False
871
872 return result
873
874
875 def main(func_name, prefix, errors, output_format, ignore_deprecated):
876 def header(title, width=80, char='#'):
877 full_line = char * width
878 side_len = (width - len(title) - 2) // 2
879 adj = '' if len(title) % 2 == 0 else ' '
880 title_line = '{side} {title}{adj} {side}'.format(side=char * side_len,
881 title=title,
882 adj=adj)
883
884 return '\n{full_line}\n{title_line}\n{full_line}\n\n'.format(
885 full_line=full_line, title_line=title_line)
886
887 exit_status = 0
888 if func_name is None:
889 result = validate_all(prefix, ignore_deprecated)
890
891 if output_format == 'json':
892 output = json.dumps(result)
893 else:
894 if output_format == 'default':
895 output_format = '{text}\n'
896 elif output_format == 'azure':
897 output_format = ('##vso[task.logissue type=error;'
898 'sourcepath={path};'
899 'linenumber={row};'
900 'code={code};'
901 ']{text}\n')
902 else:
903 raise ValueError('Unknown output_format "{}"'.format(
904 output_format))
905
906 output = ''
907 for name, res in result.items():
908 for err_code, err_desc in res['errors']:
909 # The script would be faster if instead of filtering the
910 # errors after validating them, it didn't validate them
911 # initially. But that would complicate the code too much
912 if errors and err_code not in errors:
913 continue
914 exit_status += 1
915 output += output_format.format(
916 name=name,
917 path=res['file'],
918 row=res['file_line'],
919 code=err_code,
920 text='{}: {}'.format(name, err_desc))
921
922 sys.stdout.write(output)
923
924 else:
925 result = validate_one(func_name)
926 sys.stderr.write(header('Docstring ({})'.format(func_name)))
927 sys.stderr.write('{}\n'.format(result['docstring']))
928 sys.stderr.write(header('Validation'))
929 if result['errors']:
930 sys.stderr.write('{} Errors found:\n'.format(
931 len(result['errors'])))
932 for err_code, err_desc in result['errors']:
933 # Failing examples are printed at the end
934 if err_code == 'EX02':
935 sys.stderr.write('\tExamples do not pass tests\n')
936 continue
937 sys.stderr.write('\t{}\n'.format(err_desc))
938 if result['warnings']:
939 sys.stderr.write('{} Warnings found:\n'.format(
940 len(result['warnings'])))
941 for wrn_code, wrn_desc in result['warnings']:
942 sys.stderr.write('\t{}\n'.format(wrn_desc))
943
944 if not result['errors']:
945 sys.stderr.write('Docstring for "{}" correct. :)\n'.format(
946 func_name))
947
948 if result['examples_errors']:
949 sys.stderr.write(header('Doctests'))
950 sys.stderr.write(result['examples_errors'])
951
952 return exit_status
953
954
955 if __name__ == '__main__':
956 format_opts = 'default', 'json', 'azure'
957 func_help = ('function or method to validate (e.g. pandas.DataFrame.head) '
958 'if not provided, all docstrings are validated and returned '
959 'as JSON')
960 argparser = argparse.ArgumentParser(
961 description='validate pandas docstrings')
962 argparser.add_argument('function',
963 nargs='?',
964 default=None,
965 help=func_help)
966 argparser.add_argument('--format', default='default', choices=format_opts,
967 help='format of the output when validating '
968 'multiple docstrings (ignored when validating one).'
969 'It can be {}'.format(str(format_opts)[1:-1]))
970 argparser.add_argument('--prefix', default=None, help='pattern for the '
971 'docstring names, in order to decide which ones '
972 'will be validated. A prefix "pandas.Series.str.'
973 'will make the script validate all the docstrings'
974 'of methods starting by this pattern. It is '
975 'ignored if parameter function is provided')
976 argparser.add_argument('--errors', default=None, help='comma separated '
977 'list of error codes to validate. By default it '
978 'validates all errors (ignored when validating '
979 'a single docstring)')
980 argparser.add_argument('--ignore_deprecated', default=False,
981 action='store_true', help='if this flag is set, '
982 'deprecated objects are ignored when validating '
983 'all docstrings')
984
985 args = argparser.parse_args()
986 sys.exit(main(args.function, args.prefix,
987 args.errors.split(',') if args.errors else None,
988 args.format,
989 args.ignore_deprecated))
990
[end of scripts/validate_docstrings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
cf25c5cd3ab1f647f780167df622c74b737fa8f5
|
CI - Azure: print skipped tests
We have a `ci/print_skipped.py` script to print the skipped tests, which is currently used in Travis (see the end of the `.travis.yml` file). It would be good to also do this in Azure as an extra step.
|
2019-06-07T01:44:49Z
|
<patch>
diff --git a/.travis.yml b/.travis.yml
--- a/.travis.yml
+++ b/.travis.yml
@@ -103,10 +103,5 @@ script:
after_script:
- echo "after_script start"
- source activate pandas-dev && pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd
- - if [ -e test-data-single.xml ]; then
- ci/print_skipped.py test-data-single.xml;
- fi
- - if [ -e test-data-multiple.xml ]; then
- ci/print_skipped.py test-data-multiple.xml;
- fi
+ - ci/print_skipped.py
- echo "after_script done"
diff --git a/ci/azure/posix.yml b/ci/azure/posix.yml
--- a/ci/azure/posix.yml
+++ b/ci/azure/posix.yml
@@ -89,4 +89,9 @@ jobs:
# note that this will produce $LASTEXITCODE=1
Write-Error "$($matches[1]) tests failed"
}
- displayName: Check for test failures
+ displayName: 'Check for test failures'
+ - script: |
+ export PATH=$HOME/miniconda3/bin:$PATH
+ source activate pandas-dev
+ python ci/print_skipped.py
+ displayName: 'Print skipped tests'
diff --git a/ci/azure/windows.yml b/ci/azure/windows.yml
--- a/ci/azure/windows.yml
+++ b/ci/azure/windows.yml
@@ -18,11 +18,11 @@ jobs:
steps:
- powershell: Write-Host "##vso[task.prependpath]$env:CONDA\Scripts"
- displayName: Add conda to PATH
+ displayName: 'Add conda to PATH'
- script: conda update -q -n base conda
displayName: Update conda
- script: conda env create -q --file ci\\deps\\azure-windows-$(CONDA_PY).yaml
- displayName: Create anaconda environment
+ displayName: 'Create anaconda environment'
- script: |
call activate pandas-dev
call conda list
@@ -48,4 +48,9 @@ jobs:
# note that this will produce $LASTEXITCODE=1
Write-Error "$($matches[1]) tests failed"
}
- displayName: Check for test failures
+ displayName: 'Check for test failures'
+ - script: |
+ export PATH=$HOME/miniconda3/bin:$PATH
+ source activate pandas-dev
+ python ci/print_skipped.py
+ displayName: 'Print skipped tests'
diff --git a/ci/print_skipped.py b/ci/print_skipped.py
--- a/ci/print_skipped.py
+++ b/ci/print_skipped.py
@@ -1,5 +1,6 @@
#!/usr/bin/env python
+import os
import sys
import math
import xml.etree.ElementTree as et
@@ -36,19 +37,19 @@ def parse_results(filename):
return '\n'.join(skipped)
-def main(args):
+def main():
+ test_files = [
+ 'test-data-single.xml',
+ 'test-data-multiple.xml',
+ 'test-data.xml',
+ ]
+
print('SKIPPED TESTS:')
- for fn in args.filename:
- print(parse_results(fn))
+ for fn in test_files:
+ if os.path.isfile(fn):
+ print(parse_results(fn))
return 0
-def parse_args():
- import argparse
- parser = argparse.ArgumentParser()
- parser.add_argument('filename', nargs='+', help='XUnit file to parse')
- return parser.parse_args()
-
-
if __name__ == '__main__':
- sys.exit(main(parse_args()))
+ sys.exit(main())
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-21514
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TST: split up pandas/tests/indexes/test_multi.py
getting pretty long, could create a sub-dir and split out into separate files.
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
30 </a>
31 </tr>
32 <tr>
33 <td>License</td>
34 <td>
35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
37 </a>
38 </td>
39 </tr>
40 <tr>
41 <td>Build Status</td>
42 <td>
43 <a href="https://travis-ci.org/pandas-dev/pandas">
44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
45 </a>
46 </td>
47 </tr>
48 <tr>
49 <td></td>
50 <td>
51 <a href="https://circleci.com/gh/pandas-dev/pandas">
52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" />
53 </a>
54 </td>
55 </tr>
56 <tr>
57 <td></td>
58 <td>
59 <a href="https://ci.appveyor.com/project/pandas-dev/pandas">
60 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" />
61 </a>
62 </td>
63 </tr>
64 <tr>
65 <td>Coverage</td>
66 <td>
67 <a href="https://codecov.io/gh/pandas-dev/pandas">
68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
69 </a>
70 </td>
71 </tr>
72 <tr>
73 <td>Downloads</td>
74 <td>
75 <a href="https://pandas.pydata.org">
76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
77 </a>
78 </td>
79 </tr>
80 <tr>
81 <td>Gitter</td>
82 <td>
83 <a href="https://gitter.im/pydata/pandas">
84 <img src="https://badges.gitter.im/Join%20Chat.svg"
85 </a>
86 </td>
87 </tr>
88 </table>
89
90
91
92 ## What is it
93
94 **pandas** is a Python package providing fast, flexible, and expressive data
95 structures designed to make working with "relational" or "labeled" data both
96 easy and intuitive. It aims to be the fundamental high-level building block for
97 doing practical, **real world** data analysis in Python. Additionally, it has
98 the broader goal of becoming **the most powerful and flexible open source data
99 analysis / manipulation tool available in any language**. It is already well on
100 its way toward this goal.
101
102 ## Main Features
103 Here are just a few of the things that pandas does well:
104
105 - Easy handling of [**missing data**][missing-data] (represented as
106 `NaN`) in floating point as well as non-floating point data
107 - Size mutability: columns can be [**inserted and
108 deleted**][insertion-deletion] from DataFrame and higher dimensional
109 objects
110 - Automatic and explicit [**data alignment**][alignment]: objects can
111 be explicitly aligned to a set of labels, or the user can simply
112 ignore the labels and let `Series`, `DataFrame`, etc. automatically
113 align the data for you in computations
114 - Powerful, flexible [**group by**][groupby] functionality to perform
115 split-apply-combine operations on data sets, for both aggregating
116 and transforming data
117 - Make it [**easy to convert**][conversion] ragged,
118 differently-indexed data in other Python and NumPy data structures
119 into DataFrame objects
120 - Intelligent label-based [**slicing**][slicing], [**fancy
121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
122 large data sets
123 - Intuitive [**merging**][merging] and [**joining**][joining] data
124 sets
125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
126 data sets
127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
128 labels per tick)
129 - Robust IO tools for loading data from [**flat files**][flat-files]
130 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
132 - [**Time series**][timeseries]-specific functionality: date range
133 generation and frequency conversion, moving window statistics,
134 moving window linear regressions, date shifting and lagging, etc.
135
136
137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
155
156 ## Where to get it
157 The source code is currently hosted on GitHub at:
158 https://github.com/pandas-dev/pandas
159
160 Binary installers for the latest released version are available at the [Python
161 package index](https://pypi.org/project/pandas) and on conda.
162
163 ```sh
164 # conda
165 conda install pandas
166 ```
167
168 ```sh
169 # or PyPI
170 pip install pandas
171 ```
172
173 ## Dependencies
174 - [NumPy](https://www.numpy.org): 1.9.0 or higher
175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
177
178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
179 for recommended and optional dependencies.
180
181 ## Installation from sources
182 To install pandas from source you need Cython in addition to the normal
183 dependencies above. Cython can be installed from pypi:
184
185 ```sh
186 pip install cython
187 ```
188
189 In the `pandas` directory (same one where you found this file after
190 cloning the git repo), execute:
191
192 ```sh
193 python setup.py install
194 ```
195
196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
197
198 ```sh
199 python setup.py develop
200 ```
201
202 Alternatively, you can use `pip` if you want all the dependencies pulled
203 in automatically (the `-e` option is for installing it in [development
204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
205
206 ```sh
207 pip install -e .
208 ```
209
210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
211
212 ## License
213 [BSD 3](LICENSE)
214
215 ## Documentation
216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
217
218 ## Background
219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
220 has been under active development since then.
221
222 ## Getting Help
223
224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
226
227 ## Discussion and Development
228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
229
230 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
231
232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
233
234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
235
236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
237
238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
239
240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
241
242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
243
[end of README.md]
[start of pandas/io/json/json.py]
1 # pylint: disable-msg=E1101,W0613,W0603
2 from itertools import islice
3 import os
4 import numpy as np
5
6 import pandas._libs.json as json
7 from pandas._libs.tslib import iNaT
8 from pandas.compat import StringIO, long, u, to_str
9 from pandas import compat, isna
10 from pandas import Series, DataFrame, to_datetime, MultiIndex
11 from pandas.io.common import (get_filepath_or_buffer, _get_handle,
12 _infer_compression, _stringify_path,
13 BaseIterator)
14 from pandas.io.parsers import _validate_integer
15 import pandas.core.common as com
16 from pandas.core.reshape.concat import concat
17 from pandas.io.formats.printing import pprint_thing
18 from .normalize import _convert_to_line_delimits
19 from .table_schema import build_table_schema, parse_table_schema
20 from pandas.core.dtypes.common import is_period_dtype
21
22 loads = json.loads
23 dumps = json.dumps
24
25 TABLE_SCHEMA_VERSION = '0.20.0'
26
27
28 # interface to/from
29 def to_json(path_or_buf, obj, orient=None, date_format='epoch',
30 double_precision=10, force_ascii=True, date_unit='ms',
31 default_handler=None, lines=False, compression=None,
32 index=True):
33
34 if not index and orient not in ['split', 'table']:
35 raise ValueError("'index=False' is only valid when 'orient' is "
36 "'split' or 'table'")
37
38 path_or_buf = _stringify_path(path_or_buf)
39 if lines and orient != 'records':
40 raise ValueError(
41 "'lines' keyword only valid when 'orient' is records")
42
43 if orient == 'table' and isinstance(obj, Series):
44 obj = obj.to_frame(name=obj.name or 'values')
45 if orient == 'table' and isinstance(obj, DataFrame):
46 writer = JSONTableWriter
47 elif isinstance(obj, Series):
48 writer = SeriesWriter
49 elif isinstance(obj, DataFrame):
50 writer = FrameWriter
51 else:
52 raise NotImplementedError("'obj' should be a Series or a DataFrame")
53
54 s = writer(
55 obj, orient=orient, date_format=date_format,
56 double_precision=double_precision, ensure_ascii=force_ascii,
57 date_unit=date_unit, default_handler=default_handler,
58 index=index).write()
59
60 if lines:
61 s = _convert_to_line_delimits(s)
62
63 if isinstance(path_or_buf, compat.string_types):
64 fh, handles = _get_handle(path_or_buf, 'w', compression=compression)
65 try:
66 fh.write(s)
67 finally:
68 fh.close()
69 elif path_or_buf is None:
70 return s
71 else:
72 path_or_buf.write(s)
73
74
75 class Writer(object):
76
77 def __init__(self, obj, orient, date_format, double_precision,
78 ensure_ascii, date_unit, index, default_handler=None):
79 self.obj = obj
80
81 if orient is None:
82 orient = self._default_orient
83
84 self.orient = orient
85 self.date_format = date_format
86 self.double_precision = double_precision
87 self.ensure_ascii = ensure_ascii
88 self.date_unit = date_unit
89 self.default_handler = default_handler
90 self.index = index
91
92 self.is_copy = None
93 self._format_axes()
94
95 def _format_axes(self):
96 raise com.AbstractMethodError(self)
97
98 def write(self):
99 return self._write(self.obj, self.orient, self.double_precision,
100 self.ensure_ascii, self.date_unit,
101 self.date_format == 'iso', self.default_handler)
102
103 def _write(self, obj, orient, double_precision, ensure_ascii,
104 date_unit, iso_dates, default_handler):
105 return dumps(
106 obj,
107 orient=orient,
108 double_precision=double_precision,
109 ensure_ascii=ensure_ascii,
110 date_unit=date_unit,
111 iso_dates=iso_dates,
112 default_handler=default_handler
113 )
114
115
116 class SeriesWriter(Writer):
117 _default_orient = 'index'
118
119 def _format_axes(self):
120 if not self.obj.index.is_unique and self.orient == 'index':
121 raise ValueError("Series index must be unique for orient="
122 "'{orient}'".format(orient=self.orient))
123
124 def _write(self, obj, orient, double_precision, ensure_ascii,
125 date_unit, iso_dates, default_handler):
126 if not self.index and orient == 'split':
127 obj = {"name": obj.name, "data": obj.values}
128 return super(SeriesWriter, self)._write(obj, orient,
129 double_precision,
130 ensure_ascii, date_unit,
131 iso_dates, default_handler)
132
133
134 class FrameWriter(Writer):
135 _default_orient = 'columns'
136
137 def _format_axes(self):
138 """ try to axes if they are datelike """
139 if not self.obj.index.is_unique and self.orient in (
140 'index', 'columns'):
141 raise ValueError("DataFrame index must be unique for orient="
142 "'{orient}'.".format(orient=self.orient))
143 if not self.obj.columns.is_unique and self.orient in (
144 'index', 'columns', 'records'):
145 raise ValueError("DataFrame columns must be unique for orient="
146 "'{orient}'.".format(orient=self.orient))
147
148 def _write(self, obj, orient, double_precision, ensure_ascii,
149 date_unit, iso_dates, default_handler):
150 if not self.index and orient == 'split':
151 obj = obj.to_dict(orient='split')
152 del obj["index"]
153 return super(FrameWriter, self)._write(obj, orient,
154 double_precision,
155 ensure_ascii, date_unit,
156 iso_dates, default_handler)
157
158
159 class JSONTableWriter(FrameWriter):
160 _default_orient = 'records'
161
162 def __init__(self, obj, orient, date_format, double_precision,
163 ensure_ascii, date_unit, index, default_handler=None):
164 """
165 Adds a `schema` attribute with the Table Schema, resets
166 the index (can't do in caller, because the schema inference needs
167 to know what the index is), forces orient to records, and forces
168 date_format to 'iso'.
169 """
170 super(JSONTableWriter, self).__init__(
171 obj, orient, date_format, double_precision, ensure_ascii,
172 date_unit, index, default_handler=default_handler)
173
174 if date_format != 'iso':
175 msg = ("Trying to write with `orient='table'` and "
176 "`date_format='{fmt}'`. Table Schema requires dates "
177 "to be formatted with `date_format='iso'`"
178 .format(fmt=date_format))
179 raise ValueError(msg)
180
181 self.schema = build_table_schema(obj, index=self.index)
182
183 # NotImplemented on a column MultiIndex
184 if obj.ndim == 2 and isinstance(obj.columns, MultiIndex):
185 raise NotImplementedError(
186 "orient='table' is not supported for MultiIndex")
187
188 # TODO: Do this timedelta properly in objToJSON.c See GH #15137
189 if ((obj.ndim == 1) and (obj.name in set(obj.index.names)) or
190 len(obj.columns & obj.index.names)):
191 msg = "Overlapping names between the index and columns"
192 raise ValueError(msg)
193
194 obj = obj.copy()
195 timedeltas = obj.select_dtypes(include=['timedelta']).columns
196 if len(timedeltas):
197 obj[timedeltas] = obj[timedeltas].applymap(
198 lambda x: x.isoformat())
199 # Convert PeriodIndex to datetimes before serializing
200 if is_period_dtype(obj.index):
201 obj.index = obj.index.to_timestamp()
202
203 # exclude index from obj if index=False
204 if not self.index:
205 self.obj = obj.reset_index(drop=True)
206 else:
207 self.obj = obj.reset_index(drop=False)
208 self.date_format = 'iso'
209 self.orient = 'records'
210 self.index = index
211
212 def _write(self, obj, orient, double_precision, ensure_ascii,
213 date_unit, iso_dates, default_handler):
214 data = super(JSONTableWriter, self)._write(obj, orient,
215 double_precision,
216 ensure_ascii, date_unit,
217 iso_dates,
218 default_handler)
219 serialized = '{{"schema": {schema}, "data": {data}}}'.format(
220 schema=dumps(self.schema), data=data)
221 return serialized
222
223
224 def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
225 convert_axes=True, convert_dates=True, keep_default_dates=True,
226 numpy=False, precise_float=False, date_unit=None, encoding=None,
227 lines=False, chunksize=None, compression='infer'):
228 """
229 Convert a JSON string to pandas object
230
231 Parameters
232 ----------
233 path_or_buf : a valid JSON string or file-like, default: None
234 The string could be a URL. Valid URL schemes include http, ftp, s3,
235 gcs, and file. For file URLs, a host is expected. For instance, a local
236 file could be ``file://localhost/path/to/table.json``
237
238 orient : string,
239 Indication of expected JSON string format.
240 Compatible JSON strings can be produced by ``to_json()`` with a
241 corresponding orient value.
242 The set of possible orients is:
243
244 - ``'split'`` : dict like
245 ``{index -> [index], columns -> [columns], data -> [values]}``
246 - ``'records'`` : list like
247 ``[{column -> value}, ... , {column -> value}]``
248 - ``'index'`` : dict like ``{index -> {column -> value}}``
249 - ``'columns'`` : dict like ``{column -> {index -> value}}``
250 - ``'values'`` : just the values array
251
252 The allowed and default values depend on the value
253 of the `typ` parameter.
254
255 * when ``typ == 'series'``,
256
257 - allowed orients are ``{'split','records','index'}``
258 - default is ``'index'``
259 - The Series index must be unique for orient ``'index'``.
260
261 * when ``typ == 'frame'``,
262
263 - allowed orients are ``{'split','records','index',
264 'columns','values', 'table'}``
265 - default is ``'columns'``
266 - The DataFrame index must be unique for orients ``'index'`` and
267 ``'columns'``.
268 - The DataFrame columns must be unique for orients ``'index'``,
269 ``'columns'``, and ``'records'``.
270
271 .. versionadded:: 0.23.0
272 'table' as an allowed value for the ``orient`` argument
273
274 typ : type of object to recover (series or frame), default 'frame'
275 dtype : boolean or dict, default True
276 If True, infer dtypes, if a dict of column to dtype, then use those,
277 if False, then don't infer dtypes at all, applies only to the data.
278 convert_axes : boolean, default True
279 Try to convert the axes to the proper dtypes.
280 convert_dates : boolean, default True
281 List of columns to parse for dates; If True, then try to parse
282 datelike columns default is True; a column label is datelike if
283
284 * it ends with ``'_at'``,
285
286 * it ends with ``'_time'``,
287
288 * it begins with ``'timestamp'``,
289
290 * it is ``'modified'``, or
291
292 * it is ``'date'``
293
294 keep_default_dates : boolean, default True
295 If parsing dates, then parse the default datelike columns
296 numpy : boolean, default False
297 Direct decoding to numpy arrays. Supports numeric data only, but
298 non-numeric column and index labels are supported. Note also that the
299 JSON ordering MUST be the same for each term if numpy=True.
300 precise_float : boolean, default False
301 Set to enable usage of higher precision (strtod) function when
302 decoding string to double values. Default (False) is to use fast but
303 less precise builtin functionality
304 date_unit : string, default None
305 The timestamp unit to detect if converting dates. The default behaviour
306 is to try and detect the correct precision, but if this is not desired
307 then pass one of 's', 'ms', 'us' or 'ns' to force parsing only seconds,
308 milliseconds, microseconds or nanoseconds respectively.
309 lines : boolean, default False
310 Read the file as a json object per line.
311
312 .. versionadded:: 0.19.0
313
314 encoding : str, default is 'utf-8'
315 The encoding to use to decode py3 bytes.
316
317 .. versionadded:: 0.19.0
318
319 chunksize: integer, default None
320 Return JsonReader object for iteration.
321 See the `line-delimted json docs
322 <http://pandas.pydata.org/pandas-docs/stable/io.html#io-jsonl>`_
323 for more information on ``chunksize``.
324 This can only be passed if `lines=True`.
325 If this is None, the file will be read into memory all at once.
326
327 .. versionadded:: 0.21.0
328
329 compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
330 For on-the-fly decompression of on-disk data. If 'infer', then use
331 gzip, bz2, zip or xz if path_or_buf is a string ending in
332 '.gz', '.bz2', '.zip', or 'xz', respectively, and no decompression
333 otherwise. If using 'zip', the ZIP file must contain only one data
334 file to be read in. Set to None for no decompression.
335
336 .. versionadded:: 0.21.0
337
338 Returns
339 -------
340 result : Series or DataFrame, depending on the value of `typ`.
341
342 Notes
343 -----
344 Specific to ``orient='table'``, if a :class:`DataFrame` with a literal
345 :class:`Index` name of `index` gets written with :func:`to_json`, the
346 subsequent read operation will incorrectly set the :class:`Index` name to
347 ``None``. This is because `index` is also used by :func:`DataFrame.to_json`
348 to denote a missing :class:`Index` name, and the subsequent
349 :func:`read_json` operation cannot distinguish between the two. The same
350 limitation is encountered with a :class:`MultiIndex` and any names
351 beginning with ``'level_'``.
352
353 See Also
354 --------
355 DataFrame.to_json
356
357 Examples
358 --------
359
360 >>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
361 ... index=['row 1', 'row 2'],
362 ... columns=['col 1', 'col 2'])
363
364 Encoding/decoding a Dataframe using ``'split'`` formatted JSON:
365
366 >>> df.to_json(orient='split')
367 '{"columns":["col 1","col 2"],
368 "index":["row 1","row 2"],
369 "data":[["a","b"],["c","d"]]}'
370 >>> pd.read_json(_, orient='split')
371 col 1 col 2
372 row 1 a b
373 row 2 c d
374
375 Encoding/decoding a Dataframe using ``'index'`` formatted JSON:
376
377 >>> df.to_json(orient='index')
378 '{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
379 >>> pd.read_json(_, orient='index')
380 col 1 col 2
381 row 1 a b
382 row 2 c d
383
384 Encoding/decoding a Dataframe using ``'records'`` formatted JSON.
385 Note that index labels are not preserved with this encoding.
386
387 >>> df.to_json(orient='records')
388 '[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
389 >>> pd.read_json(_, orient='records')
390 col 1 col 2
391 0 a b
392 1 c d
393
394 Encoding with Table Schema
395
396 >>> df.to_json(orient='table')
397 '{"schema": {"fields": [{"name": "index", "type": "string"},
398 {"name": "col 1", "type": "string"},
399 {"name": "col 2", "type": "string"}],
400 "primaryKey": "index",
401 "pandas_version": "0.20.0"},
402 "data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
403 {"index": "row 2", "col 1": "c", "col 2": "d"}]}'
404 """
405
406 compression = _infer_compression(path_or_buf, compression)
407 filepath_or_buffer, _, compression, should_close = get_filepath_or_buffer(
408 path_or_buf, encoding=encoding, compression=compression,
409 )
410
411 json_reader = JsonReader(
412 filepath_or_buffer, orient=orient, typ=typ, dtype=dtype,
413 convert_axes=convert_axes, convert_dates=convert_dates,
414 keep_default_dates=keep_default_dates, numpy=numpy,
415 precise_float=precise_float, date_unit=date_unit, encoding=encoding,
416 lines=lines, chunksize=chunksize, compression=compression,
417 )
418
419 if chunksize:
420 return json_reader
421
422 result = json_reader.read()
423 if should_close:
424 try:
425 filepath_or_buffer.close()
426 except: # noqa: flake8
427 pass
428 return result
429
430
431 class JsonReader(BaseIterator):
432 """
433 JsonReader provides an interface for reading in a JSON file.
434
435 If initialized with ``lines=True`` and ``chunksize``, can be iterated over
436 ``chunksize`` lines at a time. Otherwise, calling ``read`` reads in the
437 whole document.
438 """
439 def __init__(self, filepath_or_buffer, orient, typ, dtype, convert_axes,
440 convert_dates, keep_default_dates, numpy, precise_float,
441 date_unit, encoding, lines, chunksize, compression):
442
443 self.path_or_buf = filepath_or_buffer
444 self.orient = orient
445 self.typ = typ
446 self.dtype = dtype
447 self.convert_axes = convert_axes
448 self.convert_dates = convert_dates
449 self.keep_default_dates = keep_default_dates
450 self.numpy = numpy
451 self.precise_float = precise_float
452 self.date_unit = date_unit
453 self.encoding = encoding
454 self.compression = compression
455 self.lines = lines
456 self.chunksize = chunksize
457 self.nrows_seen = 0
458 self.should_close = False
459
460 if self.chunksize is not None:
461 self.chunksize = _validate_integer("chunksize", self.chunksize, 1)
462 if not self.lines:
463 raise ValueError("chunksize can only be passed if lines=True")
464
465 data = self._get_data_from_filepath(filepath_or_buffer)
466 self.data = self._preprocess_data(data)
467
468 def _preprocess_data(self, data):
469 """
470 At this point, the data either has a `read` attribute (e.g. a file
471 object or a StringIO) or is a string that is a JSON document.
472
473 If self.chunksize, we prepare the data for the `__next__` method.
474 Otherwise, we read it into memory for the `read` method.
475 """
476 if hasattr(data, 'read') and not self.chunksize:
477 data = data.read()
478 if not hasattr(data, 'read') and self.chunksize:
479 data = StringIO(data)
480
481 return data
482
483 def _get_data_from_filepath(self, filepath_or_buffer):
484 """
485 read_json accepts three input types:
486 1. filepath (string-like)
487 2. file-like object (e.g. open file object, StringIO)
488 3. JSON string
489
490 This method turns (1) into (2) to simplify the rest of the processing.
491 It returns input types (2) and (3) unchanged.
492 """
493
494 data = filepath_or_buffer
495
496 exists = False
497 if isinstance(data, compat.string_types):
498 try:
499 exists = os.path.exists(filepath_or_buffer)
500 # gh-5874: if the filepath is too long will raise here
501 except (TypeError, ValueError):
502 pass
503
504 if exists or self.compression is not None:
505 data, _ = _get_handle(filepath_or_buffer, 'r',
506 encoding=self.encoding,
507 compression=self.compression)
508 self.should_close = True
509 self.open_stream = data
510
511 return data
512
513 def _combine_lines(self, lines):
514 """Combines a list of JSON objects into one JSON object"""
515 lines = filter(None, map(lambda x: x.strip(), lines))
516 return '[' + ','.join(lines) + ']'
517
518 def read(self):
519 """Read the whole JSON input into a pandas object"""
520 if self.lines and self.chunksize:
521 obj = concat(self)
522 elif self.lines:
523
524 data = to_str(self.data)
525 obj = self._get_object_parser(
526 self._combine_lines(data.split('\n'))
527 )
528 else:
529 obj = self._get_object_parser(self.data)
530 self.close()
531 return obj
532
533 def _get_object_parser(self, json):
534 """parses a json document into a pandas object"""
535 typ = self.typ
536 dtype = self.dtype
537 kwargs = {
538 "orient": self.orient, "dtype": self.dtype,
539 "convert_axes": self.convert_axes,
540 "convert_dates": self.convert_dates,
541 "keep_default_dates": self.keep_default_dates, "numpy": self.numpy,
542 "precise_float": self.precise_float, "date_unit": self.date_unit
543 }
544 obj = None
545 if typ == 'frame':
546 obj = FrameParser(json, **kwargs).parse()
547
548 if typ == 'series' or obj is None:
549 if not isinstance(dtype, bool):
550 dtype = dict(data=dtype)
551 obj = SeriesParser(json, **kwargs).parse()
552
553 return obj
554
555 def close(self):
556 """
557 If we opened a stream earlier, in _get_data_from_filepath, we should
558 close it. If an open stream or file was passed, we leave it open.
559 """
560 if self.should_close:
561 try:
562 self.open_stream.close()
563 except (IOError, AttributeError):
564 pass
565
566 def __next__(self):
567 lines = list(islice(self.data, self.chunksize))
568 if lines:
569 lines_json = self._combine_lines(lines)
570 obj = self._get_object_parser(lines_json)
571
572 # Make sure that the returned objects have the right index.
573 obj.index = range(self.nrows_seen, self.nrows_seen + len(obj))
574 self.nrows_seen += len(obj)
575
576 return obj
577
578 self.close()
579 raise StopIteration
580
581
582 class Parser(object):
583
584 _STAMP_UNITS = ('s', 'ms', 'us', 'ns')
585 _MIN_STAMPS = {
586 's': long(31536000),
587 'ms': long(31536000000),
588 'us': long(31536000000000),
589 'ns': long(31536000000000000)}
590
591 def __init__(self, json, orient, dtype=True, convert_axes=True,
592 convert_dates=True, keep_default_dates=False, numpy=False,
593 precise_float=False, date_unit=None):
594 self.json = json
595
596 if orient is None:
597 orient = self._default_orient
598
599 self.orient = orient
600 self.dtype = dtype
601
602 if orient == "split":
603 numpy = False
604
605 if date_unit is not None:
606 date_unit = date_unit.lower()
607 if date_unit not in self._STAMP_UNITS:
608 raise ValueError('date_unit must be one of {units}'
609 .format(units=self._STAMP_UNITS))
610 self.min_stamp = self._MIN_STAMPS[date_unit]
611 else:
612 self.min_stamp = self._MIN_STAMPS['s']
613
614 self.numpy = numpy
615 self.precise_float = precise_float
616 self.convert_axes = convert_axes
617 self.convert_dates = convert_dates
618 self.date_unit = date_unit
619 self.keep_default_dates = keep_default_dates
620 self.obj = None
621
622 def check_keys_split(self, decoded):
623 "checks that dict has only the appropriate keys for orient='split'"
624 bad_keys = set(decoded.keys()).difference(set(self._split_keys))
625 if bad_keys:
626 bad_keys = ", ".join(bad_keys)
627 raise ValueError(u("JSON data had unexpected key(s): {bad_keys}")
628 .format(bad_keys=pprint_thing(bad_keys)))
629
630 def parse(self):
631
632 # try numpy
633 numpy = self.numpy
634 if numpy:
635 self._parse_numpy()
636
637 else:
638 self._parse_no_numpy()
639
640 if self.obj is None:
641 return None
642 if self.convert_axes:
643 self._convert_axes()
644 self._try_convert_types()
645 return self.obj
646
647 def _convert_axes(self):
648 """ try to convert axes """
649 for axis in self.obj._AXIS_NUMBERS.keys():
650 new_axis, result = self._try_convert_data(
651 axis, self.obj._get_axis(axis), use_dtypes=False,
652 convert_dates=True)
653 if result:
654 setattr(self.obj, axis, new_axis)
655
656 def _try_convert_types(self):
657 raise com.AbstractMethodError(self)
658
659 def _try_convert_data(self, name, data, use_dtypes=True,
660 convert_dates=True):
661 """ try to parse a ndarray like into a column by inferring dtype """
662
663 # don't try to coerce, unless a force conversion
664 if use_dtypes:
665 if self.dtype is False:
666 return data, False
667 elif self.dtype is True:
668 pass
669
670 else:
671
672 # dtype to force
673 dtype = (self.dtype.get(name)
674 if isinstance(self.dtype, dict) else self.dtype)
675 if dtype is not None:
676 try:
677 dtype = np.dtype(dtype)
678 return data.astype(dtype), True
679 except (TypeError, ValueError):
680 return data, False
681
682 if convert_dates:
683 new_data, result = self._try_convert_to_date(data)
684 if result:
685 return new_data, True
686
687 result = False
688
689 if data.dtype == 'object':
690
691 # try float
692 try:
693 data = data.astype('float64')
694 result = True
695 except (TypeError, ValueError):
696 pass
697
698 if data.dtype.kind == 'f':
699
700 if data.dtype != 'float64':
701
702 # coerce floats to 64
703 try:
704 data = data.astype('float64')
705 result = True
706 except (TypeError, ValueError):
707 pass
708
709 # don't coerce 0-length data
710 if len(data) and (data.dtype == 'float' or data.dtype == 'object'):
711
712 # coerce ints if we can
713 try:
714 new_data = data.astype('int64')
715 if (new_data == data).all():
716 data = new_data
717 result = True
718 except (TypeError, ValueError):
719 pass
720
721 # coerce ints to 64
722 if data.dtype == 'int':
723
724 # coerce to int64
725 try:
726 data = data.astype('int64')
727 result = True
728 except (TypeError, ValueError):
729 pass
730
731 return data, result
732
733 def _try_convert_to_date(self, data):
734 """ try to parse a ndarray like into a date column
735 try to coerce object in epoch/iso formats and
736 integer/float in epoch formats, return a boolean if parsing
737 was successful """
738
739 # no conversion on empty
740 if not len(data):
741 return data, False
742
743 new_data = data
744 if new_data.dtype == 'object':
745 try:
746 new_data = data.astype('int64')
747 except (TypeError, ValueError, OverflowError):
748 pass
749
750 # ignore numbers that are out of range
751 if issubclass(new_data.dtype.type, np.number):
752 in_range = (isna(new_data.values) | (new_data > self.min_stamp) |
753 (new_data.values == iNaT))
754 if not in_range.all():
755 return data, False
756
757 date_units = (self.date_unit,) if self.date_unit else self._STAMP_UNITS
758 for date_unit in date_units:
759 try:
760 new_data = to_datetime(new_data, errors='raise',
761 unit=date_unit)
762 except ValueError:
763 continue
764 except Exception:
765 break
766 return new_data, True
767 return data, False
768
769 def _try_convert_dates(self):
770 raise com.AbstractMethodError(self)
771
772
773 class SeriesParser(Parser):
774 _default_orient = 'index'
775 _split_keys = ('name', 'index', 'data')
776
777 def _parse_no_numpy(self):
778
779 json = self.json
780 orient = self.orient
781 if orient == "split":
782 decoded = {str(k): v for k, v in compat.iteritems(
783 loads(json, precise_float=self.precise_float))}
784 self.check_keys_split(decoded)
785 self.obj = Series(dtype=None, **decoded)
786 else:
787 self.obj = Series(
788 loads(json, precise_float=self.precise_float), dtype=None)
789
790 def _parse_numpy(self):
791
792 json = self.json
793 orient = self.orient
794 if orient == "split":
795 decoded = loads(json, dtype=None, numpy=True,
796 precise_float=self.precise_float)
797 decoded = {str(k): v for k, v in compat.iteritems(decoded)}
798 self.check_keys_split(decoded)
799 self.obj = Series(**decoded)
800 elif orient == "columns" or orient == "index":
801 self.obj = Series(*loads(json, dtype=None, numpy=True,
802 labelled=True,
803 precise_float=self.precise_float))
804 else:
805 self.obj = Series(loads(json, dtype=None, numpy=True,
806 precise_float=self.precise_float))
807
808 def _try_convert_types(self):
809 if self.obj is None:
810 return
811 obj, result = self._try_convert_data(
812 'data', self.obj, convert_dates=self.convert_dates)
813 if result:
814 self.obj = obj
815
816
817 class FrameParser(Parser):
818 _default_orient = 'columns'
819 _split_keys = ('columns', 'index', 'data')
820
821 def _parse_numpy(self):
822
823 json = self.json
824 orient = self.orient
825
826 if orient == "columns":
827 args = loads(json, dtype=None, numpy=True, labelled=True,
828 precise_float=self.precise_float)
829 if len(args):
830 args = (args[0].T, args[2], args[1])
831 self.obj = DataFrame(*args)
832 elif orient == "split":
833 decoded = loads(json, dtype=None, numpy=True,
834 precise_float=self.precise_float)
835 decoded = {str(k): v for k, v in compat.iteritems(decoded)}
836 self.check_keys_split(decoded)
837 self.obj = DataFrame(**decoded)
838 elif orient == "values":
839 self.obj = DataFrame(loads(json, dtype=None, numpy=True,
840 precise_float=self.precise_float))
841 else:
842 self.obj = DataFrame(*loads(json, dtype=None, numpy=True,
843 labelled=True,
844 precise_float=self.precise_float))
845
846 def _parse_no_numpy(self):
847
848 json = self.json
849 orient = self.orient
850
851 if orient == "columns":
852 self.obj = DataFrame(
853 loads(json, precise_float=self.precise_float), dtype=None)
854 elif orient == "split":
855 decoded = {str(k): v for k, v in compat.iteritems(
856 loads(json, precise_float=self.precise_float))}
857 self.check_keys_split(decoded)
858 self.obj = DataFrame(dtype=None, **decoded)
859 elif orient == "index":
860 self.obj = DataFrame(
861 loads(json, precise_float=self.precise_float), dtype=None).T
862 elif orient == 'table':
863 self.obj = parse_table_schema(json,
864 precise_float=self.precise_float)
865 else:
866 self.obj = DataFrame(
867 loads(json, precise_float=self.precise_float), dtype=None)
868
869 def _process_converter(self, f, filt=None):
870 """ take a conversion function and possibly recreate the frame """
871
872 if filt is None:
873 filt = lambda col, c: True
874
875 needs_new_obj = False
876 new_obj = dict()
877 for i, (col, c) in enumerate(self.obj.iteritems()):
878 if filt(col, c):
879 new_data, result = f(col, c)
880 if result:
881 c = new_data
882 needs_new_obj = True
883 new_obj[i] = c
884
885 if needs_new_obj:
886
887 # possibly handle dup columns
888 new_obj = DataFrame(new_obj, index=self.obj.index)
889 new_obj.columns = self.obj.columns
890 self.obj = new_obj
891
892 def _try_convert_types(self):
893 if self.obj is None:
894 return
895 if self.convert_dates:
896 self._try_convert_dates()
897
898 self._process_converter(
899 lambda col, c: self._try_convert_data(col, c, convert_dates=False))
900
901 def _try_convert_dates(self):
902 if self.obj is None:
903 return
904
905 # our columns to parse
906 convert_dates = self.convert_dates
907 if convert_dates is True:
908 convert_dates = []
909 convert_dates = set(convert_dates)
910
911 def is_ok(col):
912 """ return if this col is ok to try for a date parse """
913 if not isinstance(col, compat.string_types):
914 return False
915
916 col_lower = col.lower()
917 if (col_lower.endswith('_at') or
918 col_lower.endswith('_time') or
919 col_lower == 'modified' or
920 col_lower == 'date' or
921 col_lower == 'datetime' or
922 col_lower.startswith('timestamp')):
923 return True
924 return False
925
926 self._process_converter(
927 lambda col, c: self._try_convert_to_date(c),
928 lambda col, c: ((self.keep_default_dates and is_ok(col)) or
929 col in convert_dates))
930
[end of pandas/io/json/json.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
8f322149ebd2a30f2f2d48c06863cd783166fa84
|
TST: split up pandas/tests/indexes/test_multi.py
getting pretty long, could create a sub-dir and split out into separate files.
|
Hi @jreback I would like to work on this issue. I am new to this repo, I think this would be a good issue to solve first.
@jreback I'm interested in working on this issue. Could you please elaborate what exactly has to be done? Thanks!
make a sub-dir
``pandas/tests/indexes/multi`` and move tests to files within that dir. we want to split this up into various files by type of tests, look at ``pandas/tests/indexes/datetimes`` for example (won't necessarily be the same names), idea is to group tests logically. you won't change any tests just move them (and get the setup/linting only as needed).
should have the same number of tests before and after.
@jreback it looks like @xchoudhury is unresponsive or unable to follow up with this item. Is there an appropriate way to step in and help push the work to completion? Thanks!
@elmq0022 apologies! I got busy with the end of the school year and the start of my internship. If you want, you may completely take over for me. Thanks again!
sure - will close this but can take the existing commits and make a new PR
Thanks @xchoudhury. I've cloned the work you've done so far and will work to incorporate @jreback's comments from your previous pull request.
Hey @jreback. From the description this seems like it would be straightforward, but looking more closely at the code, the task is slightly more involved.
Unfortunately the classes in test_multi.py inherit from the Base object in common.py, which has around 60 tests of its own. If I were to break up multi.py and inherit from Base each time, I would create a mess where a duplicate set of the Base tests is run for each new file created.
Also, Base is referenced in datetimelike.py, test_base.py, test_category.py, and test_numeric.py besides test_multi.py. So touching anything in the Base class is prohibited.
Here's what I propose:
1. Remove all the tests defined in test_multi.py from the TestMultiIndex class. This leaves the inherited tests intact.
2. Create a file of pytest fixture(s) for the MultiIndex specific tests.
3. Logically group the tests removed in item 1 into separate files using the fixtures created in item 2 as needed.
Please let me know if this approach works for you, or please suggestion modifications.
Thanks!
I would suggest first moving the things that don't require the class hierarchy, as that may reveal the easiest way to deal with the remaining ones. Generally fixtures would be preferable.
Thanks @WillAyd. That would be a very good first pass to see how things shake out.
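To make the fixture-based approach above concrete, here is a minimal sketch of what a shared fixture and one split-out test module could look like. The directory layout, the `idx` fixture name, and the test itself are illustrative assumptions for this discussion, not code taken from the pandas repository.
```python
# pandas/tests/indexes/multi/conftest.py  (hypothetical path)
import pytest

from pandas import MultiIndex


@pytest.fixture
def idx():
    # Small two-level MultiIndex shared by the split-out test modules.
    return MultiIndex.from_product(
        [["foo", "bar"], [1, 2]], names=["first", "second"]
    )


# pandas/tests/indexes/multi/test_names.py  (hypothetical module)
def test_names_round_trip(idx):
    # Plain test functions that request the fixture do not inherit from Base,
    # so the ~60 shared Base tests are not collected again for each new file.
    assert list(idx.names) == ["first", "second"]
```
Because the fixture lives in `conftest.py`, every module in the sub-directory can request it without any imports, which is what makes grouping the moved tests into separate files straightforward.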
|
2018-06-17T18:02:28Z
|
<patch>
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-4770
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue with index_col and read_html
```
pd.read_html("http://www.camacau.com/changeLang?lang=en_US&url=/statistic_list",infer_types=False,header=0,index_col=0)
```
yields:
```
Traceback (most recent call last)
<ipython-input-114-a13f8ac8a77b> in <module>()
----> 1 foo2 = pd.read_html("http://www.camacau.com/changeLang?lang=en_US&url=/statistic_list",infer_types=False,header=0,index_col=0)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in read_html(io, match, flavor, header, index_col, skiprows, infer_types, attrs)
904 'data (you passed a negative value)')
905 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
--> 906 attrs)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs)
776
777 return [_data_to_frame(table, header, index_col, infer_types, skiprows)
--> 778 for table in tables]
779
780
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _data_to_frame(data, header, index_col, infer_types, skiprows)
674
675 # drop by default
--> 676 df.set_index(cols, inplace=True)
677 if df.index.nlevels == 1:
678 if isnull(df.index.name) or not df.index.name:
/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in set_index(self, keys, drop, append, inplace, verify_integrity)
2833 arrays.append(level)
2834
-> 2835 index = MultiIndex.from_arrays(arrays, names=names)
2836
2837 if verify_integrity and not index.is_unique:
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in from_arrays(cls, arrays, sortorder, names)
1763 if len(arrays) == 1:
1764 name = None if names is None else names[0]
-> 1765 return Index(arrays[0], name=name)
1766
1767 cats = [Categorical.from_array(arr) for arr in arrays]
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in __new__(cls, data, dtype, copy, name, **kwargs)
108 return Int64Index(data, copy=copy, dtype=dtype, name=name)
109
--> 110 subarr = com._asarray_tuplesafe(data, dtype=object)
111 elif np.isscalar(data):
112 raise ValueError('Index(...) must be called with a collection '
/usr/local/lib/python2.7/dist-packages/pandas/core/common.pyc in _asarray_tuplesafe(values, dtype)
1489 # in numpy, leading to the following
1490 result = np.empty(len(values), dtype=object)
-> 1491 result[:] = values
1492
1493 return result
ValueError: could not broadcast input array from shape (11,2) into shape (11)
```
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 ## What is it
6 **pandas** is a Python package providing fast, flexible, and expressive data
7 structures designed to make working with "relational" or "labeled" data both
8 easy and intuitive. It aims to be the fundamental high-level building block for
9 doing practical, **real world** data analysis in Python. Additionally, it has
10 the broader goal of becoming **the most powerful and flexible open source data
11 analysis / manipulation tool available in any language**. It is already well on
12 its way toward this goal.
13
14 ## Main Features
15 Here are just a few of the things that pandas does well:
16
17 - Easy handling of [**missing data**][missing-data] (represented as
18 `NaN`) in floating point as well as non-floating point data
19 - Size mutability: columns can be [**inserted and
20 deleted**][insertion-deletion] from DataFrame and higher dimensional
21 objects
22 - Automatic and explicit [**data alignment**][alignment]: objects can
23 be explicitly aligned to a set of labels, or the user can simply
24 ignore the labels and let `Series`, `DataFrame`, etc. automatically
25 align the data for you in computations
26 - Powerful, flexible [**group by**][groupby] functionality to perform
27 split-apply-combine operations on data sets, for both aggregating
28 and transforming data
29 - Make it [**easy to convert**][conversion] ragged,
30 differently-indexed data in other Python and NumPy data structures
31 into DataFrame objects
32 - Intelligent label-based [**slicing**][slicing], [**fancy
33 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
34 large data sets
35 - Intuitive [**merging**][merging] and [**joining**][joining] data
36 sets
37 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
38 data sets
39 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
40 labels per tick)
41 - Robust IO tools for loading data from [**flat files**][flat-files]
42 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
43 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
44 - [**Time series**][timeseries]-specific functionality: date range
45 generation and frequency conversion, moving window statistics,
46 moving window linear regressions, date shifting and lagging, etc.
47
48
49 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
50 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
51 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
52 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
53 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
54 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
55 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
56 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
57 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
58 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
59 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
60 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
61 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
62 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
63 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
64 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
65 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
66 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
67
68 ## Where to get it
69 The source code is currently hosted on GitHub at:
70 http://github.com/pydata/pandas
71
72 Binary installers for the latest released version are available at the Python
73 package index
74
75 http://pypi.python.org/pypi/pandas/
76
77 And via `easy_install`:
78
79 ```sh
80 easy_install pandas
81 ```
82
83 or `pip`:
84
85 ```sh
86 pip install pandas
87 ```
88
89 ## Dependencies
90 - [NumPy](http://www.numpy.org): 1.6.1 or higher
91 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
92 - [pytz](http://pytz.sourceforge.net)
93 - Needed for time zone support with ``pandas.date_range``
94
95 ### Highly Recommended Dependencies
96 - [numexpr](http://code.google.com/p/numexpr/)
97 - Needed to accelerate some expression evaluation operations
98 - Required by PyTables
99 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
100 - Needed to accelerate certain numerical operations
101
102 ### Optional dependencies
103 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
104 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
105 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
106 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
107 - [statsmodels](http://statsmodels.sourceforge.net/)
108 - Needed for parts of `pandas.stats`
109 - [openpyxl](http://packages.python.org/openpyxl/), [xlrd/xlwt](http://www.python-excel.org/)
110 - openpyxl version 1.6.1 or higher, for writing .xlsx files
111 - xlrd >= 0.9.0
112 - Needed for Excel I/O
113 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
114 - One of the following combinations of libraries is needed to use the
115 top-level [`pandas.read_html`][read-html-docs] function:
116 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
117 recent version of [html5lib][html5lib] is okay.)
118 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
119 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
120 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
121 for reasons as to why you should probably **not** take this approach.
122
123 #### Notes about HTML parsing libraries
124 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
125 either [lxml][lxml] or [html5lib][html5lib] or both.
126 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
127 installed.
128 - You are strongly encouraged to read [HTML reading
129 gotchas][html-gotchas]. It explains issues surrounding the
130 installation and usage of the above three libraries.
131 - You may need to install an older version of
132 [BeautifulSoup4][BeautifulSoup4]:
133 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
134 32-bit Ubuntu/Debian
135 - Additionally, if you're using [Anaconda][Anaconda] you should
136 definitely read [the gotchas about HTML parsing][html-gotchas]
137 libraries
138 - If you're on a system with `apt-get` you can do
139
140 ```sh
141 sudo apt-get build-dep python-lxml
142 ```
143
144 to get the necessary dependencies for installation of [lxml][lxml].
145 This will prevent further headaches down the line.
146
147 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
148 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
149 [lxml]: http://lxml.de
150 [Anaconda]: https://store.continuum.io/cshop/anaconda
151 [NumPy]: http://numpy.scipy.org/
152 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
153 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
154
155 ## Installation from sources
156 To install pandas from source you need Cython in addition to the normal
157 dependencies above. Cython can be installed from pypi:
158
159 ```sh
160 pip install cython
161 ```
162
163 In the `pandas` directory (same one where you found this file after
164 cloning the git repo), execute:
165
166 ```sh
167 python setup.py install
168 ```
169
170 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
171
172 ```sh
173 python setup.py develop
174 ```
175
176 Alternatively, you can use `pip` if you want all the dependencies pulled
177 in automatically (the `-e` option is for installing it in [development
178 mode](http://www.pip-installer.org/en/latest/usage.html)):
179
180 ```sh
181 pip install -e .
182 ```
183
184 On Windows, you will need to install MinGW and execute:
185
186 ```sh
187 python setup.py build --compiler=mingw32
188 python setup.py install
189 ```
190
191 See http://pandas.pydata.org/ for more information.
192
193 ## License
194 BSD
195
196 ## Documentation
197 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
198
199 The Sphinx documentation should provide a good starting point for learning how
200 to use the library. Expect the docs to continue to expand as time goes on.
201
202 ## Background
203 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
204 has been under active development since then.
205
206 ## Discussion and Development
207 Since pandas development is related to a number of other scientific
208 Python projects, questions are welcome on the scipy-user mailing
209 list. Specialized discussions or design issues should take place on
210 the pystatsmodels mailing list / Google group, where
211 ``scikits.statsmodels`` and other libraries will also be discussed:
212
213 http://groups.google.com/group/pystatsmodels
214
[end of README.md]
[start of pandas/io/html.py]
1 """:mod:`pandas.io.html` is a module containing functionality for dealing with
2 HTML IO.
3
4 """
5
6 import os
7 import re
8 import numbers
9 import collections
10
11 from distutils.version import LooseVersion
12
13 import numpy as np
14
15 from pandas import DataFrame, MultiIndex, isnull
16 from pandas.io.common import _is_url, urlopen, parse_url
17 from pandas.compat import range, lrange, lmap, u, map
18 from pandas import compat
19
20
21 try:
22 import bs4
23 except ImportError:
24 _HAS_BS4 = False
25 else:
26 _HAS_BS4 = True
27
28
29 try:
30 import lxml
31 except ImportError:
32 _HAS_LXML = False
33 else:
34 _HAS_LXML = True
35
36
37 try:
38 import html5lib
39 except ImportError:
40 _HAS_HTML5LIB = False
41 else:
42 _HAS_HTML5LIB = True
43
44
45 #############
46 # READ HTML #
47 #############
48 _RE_WHITESPACE = re.compile(r'([\r\n]+|\s{2,})')
49
50
51 def _remove_whitespace(s, regex=_RE_WHITESPACE):
52 """Replace extra whitespace inside of a string with a single space.
53
54 Parameters
55 ----------
56 s : str or unicode
57 The string from which to remove extra whitespace.
58
59 regex : regex
60 The regular expression to use to remove extra whitespace.
61
62 Returns
63 -------
64 subd : str or unicode
65 `s` with all extra whitespace replaced with a single space.
66 """
67 return regex.sub(' ', s.strip())
68
69
70 def _get_skiprows_iter(skiprows):
71 """Get an iterator given an integer, slice or container.
72
73 Parameters
74 ----------
75 skiprows : int, slice, container
76 The iterator to use to skip rows; can also be a slice.
77
78 Raises
79 ------
80 TypeError
81 * If `skiprows` is not a slice, integer, or Container
82
83 Raises
84 ------
85 TypeError
86 * If `skiprows` is not a slice, integer, or Container
87
88 Returns
89 -------
90 it : iterable
91 A proper iterator to use to skip rows of a DataFrame.
92 """
93 if isinstance(skiprows, slice):
94 return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
95 elif isinstance(skiprows, numbers.Integral):
96 return lrange(skiprows)
97 elif isinstance(skiprows, collections.Container):
98 return skiprows
99 else:
100 raise TypeError('{0} is not a valid type for skipping'
101 ' rows'.format(type(skiprows)))
102
103
104 def _read(io):
105 """Try to read from a url, file or string.
106
107 Parameters
108 ----------
109 io : str, unicode, or file-like
110
111 Returns
112 -------
113 raw_text : str
114 """
115 if _is_url(io):
116 with urlopen(io) as url:
117 raw_text = url.read()
118 elif hasattr(io, 'read'):
119 raw_text = io.read()
120 elif os.path.isfile(io):
121 with open(io) as f:
122 raw_text = f.read()
123 elif isinstance(io, compat.string_types):
124 raw_text = io
125 else:
126 raise TypeError("Cannot read object of type "
127 "'{0.__class__.__name__!r}'".format(io))
128 return raw_text
129
130
131 class _HtmlFrameParser(object):
132 """Base class for parsers that parse HTML into DataFrames.
133
134 Parameters
135 ----------
136 io : str or file-like
137 This can be either a string of raw HTML, a valid URL using the HTTP,
138 FTP, or FILE protocols or a file-like object.
139
140 match : str or regex
141 The text to match in the document.
142
143 attrs : dict
144 List of HTML <table> element attributes to match.
145
146 Attributes
147 ----------
148 io : str or file-like
149 raw HTML, URL, or file-like object
150
151 match : regex
152 The text to match in the raw HTML
153
154 attrs : dict-like
155 A dictionary of valid table attributes to use to search for table
156 elements.
157
158 Notes
159 -----
160 To subclass this class effectively you must override the following methods:
161 * :func:`_build_doc`
162 * :func:`_text_getter`
163 * :func:`_parse_td`
164 * :func:`_parse_tables`
165 * :func:`_parse_tr`
166 * :func:`_parse_thead`
167 * :func:`_parse_tbody`
168 * :func:`_parse_tfoot`
169 See each method's respective documentation for details on their
170 functionality.
171 """
172 def __init__(self, io, match, attrs):
173 self.io = io
174 self.match = match
175 self.attrs = attrs
176
177 def parse_tables(self):
178 tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
179 return (self._build_table(table) for table in tables)
180
181 def _parse_raw_data(self, rows):
182 """Parse the raw data into a list of lists.
183
184 Parameters
185 ----------
186 rows : iterable of node-like
187 A list of row elements.
188
189 text_getter : callable
190 A callable that gets the text from an individual node. This must be
191 defined by subclasses.
192
193 column_finder : callable
194 A callable that takes a row node as input and returns a list of the
195 column node in that row. This must be defined by subclasses.
196
197 Raises
198 ------
199 AssertionError
200 * If `text_getter` is not callable
201 * If `column_finder` is not callable
202
203 Returns
204 -------
205 data : list of list of strings
206 """
207 data = [[_remove_whitespace(self._text_getter(col)) for col in
208 self._parse_td(row)] for row in rows]
209 return data
210
211 def _text_getter(self, obj):
212 """Return the text of an individual DOM node.
213
214 Parameters
215 ----------
216 obj : node-like
217 A DOM node.
218
219 Returns
220 -------
221 text : str or unicode
222 The text from an individual DOM node.
223 """
224 raise NotImplementedError
225
226 def _parse_td(self, obj):
227 """Return the td elements from a row element.
228
229 Parameters
230 ----------
231 obj : node-like
232
233 Returns
234 -------
235 columns : list of node-like
236 These are the elements of each row, i.e., the columns.
237 """
238 raise NotImplementedError
239
240 def _parse_tables(self, doc, match, attrs):
241 """Return all tables from the parsed DOM.
242
243 Parameters
244 ----------
245 doc : tree-like
246 The DOM from which to parse the table element.
247
248 match : str or regular expression
249 The text to search for in the DOM tree.
250
251 attrs : dict
252 A dictionary of table attributes that can be used to disambiguate
253 mutliple tables on a page.
254
255 Raises
256 ------
257 AssertionError
258 * If `match` does not match any text in the document.
259
260 Returns
261 -------
262 tables : list of node-like
263 A list of <table> elements to be parsed into raw data.
264 """
265 raise NotImplementedError
266
267 def _parse_tr(self, table):
268 """Return the list of row elements from the parsed table element.
269
270 Parameters
271 ----------
272 table : node-like
273 A table element that contains row elements.
274
275 Returns
276 -------
277 rows : list of node-like
278 A list row elements of a table, usually <tr> or <th> elements.
279 """
280 raise NotImplementedError
281
282 def _parse_thead(self, table):
283 """Return the header of a table.
284
285 Parameters
286 ----------
287 table : node-like
288 A table element that contains row elements.
289
290 Returns
291 -------
292 thead : node-like
293 A <thead>...</thead> element.
294 """
295 raise NotImplementedError
296
297 def _parse_tbody(self, table):
298 """Return the body of the table.
299
300 Parameters
301 ----------
302 table : node-like
303 A table element that contains row elements.
304
305 Returns
306 -------
307 tbody : node-like
308 A <tbody>...</tbody> element.
309 """
310 raise NotImplementedError
311
312 def _parse_tfoot(self, table):
313 """Return the footer of the table if any.
314
315 Parameters
316 ----------
317 table : node-like
318 A table element that contains row elements.
319
320 Returns
321 -------
322 tfoot : node-like
323 A <tfoot>...</tfoot> element.
324 """
325 raise NotImplementedError
326
327 def _build_doc(self):
328 """Return a tree-like object that can be used to iterate over the DOM.
329
330 Returns
331 -------
332 obj : tree-like
333 """
334 raise NotImplementedError
335
336 def _build_table(self, table):
337 header = self._parse_raw_thead(table)
338 body = self._parse_raw_tbody(table)
339 footer = self._parse_raw_tfoot(table)
340 return header, body, footer
341
342 def _parse_raw_thead(self, table):
343 thead = self._parse_thead(table)
344 res = []
345 if thead:
346 res = lmap(self._text_getter, self._parse_th(thead[0]))
347 return np.array(res).squeeze() if res and len(res) == 1 else res
348
349 def _parse_raw_tfoot(self, table):
350 tfoot = self._parse_tfoot(table)
351 res = []
352 if tfoot:
353 res = lmap(self._text_getter, self._parse_td(tfoot[0]))
354 return np.array(res).squeeze() if res and len(res) == 1 else res
355
356 def _parse_raw_tbody(self, table):
357 tbody = self._parse_tbody(table)
358
359 try:
360 res = self._parse_tr(tbody[0])
361 except IndexError:
362 res = self._parse_tr(table)
363 return self._parse_raw_data(res)
364
365
366 class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
367 """HTML to DataFrame parser that uses BeautifulSoup under the hood.
368
369 See Also
370 --------
371 pandas.io.html._HtmlFrameParser
372 pandas.io.html._LxmlFrameParser
373
374 Notes
375 -----
376 Documentation strings for this class are in the base class
377 :class:`pandas.io.html._HtmlFrameParser`.
378 """
379 def __init__(self, *args, **kwargs):
380 super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args,
381 **kwargs)
382 from bs4 import SoupStrainer
383 self._strainer = SoupStrainer('table')
384
385 def _text_getter(self, obj):
386 return obj.text
387
388 def _parse_td(self, row):
389 return row.find_all(('td', 'th'))
390
391 def _parse_tr(self, element):
392 return element.find_all('tr')
393
394 def _parse_th(self, element):
395 return element.find_all('th')
396
397 def _parse_thead(self, table):
398 return table.find_all('thead')
399
400 def _parse_tbody(self, table):
401 return table.find_all('tbody')
402
403 def _parse_tfoot(self, table):
404 return table.find_all('tfoot')
405
406 def _parse_tables(self, doc, match, attrs):
407 element_name = self._strainer.name
408 tables = doc.find_all(element_name, attrs=attrs)
409 if not tables:
410 # known sporadically working release
411 raise AssertionError('No tables found')
412
413 mts = [table.find(text=match) for table in tables]
414 matched_tables = [mt for mt in mts if mt is not None]
415 tables = list(set(mt.find_parent(element_name)
416 for mt in matched_tables))
417
418 if not tables:
419 raise AssertionError("No tables found matching "
420 "'{0}'".format(match.pattern))
421 return tables
422
423 def _setup_build_doc(self):
424 raw_text = _read(self.io)
425 if not raw_text:
426 raise AssertionError('No text parsed from document: '
427 '{0}'.format(self.io))
428 return raw_text
429
430 def _build_doc(self):
431 from bs4 import BeautifulSoup
432 return BeautifulSoup(self._setup_build_doc(), features='html5lib')
433
434
435 def _build_node_xpath_expr(attrs):
436 """Build an xpath expression to simulate bs4's ability to pass in kwargs to
437 search for attributes when using the lxml parser.
438
439 Parameters
440 ----------
441 attrs : dict
442 A dict of HTML attributes. These are NOT checked for validity.
443
444 Returns
445 -------
446 expr : unicode
447 An XPath expression that checks for the given HTML attributes.
448 """
449 # give class attribute as class_ because class is a python keyword
450 if 'class_' in attrs:
451 attrs['class'] = attrs.pop('class_')
452
453 s = (u("@{k}='{v}'").format(k=k, v=v) for k, v in compat.iteritems(attrs))
454 return u('[{0}]').format(' and '.join(s))
455
456
457 _re_namespace = {'re': 'http://exslt.org/regular-expressions'}
458 _valid_schemes = 'http', 'file', 'ftp'
459
460
461 class _LxmlFrameParser(_HtmlFrameParser):
462 """HTML to DataFrame parser that uses lxml under the hood.
463
464 Warning
465 -------
466 This parser can only handle HTTP, FTP, and FILE urls.
467
468 See Also
469 --------
470 _HtmlFrameParser
471 _BeautifulSoupLxmlFrameParser
472
473 Notes
474 -----
475 Documentation strings for this class are in the base class
476 :class:`_HtmlFrameParser`.
477 """
478 def __init__(self, *args, **kwargs):
479 super(_LxmlFrameParser, self).__init__(*args, **kwargs)
480
481 def _text_getter(self, obj):
482 return obj.text_content()
483
484 def _parse_td(self, row):
485 return row.xpath('.//td|.//th')
486
487 def _parse_tr(self, table):
488 expr = './/tr[normalize-space()]'
489 return table.xpath(expr)
490
491 def _parse_tables(self, doc, match, kwargs):
492 pattern = match.pattern
493
494 # check all descendants for the given pattern
495 check_all_expr = u('//*')
496 if pattern:
497 check_all_expr += u("[re:test(text(), '{0}')]").format(pattern)
498
499 # go up the tree until we find a table
500 check_table_expr = '/ancestor::table'
501 xpath_expr = check_all_expr + check_table_expr
502
503 # if any table attributes were given build an xpath expression to
504 # search for them
505 if kwargs:
506 xpath_expr += _build_node_xpath_expr(kwargs)
507 tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
508 if not tables:
509 raise AssertionError("No tables found matching regex "
510 "'{0}'".format(pattern))
511 return tables
512
513 def _build_doc(self):
514 """
515 Raises
516 ------
517 ValueError
518 * If a URL that lxml cannot parse is passed.
519
520 Exception
521 * Any other ``Exception`` thrown. For example, trying to parse a
522 URL that is syntactically correct on a machine with no internet
523 connection will fail.
524
525 See Also
526 --------
527 pandas.io.html._HtmlFrameParser._build_doc
528 """
529 from lxml.html import parse, fromstring, HTMLParser
530 from lxml.etree import XMLSyntaxError
531 parser = HTMLParser(recover=False)
532
533 try:
534 # try to parse the input in the simplest way
535 r = parse(self.io, parser=parser)
536
537 try:
538 r = r.getroot()
539 except AttributeError:
540 pass
541 except (UnicodeDecodeError, IOError):
542 # if the input is a blob of html goop
543 if not _is_url(self.io):
544 r = fromstring(self.io, parser=parser)
545
546 try:
547 r = r.getroot()
548 except AttributeError:
549 pass
550 else:
551 # not a url
552 scheme = parse_url(self.io).scheme
553 if scheme not in _valid_schemes:
554 # lxml can't parse it
555 msg = ('{0} is not a valid url scheme, valid schemes are '
556 '{1}').format(scheme, _valid_schemes)
557 raise ValueError(msg)
558 else:
559 # something else happened: maybe a faulty connection
560 raise
561 else:
562 if not hasattr(r, 'text_content'):
563 raise XMLSyntaxError("no text parsed from document", 0, 0, 0)
564 return r
565
566 def _parse_tbody(self, table):
567 return table.xpath('.//tbody')
568
569 def _parse_thead(self, table):
570 return table.xpath('.//thead')
571
572 def _parse_tfoot(self, table):
573 return table.xpath('.//tfoot')
574
575 def _parse_raw_thead(self, table):
576 expr = './/thead//th'
577 return [_remove_whitespace(x.text_content()) for x in
578 table.xpath(expr)]
579
580 def _parse_raw_tfoot(self, table):
581 expr = './/tfoot//th'
582 return [_remove_whitespace(x.text_content()) for x in
583 table.xpath(expr)]
584
585
586 def _data_to_frame(data, header, index_col, infer_types, skiprows):
587 """Parse a BeautifulSoup table into a DataFrame.
588
589 Parameters
590 ----------
591 data : tuple of lists
592 The raw data to be placed into a DataFrame. This is a list of lists of
593 strings or unicode. If it helps, it can be thought of as a matrix of
594 strings instead.
595
596 header : int or None
597 An integer indicating the row to use for the column header or None
598 indicating no header will be used.
599
600 index_col : int or None
601 An integer indicating the column to use for the index or None
602 indicating no column will be used.
603
604 infer_types : bool
605 Whether to convert numbers and dates.
606
607 skiprows : collections.Container or int or slice
608 Iterable used to skip rows.
609
610 Returns
611 -------
612 df : DataFrame
613 A DataFrame containing the data from `data`
614
615 Raises
616 ------
617 ValueError
618 * If `skiprows` is not found in the rows of the parsed DataFrame.
619
620 Raises
621 ------
622 ValueError
623 * If `skiprows` is not found in the rows of the parsed DataFrame.
624
625 See Also
626 --------
627 read_html
628
629 Notes
630 -----
631 The `data` parameter is guaranteed not to be a list of empty lists.
632 """
633 thead, tbody, tfoot = data
634 columns = thead or None
635 df = DataFrame(tbody, columns=columns)
636
637 if skiprows is not None:
638 it = _get_skiprows_iter(skiprows)
639
640 try:
641 df = df.drop(it)
642 except ValueError:
643 raise ValueError('Labels {0} not found when trying to skip'
644 ' rows'.format(it))
645
646 # convert to numbers/dates where possible
647 # must be sequential since dates trump numbers if both args are given
648 if infer_types:
649 df = df.convert_objects(convert_numeric=True)
650 df = df.convert_objects(convert_dates='coerce')
651
652 if header is not None:
653 header_rows = df.iloc[header]
654
655 if header_rows.ndim == 2:
656 names = header_rows.index
657 df.columns = MultiIndex.from_arrays(header_rows.values,
658 names=names)
659 else:
660 df.columns = header_rows
661
662 df = df.drop(df.index[header])
663
664 if index_col is not None:
665 cols = df.columns[index_col]
666
667 try:
668 cols = cols.tolist()
669 except AttributeError:
670 pass
671
672 # drop by default
673 df.set_index(cols, inplace=True)
674 if df.index.nlevels == 1:
675 if isnull(df.index.name) or not df.index.name:
676 df.index.name = None
677 else:
678 names = [name or None for name in df.index.names]
679 df.index = MultiIndex.from_tuples(df.index.values, names=names)
680
681 return df
682
683
684 _valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser,
685 'html5lib': _BeautifulSoupHtml5LibFrameParser,
686 'bs4': _BeautifulSoupHtml5LibFrameParser}
687
688
689 def _parser_dispatch(flavor):
690 """Choose the parser based on the input flavor.
691
692 Parameters
693 ----------
694 flavor : str
695 The type of parser to use. This must be a valid backend.
696
697 Returns
698 -------
699 cls : _HtmlFrameParser subclass
700 The parser class based on the requested input flavor.
701
702 Raises
703 ------
704 AssertionError
705 * If `flavor` is not a valid backend.
706 ImportError
707 * If you do not have the requested `flavor`
708 """
709 valid_parsers = list(_valid_parsers.keys())
710 if flavor not in valid_parsers:
711 raise AssertionError('"{0!r}" is not a valid flavor, valid flavors are'
712 ' {1}'.format(flavor, valid_parsers))
713
714 if flavor in ('bs4', 'html5lib'):
715 if not _HAS_HTML5LIB:
716 raise ImportError("html5lib not found please install it")
717 if not _HAS_BS4:
718 raise ImportError("bs4 not found please install it")
719 if bs4.__version__ == LooseVersion('4.2.0'):
720 raise AssertionError("You're using a version"
721 " of BeautifulSoup4 (4.2.0) that has been"
722 " known to cause problems on certain"
723 " operating systems such as Debian. "
724 "Please install a version of"
725 " BeautifulSoup4 != 4.2.0, both earlier"
726 " and later releases will work.")
727 else:
728 if not _HAS_LXML:
729 raise ImportError("lxml not found please install it")
730 return _valid_parsers[flavor]
731
732
733 def _validate_parser_flavor(flavor):
734 if flavor is None:
735 flavor = ['lxml', 'bs4']
736 elif isinstance(flavor, compat.string_types):
737 flavor = [flavor]
738 elif isinstance(flavor, collections.Iterable):
739 if not all(isinstance(flav, compat.string_types) for flav in flavor):
740 raise TypeError('{0} is not an iterable of strings'.format(flavor))
741 else:
742 raise TypeError('{0} is not a valid "flavor"'.format(flavor))
743
744 flavor = list(flavor)
745 valid_flavors = list(_valid_parsers.keys())
746
747 if not set(flavor) & set(valid_flavors):
748 raise ValueError('{0} is not a valid set of flavors, valid flavors are'
749 ' {1}'.format(flavor, valid_flavors))
750 return flavor
751
752
753 def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs):
754 # bonus: re.compile is idempotent under function iteration so you can pass
755 # a compiled regex to it and it will return itself
756 flavor = _validate_parser_flavor(flavor)
757 compiled_match = re.compile(match)
758
759 # ugly hack because python 3 DELETES the exception variable!
760 retained = None
761 for flav in flavor:
762 parser = _parser_dispatch(flav)
763 p = parser(io, compiled_match, attrs)
764
765 try:
766 tables = p.parse_tables()
767 except Exception as caught:
768 retained = caught
769 else:
770 break
771 else:
772 raise retained
773
774 return [_data_to_frame(table, header, index_col, infer_types, skiprows)
775 for table in tables]
776
777
778 def read_html(io, match='.+', flavor=None, header=None, index_col=None,
779 skiprows=None, infer_types=True, attrs=None):
780 r"""Read an HTML table into a DataFrame.
781
782 Parameters
783 ----------
784 io : str or file-like
785 A string or file like object that can be either a url, a file-like
786 object, or a raw string containing HTML. Note that lxml only accepts
787 the http, ftp and file url protocols. If you have a URI that starts
788 with ``'https'`` you might removing the ``'s'``.
789
790 match : str or regex, optional, default '.+'
791 The set of tables containing text matching this regex or string will be
792 returned. Unless the HTML is extremely simple you will probably need to
793 pass a non-empty string here. Defaults to '.+' (match any non-empty
794 string). The default value will return all tables contained on a page.
795 This value is converted to a regular expression so that there is
796 consistent behavior between Beautiful Soup and lxml.
797
798 flavor : str, container of strings, default ``None``
799 The parsing engine to use under the hood. 'bs4' and 'html5lib' are
800 synonymous with each other, they are both there for backwards
801 compatibility. The default of ``None`` tries to use ``lxml`` to parse
802 and if that fails it falls back on ``bs4`` + ``html5lib``.
803
804 header : int or array-like or None, optional, default ``None``
805 The row (or rows for a MultiIndex) to use to make the columns headers.
806 Note that this row will be removed from the data.
807
808 index_col : int or array-like or None, optional, default ``None``
809 The column to use to make the index. Note that this column will be
810 removed from the data.
811
812 skiprows : int or collections.Container or slice or None, optional, default ``None``
813 If an integer is given then skip this many rows after parsing the
814 column header. If a sequence of integers is given skip those specific
815 rows (0-based). Note that
816
817 .. code-block:: python
818
819 skiprows == 0
820
821 yields the same result as
822
823 .. code-block:: python
824
825 skiprows is None
826
827 If `skiprows` is a positive integer, say :math:`n`, then
828 it is treated as "skip :math:`n` rows", *not* as "skip the
829 :math:`n^\textrm{th}` row".
830
831 infer_types : bool, optional, default ``True``
832 Whether to convert numeric types and date-appearing strings to numbers
833 and dates, respectively.
834
835 attrs : dict or None, optional, default ``None``
836 This is a dictionary of attributes that you can pass to use to identify
837 the table in the HTML. These are not checked for validity before being
838 passed to lxml or Beautiful Soup. However, these attributes must be
839 valid HTML table attributes to work correctly. For example,
840
841 .. code-block:: python
842
843 attrs = {'id': 'table'}
844
845 is a valid attribute dictionary because the 'id' HTML tag attribute is
846 a valid HTML attribute for *any* HTML tag as per `this document
847 <http://www.w3.org/TR/html-markup/global-attributes.html>`__.
848
849 .. code-block:: python
850
851 attrs = {'asdf': 'table'}
852
853 is *not* a valid attribute dictionary because 'asdf' is not a valid
854 HTML attribute even if it is a valid XML attribute. Valid HTML 4.01
855 table attributes can be found `here
856 <http://www.w3.org/TR/REC-html40/struct/tables.html#h-11.2>`__. A
857 working draft of the HTML 5 spec can be found `here
858 <http://www.w3.org/TR/html-markup/table.html>`__. It contains the
859 latest information on table attributes for the modern web.
860
861 Returns
862 -------
863 dfs : list of DataFrames
864 A list of DataFrames, each of which is the parsed data from each of the
865 tables on the page.
866
867 Notes
868 -----
869 Before using this function you should probably read the :ref:`gotchas about
870 the parser libraries that this function uses <html-gotchas>`.
871
872 There's as little cleaning of the data as possible due to the heterogeneity
873 and general disorder of HTML on the web.
874
875 Expect some cleanup after you call this function. For example,
876 you might need to pass `infer_types=False` and perform manual conversion if
877 the column names are converted to NaN when you pass the `header=0`
878 argument. We try to assume as little as possible about the structure of the
879 table and push the idiosyncrasies of the HTML contained in the table to
880 you, the user.
881
882 This function only searches for <table> elements and only for <tr> and <th>
883 rows and <td> elements within those rows. This could be extended by
884 subclassing one of the parser classes contained in :mod:`pandas.io.html`.
885
886 Similar to :func:`read_csv` the `header` argument is applied **after**
887 `skiprows` is applied.
888
889 This function will *always* return a list of :class:`DataFrame` *or*
890 it will fail, e.g., it will *not* return an empty list.
891
892 Examples
893 --------
894 See the :ref:`read_html documentation in the IO section of the docs
895 <io.read_html>` for many examples of reading HTML.
896 """
897 # Type check here. We don't want to parse only to fail because of an
898 # invalid value of an integer skiprows.
899 if isinstance(skiprows, numbers.Integral) and skiprows < 0:
900 raise AssertionError('cannot skip rows starting from the end of the '
901 'data (you passed a negative value)')
902 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
903 attrs)
904
[end of pandas/io/html.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
ac0ce3c40c666790d65039076cc968b83d8f0403
|
Issue with index_col and read_html
```
pd.read_html("http://www.camacau.com/changeLang?lang=en_US&url=/statistic_list",infer_types=False,header=0,index_col=0)
```
yields:
```
Traceback (most recent call last)
<ipython-input-114-a13f8ac8a77b> in <module>()
----> 1 foo2 = pd.read_html("http://www.camacau.com/changeLang?lang=en_US&url=/statistic_list",infer_types=False,header=0,index_col=0)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in read_html(io, match, flavor, header, index_col, skiprows, infer_types, attrs)
904 'data (you passed a negative value)')
905 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
--> 906 attrs)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs)
776
777 return [_data_to_frame(table, header, index_col, infer_types, skiprows)
--> 778 for table in tables]
779
780
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _data_to_frame(data, header, index_col, infer_types, skiprows)
674
675 # drop by default
--> 676 df.set_index(cols, inplace=True)
677 if df.index.nlevels == 1:
678 if isnull(df.index.name) or not df.index.name:
/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in set_index(self, keys, drop, append, inplace, verify_integrity)
2833 arrays.append(level)
2834
-> 2835 index = MultiIndex.from_arrays(arrays, names=names)
2836
2837 if verify_integrity and not index.is_unique:
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in from_arrays(cls, arrays, sortorder, names)
1763 if len(arrays) == 1:
1764 name = None if names is None else names[0]
-> 1765 return Index(arrays[0], name=name)
1766
1767 cats = [Categorical.from_array(arr) for arr in arrays]
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in __new__(cls, data, dtype, copy, name, **kwargs)
108 return Int64Index(data, copy=copy, dtype=dtype, name=name)
109
--> 110 subarr = com._asarray_tuplesafe(data, dtype=object)
111 elif np.isscalar(data):
112 raise ValueError('Index(...) must be called with a collection '
/usr/local/lib/python2.7/dist-packages/pandas/core/common.pyc in _asarray_tuplesafe(values, dtype)
1489 # in numpy, leading to the following
1490 result = np.empty(len(values), dtype=object)
-> 1491 result[:] = values
1492
1493 return result
ValueError: could not broadcast input array from shape (11,2) into shape (11)
```
|
2013-09-07T04:14:16Z
|
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -167,6 +167,8 @@ Improvements to existing features
- Improve support for converting R datasets to pandas objects (more
informative index for timeseries and numeric, support for factors, dist, and
high-dimensional arrays).
+ - :func:`~pandas.read_html` now supports the ``parse_dates``,
+ ``tupleize_cols`` and ``thousands`` parameters (:issue:`4770`).
API Changes
~~~~~~~~~~~
@@ -373,6 +375,8 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
``core/generic.py`` (:issue:`4435`).
- Refactor cum objects to core/generic.py (:issue:`4435`), note that these have a more numpy-like
function signature.
+ - :func:`~pandas.read_html` now uses ``TextParser`` to parse HTML data from
+ bs4/lxml (:issue:`4770`).
.. _release.bug_fixes-0.13.0:
@@ -538,6 +542,15 @@ Bug Fixes
- Make sure series-series boolean comparions are label based (:issue:`4947`)
- Bug in multi-level indexing with a Timestamp partial indexer (:issue:`4294`)
- Tests/fix for multi-index construction of an all-nan frame (:isue:`4078`)
+ - Fixed a bug where :func:`~pandas.read_html` wasn't correctly inferring
+ values of tables with commas (:issue:`5029`)
+ - Fixed a bug where :func:`~pandas.read_html` wasn't providing a stable
+ ordering of returned tables (:issue:`4770`, :issue:`5029`).
+ - Fixed a bug where :func:`~pandas.read_html` was incorrectly parsing when
+ passed ``index_col=0`` (:issue:`5066`).
+ - Fixed a bug where :func:`~pandas.read_html` was incorrectly infering the
+ type of headers (:issue:`5048`).
+
pandas 0.12.0
-------------
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -7,15 +7,18 @@
import re
import numbers
import collections
+import warnings
from distutils.version import LooseVersion
import numpy as np
-from pandas import DataFrame, MultiIndex, isnull
from pandas.io.common import _is_url, urlopen, parse_url
-from pandas.compat import range, lrange, lmap, u, map
-from pandas import compat
+from pandas.io.parsers import TextParser
+from pandas.compat import (lrange, lmap, u, string_types, iteritems, text_type,
+ raise_with_traceback)
+from pandas.core import common as com
+from pandas import Series
try:
@@ -45,7 +48,7 @@
#############
# READ HTML #
#############
-_RE_WHITESPACE = re.compile(r'([\r\n]+|\s{2,})')
+_RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
def _remove_whitespace(s, regex=_RE_WHITESPACE):
@@ -67,7 +70,7 @@ def _remove_whitespace(s, regex=_RE_WHITESPACE):
return regex.sub(' ', s.strip())
-def _get_skiprows_iter(skiprows):
+def _get_skiprows(skiprows):
"""Get an iterator given an integer, slice or container.
Parameters
@@ -80,11 +83,6 @@ def _get_skiprows_iter(skiprows):
TypeError
* If `skiprows` is not a slice, integer, or Container
- Raises
- ------
- TypeError
- * If `skiprows` is not a slice, integer, or Container
-
Returns
-------
it : iterable
@@ -92,13 +90,12 @@ def _get_skiprows_iter(skiprows):
"""
if isinstance(skiprows, slice):
return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
- elif isinstance(skiprows, numbers.Integral):
- return lrange(skiprows)
- elif isinstance(skiprows, collections.Container):
+ elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
return skiprows
- else:
- raise TypeError('{0} is not a valid type for skipping'
- ' rows'.format(type(skiprows)))
+ elif skiprows is None:
+ return 0
+ raise TypeError('%r is not a valid type for skipping rows' %
+ type(skiprows).__name__)
def _read(io):
@@ -120,11 +117,10 @@ def _read(io):
elif os.path.isfile(io):
with open(io) as f:
raw_text = f.read()
- elif isinstance(io, compat.string_types):
+ elif isinstance(io, string_types):
raw_text = io
else:
- raise TypeError("Cannot read object of type "
- "'{0.__class__.__name__!r}'".format(io))
+ raise TypeError("Cannot read object of type %r" % type(io).__name__)
return raw_text
@@ -194,12 +190,6 @@ def _parse_raw_data(self, rows):
A callable that takes a row node as input and returns a list of the
column node in that row. This must be defined by subclasses.
- Raises
- ------
- AssertionError
- * If `text_getter` is not callable
- * If `column_finder` is not callable
-
Returns
-------
data : list of list of strings
@@ -254,7 +244,7 @@ def _parse_tables(self, doc, match, attrs):
Raises
------
- AssertionError
+ ValueError
* If `match` does not match any text in the document.
Returns
@@ -406,25 +396,28 @@ def _parse_tfoot(self, table):
def _parse_tables(self, doc, match, attrs):
element_name = self._strainer.name
tables = doc.find_all(element_name, attrs=attrs)
+
if not tables:
- # known sporadically working release
- raise AssertionError('No tables found')
+ raise ValueError('No tables found')
- mts = [table.find(text=match) for table in tables]
- matched_tables = [mt for mt in mts if mt is not None]
- tables = list(set(mt.find_parent(element_name)
- for mt in matched_tables))
+ result = []
+ unique_tables = set()
- if not tables:
- raise AssertionError("No tables found matching "
- "'{0}'".format(match.pattern))
- return tables
+ for table in tables:
+ if (table not in unique_tables and
+ table.find(text=match) is not None):
+ result.append(table)
+ unique_tables.add(table)
+
+ if not result:
+ raise ValueError("No tables found matching pattern %r" %
+ match.pattern)
+ return result
def _setup_build_doc(self):
raw_text = _read(self.io)
if not raw_text:
- raise AssertionError('No text parsed from document: '
- '{0}'.format(self.io))
+ raise ValueError('No text parsed from document: %s' % self.io)
return raw_text
def _build_doc(self):
@@ -432,7 +425,7 @@ def _build_doc(self):
return BeautifulSoup(self._setup_build_doc(), features='html5lib')
-def _build_node_xpath_expr(attrs):
+def _build_xpath_expr(attrs):
"""Build an xpath expression to simulate bs4's ability to pass in kwargs to
search for attributes when using the lxml parser.
@@ -450,8 +443,8 @@ def _build_node_xpath_expr(attrs):
if 'class_' in attrs:
attrs['class'] = attrs.pop('class_')
- s = (u("@{k}='{v}'").format(k=k, v=v) for k, v in compat.iteritems(attrs))
- return u('[{0}]').format(' and '.join(s))
+ s = [u("@%s=%r") % (k, v) for k, v in iteritems(attrs)]
+ return u('[%s]') % ' and '.join(s)
_re_namespace = {'re': 'http://exslt.org/regular-expressions'}
@@ -491,23 +484,20 @@ def _parse_tr(self, table):
def _parse_tables(self, doc, match, kwargs):
pattern = match.pattern
- # check all descendants for the given pattern
- check_all_expr = u('//*')
- if pattern:
- check_all_expr += u("[re:test(text(), '{0}')]").format(pattern)
-
- # go up the tree until we find a table
- check_table_expr = '/ancestor::table'
- xpath_expr = check_all_expr + check_table_expr
+ # 1. check all descendants for the given pattern and only search tables
+ # 2. go up the tree until we find a table
+ query = '//table//*[re:test(text(), %r)]/ancestor::table'
+ xpath_expr = u(query) % pattern
# if any table attributes were given build an xpath expression to
# search for them
if kwargs:
- xpath_expr += _build_node_xpath_expr(kwargs)
+ xpath_expr += _build_xpath_expr(kwargs)
+
tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
+
if not tables:
- raise AssertionError("No tables found matching regex "
- "'{0}'".format(pattern))
+ raise ValueError("No tables found matching regex %r" % pattern)
return tables
def _build_doc(self):
@@ -528,6 +518,7 @@ def _build_doc(self):
"""
from lxml.html import parse, fromstring, HTMLParser
from lxml.etree import XMLSyntaxError
+
parser = HTMLParser(recover=False)
try:
@@ -552,8 +543,8 @@ def _build_doc(self):
scheme = parse_url(self.io).scheme
if scheme not in _valid_schemes:
# lxml can't parse it
- msg = ('{0} is not a valid url scheme, valid schemes are '
- '{1}').format(scheme, _valid_schemes)
+ msg = ('%r is not a valid url scheme, valid schemes are '
+ '%s') % (scheme, _valid_schemes)
raise ValueError(msg)
else:
# something else happened: maybe a faulty connection
@@ -583,101 +574,38 @@ def _parse_raw_tfoot(self, table):
table.xpath(expr)]
-def _data_to_frame(data, header, index_col, infer_types, skiprows):
- """Parse a BeautifulSoup table into a DataFrame.
+def _expand_elements(body):
+ lens = Series(lmap(len, body))
+ lens_max = lens.max()
+ not_max = lens[lens != lens_max]
- Parameters
- ----------
- data : tuple of lists
- The raw data to be placed into a DataFrame. This is a list of lists of
- strings or unicode. If it helps, it can be thought of as a matrix of
- strings instead.
-
- header : int or None
- An integer indicating the row to use for the column header or None
- indicating no header will be used.
+ for ind, length in iteritems(not_max):
+ body[ind] += [np.nan] * (lens_max - length)
- index_col : int or None
- An integer indicating the column to use for the index or None
- indicating no column will be used.
- infer_types : bool
- Whether to convert numbers and dates.
+def _data_to_frame(data, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands):
+ head, body, _ = data # _ is footer which is rarely used: ignore for now
- skiprows : collections.Container or int or slice
- Iterable used to skip rows.
+ if head:
+ body = [head] + body
- Returns
- -------
- df : DataFrame
- A DataFrame containing the data from `data`
-
- Raises
- ------
- ValueError
- * If `skiprows` is not found in the rows of the parsed DataFrame.
+ if header is None: # special case when a table has <th> elements
+ header = 0
- Raises
- ------
- ValueError
- * If `skiprows` is not found in the rows of the parsed DataFrame.
-
- See Also
- --------
- read_html
-
- Notes
- -----
- The `data` parameter is guaranteed not to be a list of empty lists.
- """
- thead, tbody, tfoot = data
- columns = thead or None
- df = DataFrame(tbody, columns=columns)
+ # fill out elements of body that are "ragged"
+ _expand_elements(body)
- if skiprows is not None:
- it = _get_skiprows_iter(skiprows)
+ tp = TextParser(body, header=header, index_col=index_col,
+ skiprows=_get_skiprows(skiprows),
+ parse_dates=parse_dates, tupleize_cols=tupleize_cols,
+ thousands=thousands)
+ df = tp.read()
- try:
- df = df.drop(it)
- except ValueError:
- raise ValueError('Labels {0} not found when trying to skip'
- ' rows'.format(it))
-
- # convert to numbers/dates where possible
- # must be sequential since dates trump numbers if both args are given
- if infer_types:
- df = df.convert_objects(convert_numeric=True)
+ if infer_types: # TODO: rm this code so infer_types has no effect in 0.14
df = df.convert_objects(convert_dates='coerce')
-
- if header is not None:
- header_rows = df.iloc[header]
-
- if header_rows.ndim == 2:
- names = header_rows.index
- df.columns = MultiIndex.from_arrays(header_rows.values,
- names=names)
- else:
- df.columns = header_rows
-
- df = df.drop(df.index[header])
-
- if index_col is not None:
- cols = df.columns[index_col]
-
- try:
- cols = cols.tolist()
- except AttributeError:
- pass
-
- # drop by default
- df.set_index(cols, inplace=True)
- if df.index.nlevels == 1:
- if isnull(df.index.name) or not df.index.name:
- df.index.name = None
- else:
- names = [name or None for name in df.index.names]
- df.index = MultiIndex.from_tuples(df.index.values, names=names)
-
+ else:
+ df = df.applymap(text_type)
return df
@@ -701,15 +629,15 @@ def _parser_dispatch(flavor):
Raises
------
- AssertionError
+ ValueError
* If `flavor` is not a valid backend.
ImportError
* If you do not have the requested `flavor`
"""
valid_parsers = list(_valid_parsers.keys())
if flavor not in valid_parsers:
- raise AssertionError('"{0!r}" is not a valid flavor, valid flavors are'
- ' {1}'.format(flavor, valid_parsers))
+ raise ValueError('%r is not a valid flavor, valid flavors are %s' %
+ (flavor, valid_parsers))
if flavor in ('bs4', 'html5lib'):
if not _HAS_HTML5LIB:
@@ -717,46 +645,54 @@ def _parser_dispatch(flavor):
if not _HAS_BS4:
raise ImportError("bs4 not found please install it")
if bs4.__version__ == LooseVersion('4.2.0'):
- raise AssertionError("You're using a version"
- " of BeautifulSoup4 (4.2.0) that has been"
- " known to cause problems on certain"
- " operating systems such as Debian. "
- "Please install a version of"
- " BeautifulSoup4 != 4.2.0, both earlier"
- " and later releases will work.")
+ raise ValueError("You're using a version"
+ " of BeautifulSoup4 (4.2.0) that has been"
+ " known to cause problems on certain"
+ " operating systems such as Debian. "
+ "Please install a version of"
+ " BeautifulSoup4 != 4.2.0, both earlier"
+ " and later releases will work.")
else:
if not _HAS_LXML:
raise ImportError("lxml not found please install it")
return _valid_parsers[flavor]
-def _validate_parser_flavor(flavor):
+def _print_as_set(s):
+ return '{%s}' % ', '.join([com.pprint_thing(el) for el in s])
+
+
+def _validate_flavor(flavor):
if flavor is None:
- flavor = ['lxml', 'bs4']
- elif isinstance(flavor, compat.string_types):
- flavor = [flavor]
+ flavor = 'lxml', 'bs4'
+ elif isinstance(flavor, string_types):
+ flavor = flavor,
elif isinstance(flavor, collections.Iterable):
- if not all(isinstance(flav, compat.string_types) for flav in flavor):
- raise TypeError('{0} is not an iterable of strings'.format(flavor))
+ if not all(isinstance(flav, string_types) for flav in flavor):
+ raise TypeError('Object of type %r is not an iterable of strings' %
+ type(flavor).__name__)
else:
- raise TypeError('{0} is not a valid "flavor"'.format(flavor))
-
- flavor = list(flavor)
- valid_flavors = list(_valid_parsers.keys())
-
- if not set(flavor) & set(valid_flavors):
- raise ValueError('{0} is not a valid set of flavors, valid flavors are'
- ' {1}'.format(flavor, valid_flavors))
+ fmt = '{0!r}' if isinstance(flavor, string_types) else '{0}'
+ fmt += ' is not a valid flavor'
+ raise ValueError(fmt.format(flavor))
+
+ flavor = tuple(flavor)
+ valid_flavors = set(_valid_parsers)
+ flavor_set = set(flavor)
+
+ if not flavor_set & valid_flavors:
+ raise ValueError('%s is not a valid set of flavors, valid flavors are '
+ '%s' % (_print_as_set(flavor_set),
+ _print_as_set(valid_flavors)))
return flavor
-def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs):
- # bonus: re.compile is idempotent under function iteration so you can pass
- # a compiled regex to it and it will return itself
- flavor = _validate_parser_flavor(flavor)
- compiled_match = re.compile(match)
+def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands, attrs):
+ flavor = _validate_flavor(flavor)
+ compiled_match = re.compile(match) # you can pass a compiled regex here
- # ugly hack because python 3 DELETES the exception variable!
+ # hack around python 3 deleting the exception variable
retained = None
for flav in flavor:
parser = _parser_dispatch(flav)
@@ -769,25 +705,26 @@ def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs):
else:
break
else:
- raise retained
+ raise_with_traceback(retained)
- return [_data_to_frame(table, header, index_col, infer_types, skiprows)
+ return [_data_to_frame(table, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands)
for table in tables]
def read_html(io, match='.+', flavor=None, header=None, index_col=None,
- skiprows=None, infer_types=True, attrs=None):
- r"""Read an HTML table into a DataFrame.
+ skiprows=None, infer_types=None, attrs=None, parse_dates=False,
+ tupleize_cols=False, thousands=','):
+ r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
Parameters
----------
io : str or file-like
- A string or file like object that can be either a url, a file-like
- object, or a raw string containing HTML. Note that lxml only accepts
- the http, ftp and file url protocols. If you have a URI that starts
- with ``'https'`` you might removing the ``'s'``.
+ A URL, a file-like object, or a raw string containing HTML. Note that
+ lxml only accepts the http, ftp and file url protocols. If you have a
+ URL that starts with ``'https'`` you might try removing the ``'s'``.
- match : str or regex, optional, default '.+'
+ match : str or compiled regular expression, optional
The set of tables containing text matching this regex or string will be
returned. Unless the HTML is extremely simple you will probably need to
pass a non-empty string here. Defaults to '.+' (match any non-empty
@@ -795,44 +732,30 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
This value is converted to a regular expression so that there is
consistent behavior between Beautiful Soup and lxml.
- flavor : str, container of strings, default ``None``
- The parsing engine to use under the hood. 'bs4' and 'html5lib' are
- synonymous with each other, they are both there for backwards
- compatibility. The default of ``None`` tries to use ``lxml`` to parse
- and if that fails it falls back on ``bs4`` + ``html5lib``.
+ flavor : str or None, container of strings
+ The parsing engine to use. 'bs4' and 'html5lib' are synonymous with
+ each other, they are both there for backwards compatibility. The
+ default of ``None`` tries to use ``lxml`` to parse and if that fails it
+ falls back on ``bs4`` + ``html5lib``.
- header : int or array-like or None, optional, default ``None``
- The row (or rows for a MultiIndex) to use to make the columns headers.
- Note that this row will be removed from the data.
+ header : int or list-like or None, optional
+ The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
+ make the columns headers.
- index_col : int or array-like or None, optional, default ``None``
- The column to use to make the index. Note that this column will be
- removed from the data.
+ index_col : int or list-like or None, optional
+ The column (or list of columns) to use to create the index.
- skiprows : int or collections.Container or slice or None, optional, default ``None``
- If an integer is given then skip this many rows after parsing the
- column header. If a sequence of integers is given skip those specific
- rows (0-based). Note that
+ skiprows : int or list-like or slice or None, optional
+ 0-based. Number of rows to skip after parsing the column integer. If a
+ sequence of integers or a slice is given, will skip the rows indexed by
+ that sequence. Note that a single element sequence means 'skip the nth
+ row' whereas an integer means 'skip n rows'.
- .. code-block:: python
-
- skiprows == 0
-
- yields the same result as
-
- .. code-block:: python
+ infer_types : bool, optional
+        This option is deprecated in 0.13, and will have no effect in 0.14. It
+ defaults to ``True``.
- skiprows is None
-
- If `skiprows` is a positive integer, say :math:`n`, then
- it is treated as "skip :math:`n` rows", *not* as "skip the
- :math:`n^\textrm{th}` row".
-
- infer_types : bool, optional, default ``True``
- Whether to convert numeric types and date-appearing strings to numbers
- and dates, respectively.
-
- attrs : dict or None, optional, default ``None``
+ attrs : dict or None, optional
This is a dictionary of attributes that you can pass to use to identify
the table in the HTML. These are not checked for validity before being
passed to lxml or Beautiful Soup. However, these attributes must be
@@ -858,33 +781,38 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
<http://www.w3.org/TR/html-markup/table.html>`__. It contains the
latest information on table attributes for the modern web.
+ parse_dates : bool, optional
+ See :func:`~pandas.read_csv` for details.
+
+ tupleize_cols : bool, optional
+ If ``False`` try to parse multiple header rows into a
+ :class:`~pandas.MultiIndex`, otherwise return raw tuples. Defaults to
+ ``False``.
+
+ thousands : str, optional
+ Separator to use to parse thousands. Defaults to ``','``.
+
Returns
-------
dfs : list of DataFrames
- A list of DataFrames, each of which is the parsed data from each of the
- tables on the page.
Notes
-----
- Before using this function you should probably read the :ref:`gotchas about
- the parser libraries that this function uses <html-gotchas>`.
+ Before using this function you should read the :ref:`gotchas about the
+ HTML parsing libraries <html-gotchas>`.
- There's as little cleaning of the data as possible due to the heterogeneity
- and general disorder of HTML on the web.
+ Expect to do some cleanup after you call this function. For example, you
+ might need to manually assign column names if the column names are
+ converted to NaN when you pass the `header=0` argument. We try to assume as
+ little as possible about the structure of the table and push the
+ idiosyncrasies of the HTML contained in the table to the user.
- Expect some cleanup after you call this function. For example,
- you might need to pass `infer_types=False` and perform manual conversion if
- the column names are converted to NaN when you pass the `header=0`
- argument. We try to assume as little as possible about the structure of the
- table and push the idiosyncrasies of the HTML contained in the table to
- you, the user.
+ This function searches for ``<table>`` elements and only for ``<tr>``
+ and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>``
+ element in the table. ``<td>`` stands for "table data".
- This function only searches for <table> elements and only for <tr> and <th>
- rows and <td> elements within those rows. This could be extended by
- subclassing one of the parser classes contained in :mod:`pandas.io.html`.
-
- Similar to :func:`read_csv` the `header` argument is applied **after**
- `skiprows` is applied.
+ Similar to :func:`~pandas.read_csv` the `header` argument is applied
+ **after** `skiprows` is applied.
This function will *always* return a list of :class:`DataFrame` *or*
it will fail, e.g., it will *not* return an empty list.
@@ -892,12 +820,21 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
Examples
--------
See the :ref:`read_html documentation in the IO section of the docs
- <io.read_html>` for many examples of reading HTML.
+ <io.read_html>` for some examples of reading in HTML tables.
+
+ See Also
+ --------
+ pandas.read_csv
"""
+ if infer_types is not None:
+ warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
+ else:
+ infer_types = True # TODO: remove in 0.14
+
# Type check here. We don't want to parse only to fail because of an
# invalid value of an integer skiprows.
if isinstance(skiprows, numbers.Integral) and skiprows < 0:
- raise AssertionError('cannot skip rows starting from the end of the '
- 'data (you passed a negative value)')
+ raise ValueError('cannot skip rows starting from the end of the '
+ 'data (you passed a negative value)')
return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
- attrs)
+ parse_dates, tupleize_cols, thousands, attrs)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -606,16 +606,10 @@ def _failover_to_python(self):
raise NotImplementedError
def read(self, nrows=None):
- suppressed_warnings = False
if nrows is not None:
if self.options.get('skip_footer'):
raise ValueError('skip_footer not supported for iteration')
- # # XXX hack
- # if isinstance(self._engine, CParserWrapper):
- # suppressed_warnings = True
- # self._engine.set_error_bad_lines(False)
-
ret = self._engine.read(nrows)
if self.options.get('as_recarray'):
@@ -710,7 +704,6 @@ def _should_parse_dates(self, i):
else:
return (j in self.parse_dates) or (name in self.parse_dates)
-
def _extract_multi_indexer_columns(self, header, index_names, col_names, passed_names=False):
""" extract and return the names, index_names, col_names
header is a list-of-lists returned from the parsers """
@@ -728,12 +721,10 @@ def _extract_multi_indexer_columns(self, header, index_names, col_names, passed_
ic = [ ic ]
sic = set(ic)
- orig_header = list(header)
-
# clean the index_names
index_names = header.pop(-1)
- (index_names, names,
- index_col) = _clean_index_names(index_names, self.index_col)
+ index_names, names, index_col = _clean_index_names(index_names,
+ self.index_col)
# extract the columns
field_count = len(header[0])
@@ -766,7 +757,7 @@ def _maybe_make_multi_index_columns(self, columns, col_names=None):
return columns
def _make_index(self, data, alldata, columns, indexnamerow=False):
- if not _is_index_col(self.index_col) or len(self.index_col) == 0:
+ if not _is_index_col(self.index_col) or not self.index_col:
index = None
elif not self._has_complex_date_col:
@@ -1430,7 +1421,7 @@ def read(self, rows=None):
self._first_chunk = False
columns = list(self.orig_names)
- if len(content) == 0: # pragma: no cover
+ if not len(content): # pragma: no cover
# DataFrame with the right metadata, even though it's length 0
return _get_empty_meta(self.orig_names,
self.index_col,
@@ -1468,8 +1459,8 @@ def _convert_data(self, data):
col = self.orig_names[col]
clean_conv[col] = f
- return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues, self.verbose,
- clean_conv)
+ return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues,
+ self.verbose, clean_conv)
def _infer_columns(self):
names = self.names
@@ -1478,16 +1469,15 @@ def _infer_columns(self):
header = self.header
# we have a mi columns, so read and extra line
- if isinstance(header,(list,tuple,np.ndarray)):
+ if isinstance(header, (list, tuple, np.ndarray)):
have_mi_columns = True
- header = list(header) + [header[-1]+1]
+ header = list(header) + [header[-1] + 1]
else:
have_mi_columns = False
- header = [ header ]
+ header = [header]
columns = []
for level, hr in enumerate(header):
-
if len(self.buf) > 0:
line = self.buf[0]
else:
@@ -1521,10 +1511,11 @@ def _infer_columns(self):
if names is not None:
if len(names) != len(columns[0]):
- raise Exception('Number of passed names did not match '
- 'number of header fields in the file')
+ raise ValueError('Number of passed names did not match '
+ 'number of header fields in the file')
if len(columns) > 1:
- raise Exception('Cannot pass names with multi-index columns')
+ raise TypeError('Cannot pass names with multi-index '
+ 'columns')
columns = [ names ]
else:
</patch>
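
A minimal usage sketch of the `read_html` signature documented in the patch above; the URL and table attributes are placeholders, not taken from the original document.

```python
import pandas as pd

# read_html always returns a list of DataFrames (or raises); it never returns
# an empty list.
dfs = pd.read_html(
    "https://example.com/tables.html",  # placeholder URL
    match="Population",                 # regex selecting the tables of interest
    flavor=["lxml", "bs4"],             # try lxml first, fall back to bs4 + html5lib
    header=0,
    index_col=0,
    skiprows=1,
    attrs={"class": "wikitable"},       # identify the table by its HTML attributes
    thousands=",",                      # strip thousands separators while parsing
)
first_table = dfs[0]
```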
|
[]
|
[]
| ||||
pandas-dev__pandas-25462
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
assert_*_equal issues
The function `assert_almost_equal` may return the return value of other functions like `assert_frame_equal` or `assert_series_equal`.
But `assert_frame_equal` does not return any value explicitly, so it will always return None.
Also, `assert_series_equal` may or may not return a value explicitly.
Are those assertion functions usable, or am I overlooking something?
</issue>
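
A minimal sketch (not part of the original report) of the confusion described above: for DataFrame inputs, `assert_almost_equal` delegates to `assert_frame_equal`, which returns None, so truth-testing the return value is misleading.

```python
import pandas as pd
from pandas.util.testing import assert_almost_equal

left = pd.DataFrame({"a": [1.0, 2.0]})
right = pd.DataFrame({"a": [1.0, 2.0]})

# The frames match, so no AssertionError is raised ...
result = assert_almost_equal(left, right)

# ... but the DataFrame branch returns whatever assert_frame_equal returns,
# which is None, so the result cannot be used as a pass/fail flag.
print(result)  # None
```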
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 moving window linear regressions, date shifting and lagging, etc.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org): 1.13.3 or higher
168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
172 for recommended and optional dependencies.
173
174 ## Installation from sources
175 To install pandas from source you need Cython in addition to the normal
176 dependencies above. Cython can be installed from pypi:
177
178 ```sh
179 pip install cython
180 ```
181
182 In the `pandas` directory (same one where you found this file after
183 cloning the git repo), execute:
184
185 ```sh
186 python setup.py install
187 ```
188
189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
190
191 ```sh
192 python setup.py develop
193 ```
194
195 Alternatively, you can use `pip` if you want all the dependencies pulled
196 in automatically (the `-e` option is for installing it in [development
197 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
198
199 ```sh
200 pip install -e .
201 ```
202
203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
204
205 ## License
206 [BSD 3](LICENSE)
207
208 ## Documentation
209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
224
225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
226
227 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
228
229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
230
231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
232
233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
234
235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
236
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
94c8c94dccfa7934699c314420f0898d81425519
|
assert_*_equal issues
The function `assert_almost_equal` may return the return value of other functions like `assert_frame_equal` or `assert_series_equal`.
But `assert_frame_equal` does not return any value explicitly, so it will always return None.
Also, `assert_series_equal` may or may not return a value explicitly.
Are those assertion functions usable, or am I overlooking something?
|
`assert_almost_equal` is not part of the public API, can you please show an example of what's wrong?
I was not sure how to use `assert_almost_equal`: despite the name starting with `assert`, I looked at the code, saw return statements, and figured I should check the truth of the returned value.
Maybe the docstring could specify that an `AssertionError` is raised when the assertion fails.
If that's not a public function, then please ignore this issue.
Nevertheless, I think a public function on par with NumPy's `allclose` would be very useful (with the ability to combine absolute and relative errors).
That function is not public, but it does look like the return statements are redundant, so this will stay open as a code cleanup issue. The docstring improvement would also be welcome.
@chris-b1 if I understand this correctly, the assertion functions (`assert_frame_equal` and `assert_series_equal`) will remain as is, but their return values need not be returned? Is the only code cleanup the removal of the `return` keyword?
|
2019-02-27T18:54:51Z
|
<patch>
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -281,25 +281,25 @@ def assert_almost_equal(left, right, check_dtype="equiv",
"""
if isinstance(left, pd.Index):
- return assert_index_equal(left, right,
- check_exact=False,
- exact=check_dtype,
- check_less_precise=check_less_precise,
- **kwargs)
+ assert_index_equal(left, right,
+ check_exact=False,
+ exact=check_dtype,
+ check_less_precise=check_less_precise,
+ **kwargs)
elif isinstance(left, pd.Series):
- return assert_series_equal(left, right,
- check_exact=False,
- check_dtype=check_dtype,
- check_less_precise=check_less_precise,
- **kwargs)
+ assert_series_equal(left, right,
+ check_exact=False,
+ check_dtype=check_dtype,
+ check_less_precise=check_less_precise,
+ **kwargs)
elif isinstance(left, pd.DataFrame):
- return assert_frame_equal(left, right,
- check_exact=False,
- check_dtype=check_dtype,
- check_less_precise=check_less_precise,
- **kwargs)
+ assert_frame_equal(left, right,
+ check_exact=False,
+ check_dtype=check_dtype,
+ check_less_precise=check_less_precise,
+ **kwargs)
else:
# Other sequences.
@@ -317,7 +317,7 @@ def assert_almost_equal(left, right, check_dtype="equiv",
else:
obj = "Input"
assert_class_equal(left, right, obj=obj)
- return _testing.assert_almost_equal(
+ _testing.assert_almost_equal(
left, right,
check_dtype=check_dtype,
check_less_precise=check_less_precise,
@@ -355,7 +355,7 @@ def _check_isinstance(left, right, cls):
def assert_dict_equal(left, right, compare_keys=True):
_check_isinstance(left, right, dict)
- return _testing.assert_dict_equal(left, right, compare_keys=compare_keys)
+ _testing.assert_dict_equal(left, right, compare_keys=compare_keys)
def randbool(size=(), p=0.5):
@@ -717,11 +717,12 @@ def isiterable(obj):
return hasattr(obj, '__iter__')
-def is_sorted(seq):
+def assert_is_sorted(seq):
+ """Assert that the sequence is sorted."""
if isinstance(seq, (Index, Series)):
seq = seq.values
# sorting does not change precisions
- return assert_numpy_array_equal(seq, np.sort(np.array(seq)))
+ assert_numpy_array_equal(seq, np.sort(np.array(seq)))
def assert_categorical_equal(left, right, check_dtype=True,
@@ -911,8 +912,6 @@ def _raise(left, right, err_msg):
if isinstance(left, np.ndarray) and isinstance(right, np.ndarray):
assert_attr_equal('dtype', left, right, obj=obj)
- return True
-
def assert_extension_array_equal(left, right, check_dtype=True,
check_less_precise=False,
@@ -1073,12 +1072,10 @@ def assert_series_equal(left, right, check_dtype=True,
# .values is an ndarray, but ._values is the ExtensionArray.
# TODO: Use .array
assert is_extension_array_dtype(right.dtype)
- return assert_extension_array_equal(left._values, right._values)
-
+ assert_extension_array_equal(left._values, right._values)
elif (is_extension_array_dtype(left) and not is_categorical_dtype(left) and
is_extension_array_dtype(right) and not is_categorical_dtype(right)):
- return assert_extension_array_equal(left.array, right.array)
-
+ assert_extension_array_equal(left.array, right.array)
else:
_testing.assert_almost_equal(left.get_values(), right.get_values(),
check_less_precise=check_less_precise,
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-37149
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: GroupBy().fillna() performance regression
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
```python
import pandas as pd
import numpy as np
N = 2000
df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
df = df.sort_values("A").set_index("A")
df["B"] = df.groupby("A")["B"].fillna(method="ffill")
```
#### Problem description
The groupby + fillna gets extremely slow as N increases.
This is a regression from 1.0.5->1.1.0.
Note: if I remove the `.set_index("A")` it's fast again.
#### Expected Output
Same output, just faster.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : d9fff2792bf16178d4e450fe7384244e50635733
python : 3.7.8.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.110-1.el7.elrepo.x86_64
Version : #1 SMP Fri Jan 5 11:35:48 EST 2018
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 49.6.0.post20200917
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
</issue>
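
A rough timing sketch (not from the original report) for checking how the reproduction above scales with N; the chosen sizes are arbitrary.

```python
import time

import numpy as np
import pandas as pd

for N in (2_000, 8_000, 32_000):
    df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
    df = df.sort_values("A").set_index("A")

    start = time.perf_counter()
    df["B"] = df.groupby("A")["B"].fillna(method="ffill")
    elapsed = time.perf_counter() - start

    # On an affected version the elapsed time grows much faster than linearly in N.
    print(f"N={N}: {elapsed:.3f}s")
```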
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/compat/_optional.py]
1 import distutils.version
2 import importlib
3 import types
4 import warnings
5
6 # Update install.rst when updating versions!
7
8 VERSIONS = {
9 "bs4": "4.6.0",
10 "bottleneck": "1.2.1",
11 "fsspec": "0.7.4",
12 "fastparquet": "0.3.2",
13 "gcsfs": "0.6.0",
14 "lxml.etree": "4.3.0",
15 "matplotlib": "2.2.3",
16 "numexpr": "2.6.8",
17 "odfpy": "1.3.0",
18 "openpyxl": "2.5.7",
19 "pandas_gbq": "0.12.0",
20 "pyarrow": "0.15.0",
21 "pytest": "5.0.1",
22 "pyxlsb": "1.0.6",
23 "s3fs": "0.4.0",
24 "scipy": "1.2.0",
25 "sqlalchemy": "1.2.8",
26 "tables": "3.5.1",
27 "tabulate": "0.8.3",
28 "xarray": "0.12.0",
29 "xlrd": "1.2.0",
30 "xlwt": "1.3.0",
31 "xlsxwriter": "1.0.2",
32 "numba": "0.46.0",
33 }
34
35 # A mapping from import name to package name (on PyPI) for packages where
36 # these two names are different.
37
38 INSTALL_MAPPING = {
39 "bs4": "beautifulsoup4",
40 "bottleneck": "Bottleneck",
41 "lxml.etree": "lxml",
42 "odf": "odfpy",
43 "pandas_gbq": "pandas-gbq",
44 "sqlalchemy": "SQLAlchemy",
45 "jinja2": "Jinja2",
46 }
47
48
49 def _get_version(module: types.ModuleType) -> str:
50 version = getattr(module, "__version__", None)
51 if version is None:
52 # xlrd uses a capitalized attribute name
53 version = getattr(module, "__VERSION__", None)
54
55 if version is None:
56 raise ImportError(f"Can't determine version for {module.__name__}")
57 return version
58
59
60 def import_optional_dependency(
61 name: str, extra: str = "", raise_on_missing: bool = True, on_version: str = "raise"
62 ):
63 """
64 Import an optional dependency.
65
66 By default, if a dependency is missing an ImportError with a nice
67 message will be raised. If a dependency is present, but too old,
68 we raise.
69
70 Parameters
71 ----------
72 name : str
73 The module name. This should be top-level only, so that the
74 version may be checked.
75 extra : str
76 Additional text to include in the ImportError message.
77 raise_on_missing : bool, default True
78 Whether to raise if the optional dependency is not found.
79 When False and the module is not present, None is returned.
80 on_version : str {'raise', 'warn'}
81 What to do when a dependency's version is too old.
82
83 * raise : Raise an ImportError
84 * warn : Warn that the version is too old. Returns None
85 * ignore: Return the module, even if the version is too old.
86 It's expected that users validate the version locally when
87 using ``on_version="ignore"`` (see. ``io/html.py``)
88
89 Returns
90 -------
91 maybe_module : Optional[ModuleType]
92 The imported module, when found and the version is correct.
93 None is returned when the package is not found and `raise_on_missing`
94 is False, or when the package's version is too old and `on_version`
95 is ``'warn'``.
96 """
97
98 package_name = INSTALL_MAPPING.get(name)
99 install_name = package_name if package_name is not None else name
100
101 msg = (
102 f"Missing optional dependency '{install_name}'. {extra} "
103 f"Use pip or conda to install {install_name}."
104 )
105 try:
106 module = importlib.import_module(name)
107 except ImportError:
108 if raise_on_missing:
109 raise ImportError(msg) from None
110 else:
111 return None
112
113 minimum_version = VERSIONS.get(name)
114 if minimum_version:
115 version = _get_version(module)
116 if distutils.version.LooseVersion(version) < minimum_version:
117 assert on_version in {"warn", "raise", "ignore"}
118 msg = (
119 f"Pandas requires version '{minimum_version}' or newer of '{name}' "
120 f"(version '{version}' currently installed)."
121 )
122 if on_version == "warn":
123 warnings.warn(msg, UserWarning)
124 return None
125 elif on_version == "raise":
126 raise ImportError(msg)
127
128 return module
129
[end of pandas/compat/_optional.py]
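
A minimal usage sketch of `import_optional_dependency` as defined in the listing above; the module names are just examples.

```python
from pandas.compat._optional import import_optional_dependency

# Raise a friendly ImportError if lxml.etree is missing or older than the
# minimum version registered in VERSIONS.
lxml_etree = import_optional_dependency("lxml.etree", extra="read_html requires lxml.")

# Return None instead of raising when the dependency is absent, so callers
# can fall back to a pure-Python code path.
maybe_numba = import_optional_dependency("numba", raise_on_missing=False)
if maybe_numba is None:
    pass  # numba not installed; use the non-accelerated implementation
```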
[start of pandas/core/config_init.py]
1 """
2 This module is imported from the pandas package __init__.py file
3 in order to ensure that the core.config options registered here will
4 be available as soon as the user loads the package. if register_option
5 is invoked inside specific modules, they will not be registered until that
6 module is imported, which may or may not be a problem.
7
8 If you need to make sure options are available even before a certain
9 module is imported, register them here rather than in the module.
10
11 """
12 import warnings
13
14 import pandas._config.config as cf
15 from pandas._config.config import (
16 is_bool,
17 is_callable,
18 is_instance_factory,
19 is_int,
20 is_nonnegative_int,
21 is_one_of_factory,
22 is_text,
23 )
24
25 # compute
26
27 use_bottleneck_doc = """
28 : bool
29 Use the bottleneck library to accelerate if it is installed,
30 the default is True
31 Valid values: False,True
32 """
33
34
35 def use_bottleneck_cb(key):
36 from pandas.core import nanops
37
38 nanops.set_use_bottleneck(cf.get_option(key))
39
40
41 use_numexpr_doc = """
42 : bool
43 Use the numexpr library to accelerate computation if it is installed,
44 the default is True
45 Valid values: False,True
46 """
47
48
49 def use_numexpr_cb(key):
50 from pandas.core.computation import expressions
51
52 expressions.set_use_numexpr(cf.get_option(key))
53
54
55 use_numba_doc = """
56 : bool
57 Use the numba engine option for select operations if it is installed,
58 the default is False
59 Valid values: False,True
60 """
61
62
63 def use_numba_cb(key):
64 from pandas.core.util import numba_
65
66 numba_.set_use_numba(cf.get_option(key))
67
68
69 with cf.config_prefix("compute"):
70 cf.register_option(
71 "use_bottleneck",
72 True,
73 use_bottleneck_doc,
74 validator=is_bool,
75 cb=use_bottleneck_cb,
76 )
77 cf.register_option(
78 "use_numexpr", True, use_numexpr_doc, validator=is_bool, cb=use_numexpr_cb
79 )
80 cf.register_option(
81 "use_numba", False, use_numba_doc, validator=is_bool, cb=use_numba_cb
82 )
83 #
84 # options from the "display" namespace
85
86 pc_precision_doc = """
87 : int
88 Floating point output precision in terms of number of places after the
89 decimal, for regular formatting as well as scientific notation. Similar
90 to ``precision`` in :meth:`numpy.set_printoptions`.
91 """
92
93 pc_colspace_doc = """
94 : int
95 Default space for DataFrame columns.
96 """
97
98 pc_max_rows_doc = """
99 : int
100 If max_rows is exceeded, switch to truncate view. Depending on
101 `large_repr`, objects are either centrally truncated or printed as
102 a summary view. 'None' value means unlimited.
103
104 In case python/IPython is running in a terminal and `large_repr`
105 equals 'truncate' this can be set to 0 and pandas will auto-detect
106 the height of the terminal and print a truncated object which fits
107 the screen height. The IPython notebook, IPython qtconsole, or
108 IDLE do not run in a terminal and hence it is not possible to do
109 correct auto-detection.
110 """
111
112 pc_min_rows_doc = """
113 : int
114 The numbers of rows to show in a truncated view (when `max_rows` is
115 exceeded). Ignored when `max_rows` is set to None or 0. When set to
116 None, follows the value of `max_rows`.
117 """
118
119 pc_max_cols_doc = """
120 : int
121 If max_cols is exceeded, switch to truncate view. Depending on
122 `large_repr`, objects are either centrally truncated or printed as
123 a summary view. 'None' value means unlimited.
124
125 In case python/IPython is running in a terminal and `large_repr`
126 equals 'truncate' this can be set to 0 and pandas will auto-detect
127 the width of the terminal and print a truncated object which fits
128 the screen width. The IPython notebook, IPython qtconsole, or IDLE
129 do not run in a terminal and hence it is not possible to do
130 correct auto-detection.
131 """
132
133 pc_max_categories_doc = """
134 : int
135 This sets the maximum number of categories pandas should output when
136 printing out a `Categorical` or a Series of dtype "category".
137 """
138
139 pc_max_info_cols_doc = """
140 : int
141 max_info_columns is used in DataFrame.info method to decide if
142 per column information will be printed.
143 """
144
145 pc_nb_repr_h_doc = """
146 : boolean
147 When True, IPython notebook will use html representation for
148 pandas objects (if it is available).
149 """
150
151 pc_pprint_nest_depth = """
152 : int
153 Controls the number of nested levels to process when pretty-printing
154 """
155
156 pc_multi_sparse_doc = """
157 : boolean
158 "sparsify" MultiIndex display (don't display repeated
159 elements in outer levels within groups)
160 """
161
162 float_format_doc = """
163 : callable
164 The callable should accept a floating point number and return
165 a string with the desired format of the number. This is used
166 in some places like SeriesFormatter.
167 See formats.format.EngFormatter for an example.
168 """
169
170 max_colwidth_doc = """
171 : int or None
172 The maximum width in characters of a column in the repr of
173 a pandas data structure. When the column overflows, a "..."
174 placeholder is embedded in the output. A 'None' value means unlimited.
175 """
176
177 colheader_justify_doc = """
178 : 'left'/'right'
179 Controls the justification of column headers. used by DataFrameFormatter.
180 """
181
182 pc_expand_repr_doc = """
183 : boolean
184 Whether to print out the full DataFrame repr for wide DataFrames across
185 multiple lines, `max_columns` is still respected, but the output will
186 wrap-around across multiple "pages" if its width exceeds `display.width`.
187 """
188
189 pc_show_dimensions_doc = """
190 : boolean or 'truncate'
191 Whether to print out dimensions at the end of DataFrame repr.
192 If 'truncate' is specified, only print out the dimensions if the
193 frame is truncated (e.g. not display all rows and/or columns)
194 """
195
196 pc_east_asian_width_doc = """
197 : boolean
198 Whether to use the Unicode East Asian Width to calculate the display text
199 width.
200 Enabling this may affect to the performance (default: False)
201 """
202
203 pc_ambiguous_as_wide_doc = """
204 : boolean
205 Whether to handle Unicode characters belong to Ambiguous as Wide (width=2)
206 (default: False)
207 """
208
209 pc_latex_repr_doc = """
210 : boolean
211 Whether to produce a latex DataFrame representation for jupyter
212 environments that support it.
213 (default: False)
214 """
215
216 pc_table_schema_doc = """
217 : boolean
218 Whether to publish a Table Schema representation for frontends
219 that support it.
220 (default: False)
221 """
222
223 pc_html_border_doc = """
224 : int
225 A ``border=value`` attribute is inserted in the ``<table>`` tag
226 for the DataFrame HTML repr.
227 """
228
229 pc_html_use_mathjax_doc = """\
230 : boolean
231 When True, Jupyter notebook will process table contents using MathJax,
232 rendering mathematical expressions enclosed by the dollar symbol.
233 (default: True)
234 """
235
236 pc_width_doc = """
237 : int
238 Width of the display in characters. In case python/IPython is running in
239 a terminal this can be set to None and pandas will correctly auto-detect
240 the width.
241 Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
242 terminal and hence it is not possible to correctly detect the width.
243 """
244
245 pc_chop_threshold_doc = """
246 : float or None
247 if set to a float value, all float values smaller then the given threshold
248 will be displayed as exactly 0 by repr and friends.
249 """
250
251 pc_max_seq_items = """
252 : int or None
253 When pretty-printing a long sequence, no more then `max_seq_items`
254 will be printed. If items are omitted, they will be denoted by the
255 addition of "..." to the resulting string.
256
257 If set to None, the number of items to be printed is unlimited.
258 """
259
260 pc_max_info_rows_doc = """
261 : int or None
262 df.info() will usually show null-counts for each column.
263 For large frames this can be quite slow. max_info_rows and max_info_cols
264 limit this null check only to frames with smaller dimensions than
265 specified.
266 """
267
268 pc_large_repr_doc = """
269 : 'truncate'/'info'
270 For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
271 show a truncated table (the default from 0.13), or switch to the view from
272 df.info() (the behaviour in earlier versions of pandas).
273 """
274
275 pc_memory_usage_doc = """
276 : bool, string or None
277 This specifies if the memory usage of a DataFrame should be displayed when
278 df.info() is called. Valid values True,False,'deep'
279 """
280
281 pc_latex_escape = """
282 : bool
283 This specifies if the to_latex method of a Dataframe uses escapes special
284 characters.
285 Valid values: False,True
286 """
287
288 pc_latex_longtable = """
289 :bool
290 This specifies if the to_latex method of a Dataframe uses the longtable
291 format.
292 Valid values: False,True
293 """
294
295 pc_latex_multicolumn = """
296 : bool
297 This specifies if the to_latex method of a Dataframe uses multicolumns
298 to pretty-print MultiIndex columns.
299 Valid values: False,True
300 """
301
302 pc_latex_multicolumn_format = """
303 : string
304 This specifies the format for multicolumn headers.
305 Can be surrounded with '|'.
306 Valid values: 'l', 'c', 'r', 'p{<width>}'
307 """
308
309 pc_latex_multirow = """
310 : bool
311 This specifies if the to_latex method of a Dataframe uses multirows
312 to pretty-print MultiIndex rows.
313 Valid values: False,True
314 """
315
316
317 def table_schema_cb(key):
318 from pandas.io.formats.printing import enable_data_resource_formatter
319
320 enable_data_resource_formatter(cf.get_option(key))
321
322
323 def is_terminal() -> bool:
324 """
325 Detect if Python is running in a terminal.
326
327 Returns True if Python is running in a terminal or False if not.
328 """
329 try:
330 # error: Name 'get_ipython' is not defined
331 ip = get_ipython() # type: ignore[name-defined]
332 except NameError: # assume standard Python interpreter in a terminal
333 return True
334 else:
335 if hasattr(ip, "kernel"): # IPython as a Jupyter kernel
336 return False
337 else: # IPython in a terminal
338 return True
339
340
341 with cf.config_prefix("display"):
342 cf.register_option("precision", 6, pc_precision_doc, validator=is_nonnegative_int)
343 cf.register_option(
344 "float_format",
345 None,
346 float_format_doc,
347 validator=is_one_of_factory([None, is_callable]),
348 )
349 cf.register_option("column_space", 12, validator=is_int)
350 cf.register_option(
351 "max_info_rows",
352 1690785,
353 pc_max_info_rows_doc,
354 validator=is_instance_factory((int, type(None))),
355 )
356 cf.register_option("max_rows", 60, pc_max_rows_doc, validator=is_nonnegative_int)
357 cf.register_option(
358 "min_rows",
359 10,
360 pc_min_rows_doc,
361 validator=is_instance_factory([type(None), int]),
362 )
363 cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int)
364
365 def _deprecate_negative_int_max_colwidth(key):
366 value = cf.get_option(key)
367 if value is not None and value < 0:
368 warnings.warn(
369 "Passing a negative integer is deprecated in version 1.0 and "
370 "will not be supported in future version. Instead, use None "
371 "to not limit the column width.",
372 FutureWarning,
373 stacklevel=4,
374 )
375
376 cf.register_option(
377 # TODO(2.0): change `validator=is_nonnegative_int` see GH#31569
378 "max_colwidth",
379 50,
380 max_colwidth_doc,
381 validator=is_instance_factory([type(None), int]),
382 cb=_deprecate_negative_int_max_colwidth,
383 )
384 if is_terminal():
385 max_cols = 0 # automatically determine optimal number of columns
386 else:
387 max_cols = 20 # cannot determine optimal number of columns
388 cf.register_option(
389 "max_columns", max_cols, pc_max_cols_doc, validator=is_nonnegative_int
390 )
391 cf.register_option(
392 "large_repr",
393 "truncate",
394 pc_large_repr_doc,
395 validator=is_one_of_factory(["truncate", "info"]),
396 )
397 cf.register_option("max_info_columns", 100, pc_max_info_cols_doc, validator=is_int)
398 cf.register_option(
399 "colheader_justify", "right", colheader_justify_doc, validator=is_text
400 )
401 cf.register_option("notebook_repr_html", True, pc_nb_repr_h_doc, validator=is_bool)
402 cf.register_option("pprint_nest_depth", 3, pc_pprint_nest_depth, validator=is_int)
403 cf.register_option("multi_sparse", True, pc_multi_sparse_doc, validator=is_bool)
404 cf.register_option("expand_frame_repr", True, pc_expand_repr_doc)
405 cf.register_option(
406 "show_dimensions",
407 "truncate",
408 pc_show_dimensions_doc,
409 validator=is_one_of_factory([True, False, "truncate"]),
410 )
411 cf.register_option("chop_threshold", None, pc_chop_threshold_doc)
412 cf.register_option("max_seq_items", 100, pc_max_seq_items)
413 cf.register_option(
414 "width", 80, pc_width_doc, validator=is_instance_factory([type(None), int])
415 )
416 cf.register_option(
417 "memory_usage",
418 True,
419 pc_memory_usage_doc,
420 validator=is_one_of_factory([None, True, False, "deep"]),
421 )
422 cf.register_option(
423 "unicode.east_asian_width", False, pc_east_asian_width_doc, validator=is_bool
424 )
425 cf.register_option(
426 "unicode.ambiguous_as_wide", False, pc_east_asian_width_doc, validator=is_bool
427 )
428 cf.register_option("latex.repr", False, pc_latex_repr_doc, validator=is_bool)
429 cf.register_option("latex.escape", True, pc_latex_escape, validator=is_bool)
430 cf.register_option("latex.longtable", False, pc_latex_longtable, validator=is_bool)
431 cf.register_option(
432 "latex.multicolumn", True, pc_latex_multicolumn, validator=is_bool
433 )
434 cf.register_option(
435 "latex.multicolumn_format", "l", pc_latex_multicolumn, validator=is_text
436 )
437 cf.register_option("latex.multirow", False, pc_latex_multirow, validator=is_bool)
438 cf.register_option(
439 "html.table_schema",
440 False,
441 pc_table_schema_doc,
442 validator=is_bool,
443 cb=table_schema_cb,
444 )
445 cf.register_option("html.border", 1, pc_html_border_doc, validator=is_int)
446 cf.register_option(
447 "html.use_mathjax", True, pc_html_use_mathjax_doc, validator=is_bool
448 )
449
450 tc_sim_interactive_doc = """
451 : boolean
452 Whether to simulate interactive mode for purposes of testing
453 """
454
455 with cf.config_prefix("mode"):
456 cf.register_option("sim_interactive", False, tc_sim_interactive_doc)
457
458 use_inf_as_null_doc = """
459 : boolean
460 use_inf_as_null has been deprecated and will be removed in a future
461 version. Use `use_inf_as_na` instead.
462 """
463
464 use_inf_as_na_doc = """
465 : boolean
466 True means treat None, NaN, INF, -INF as NA (old way),
467 False means None and NaN are null, but INF, -INF are not NA
468 (new way).
469 """
470
471 # We don't want to start importing everything at the global context level
472 # or we'll hit circular deps.
473
474
475 def use_inf_as_na_cb(key):
476 from pandas.core.dtypes.missing import _use_inf_as_na
477
478 _use_inf_as_na(key)
479
480
481 with cf.config_prefix("mode"):
482 cf.register_option("use_inf_as_na", False, use_inf_as_na_doc, cb=use_inf_as_na_cb)
483 cf.register_option(
484 "use_inf_as_null", False, use_inf_as_null_doc, cb=use_inf_as_na_cb
485 )
486
487 cf.deprecate_option(
488 "mode.use_inf_as_null", msg=use_inf_as_null_doc, rkey="mode.use_inf_as_na"
489 )
490
491
492 # user warnings
493 chained_assignment = """
494 : string
495 Raise an exception, warn, or take no action if trying to use chained assignment.
496 The default is warn
497 """
498
499 with cf.config_prefix("mode"):
500 cf.register_option(
501 "chained_assignment",
502 "warn",
503 chained_assignment,
504 validator=is_one_of_factory([None, "warn", "raise"]),
505 )
506
507
508 # Set up the io.excel specific reader configuration.
509 reader_engine_doc = """
510 : string
511 The default Excel reader engine for '{ext}' files. Available options:
512 auto, {others}.
513 """
514
515 _xls_options = ["xlrd"]
516 _xlsm_options = ["xlrd", "openpyxl"]
517 _xlsx_options = ["xlrd", "openpyxl"]
518 _ods_options = ["odf"]
519 _xlsb_options = ["pyxlsb"]
520
521
522 with cf.config_prefix("io.excel.xls"):
523 cf.register_option(
524 "reader",
525 "auto",
526 reader_engine_doc.format(ext="xls", others=", ".join(_xls_options)),
527 validator=str,
528 )
529
530 with cf.config_prefix("io.excel.xlsm"):
531 cf.register_option(
532 "reader",
533 "auto",
534 reader_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),
535 validator=str,
536 )
537
538
539 with cf.config_prefix("io.excel.xlsx"):
540 cf.register_option(
541 "reader",
542 "auto",
543 reader_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),
544 validator=str,
545 )
546
547
548 with cf.config_prefix("io.excel.ods"):
549 cf.register_option(
550 "reader",
551 "auto",
552 reader_engine_doc.format(ext="ods", others=", ".join(_ods_options)),
553 validator=str,
554 )
555
556 with cf.config_prefix("io.excel.xlsb"):
557 cf.register_option(
558 "reader",
559 "auto",
560 reader_engine_doc.format(ext="xlsb", others=", ".join(_xlsb_options)),
561 validator=str,
562 )
563
564 # Set up the io.excel specific writer configuration.
565 writer_engine_doc = """
566 : string
567 The default Excel writer engine for '{ext}' files. Available options:
568 auto, {others}.
569 """
570
571 _xls_options = ["xlwt"]
572 _xlsm_options = ["openpyxl"]
573 _xlsx_options = ["openpyxl", "xlsxwriter"]
574 _ods_options = ["odf"]
575
576
577 with cf.config_prefix("io.excel.xls"):
578 cf.register_option(
579 "writer",
580 "auto",
581 writer_engine_doc.format(ext="xls", others=", ".join(_xls_options)),
582 validator=str,
583 )
584
585 with cf.config_prefix("io.excel.xlsm"):
586 cf.register_option(
587 "writer",
588 "auto",
589 writer_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),
590 validator=str,
591 )
592
593
594 with cf.config_prefix("io.excel.xlsx"):
595 cf.register_option(
596 "writer",
597 "auto",
598 writer_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),
599 validator=str,
600 )
601
602
603 with cf.config_prefix("io.excel.ods"):
604 cf.register_option(
605 "writer",
606 "auto",
607 writer_engine_doc.format(ext="ods", others=", ".join(_ods_options)),
608 validator=str,
609 )
610
611
612 # Set up the io.parquet specific configuration.
613 parquet_engine_doc = """
614 : string
615 The default parquet reader/writer engine. Available options:
616 'auto', 'pyarrow', 'fastparquet'; the default is 'auto'
617 """
618
619 with cf.config_prefix("io.parquet"):
620 cf.register_option(
621 "engine",
622 "auto",
623 parquet_engine_doc,
624 validator=is_one_of_factory(["auto", "pyarrow", "fastparquet"]),
625 )
626
627 # --------
628 # Plotting
629 # ---------
630
631 plotting_backend_doc = """
632 : str
633 The plotting backend to use. The default value is "matplotlib", the
634 backend provided with pandas. Other backends can be specified by
635 providing the name of the module that implements the backend.
636 """
637
638
639 def register_plotting_backend_cb(key):
640 if key == "matplotlib":
641 # We defer matplotlib validation, since it's the default
642 return
643 from pandas.plotting._core import _get_plot_backend
644
645 _get_plot_backend(key)
646
647
648 with cf.config_prefix("plotting"):
649 cf.register_option(
650 "backend",
651 defval="matplotlib",
652 doc=plotting_backend_doc,
653 validator=register_plotting_backend_cb,
654 )
655
656
657 register_converter_doc = """
658 : bool or 'auto'.
659 Whether to register converters with matplotlib's units registry for
660 dates, times, datetimes, and Periods. Toggling to False will remove
661 the converters, restoring any converters that pandas overwrote.
662 """
663
664
665 def register_converter_cb(key):
666 from pandas.plotting import (
667 deregister_matplotlib_converters,
668 register_matplotlib_converters,
669 )
670
671 if cf.get_option(key):
672 register_matplotlib_converters()
673 else:
674 deregister_matplotlib_converters()
675
676
677 with cf.config_prefix("plotting.matplotlib"):
678 cf.register_option(
679 "register_converters",
680 "auto",
681 register_converter_doc,
682 validator=is_one_of_factory(["auto", True, False]),
683 cb=register_converter_cb,
684 )
685
[end of pandas/core/config_init.py]
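For context, `config_init.py` only registers these options at import time; user code reads and writes them through pandas' public options API. The sketch below is illustrative only and assumes a standard pandas installation; the option names are the ones registered above.

```python
import pandas as pd

# Set and read one of the "display" options registered above.
pd.set_option("display.max_rows", 20)        # validated by is_nonnegative_int
print(pd.get_option("display.max_rows"))     # -> 20

# Callbacks fire on assignment, e.g. use_inf_as_na_cb for "mode.use_inf_as_na";
# option_context restores the previous value when the block exits.
with pd.option_context("mode.use_inf_as_na", True):
    print(pd.get_option("mode.use_inf_as_na"))   # -> True inside the block
print(pd.get_option("mode.use_inf_as_na"))       # -> False again afterwards
```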
[start of pandas/util/_print_versions.py]
1 import codecs
2 import json
3 import locale
4 import os
5 import platform
6 import struct
7 import sys
8 from typing import Dict, Optional, Union
9
10 from pandas._typing import JSONSerializable
11 from pandas.compat._optional import VERSIONS, _get_version, import_optional_dependency
12
13
14 def _get_commit_hash() -> Optional[str]:
15 """
16 Use vendored versioneer code to get git hash, which handles
17 git worktree correctly.
18 """
19 from pandas._version import get_versions
20
21 versions = get_versions()
22 return versions["full-revisionid"]
23
24
25 def _get_sys_info() -> Dict[str, JSONSerializable]:
26 """
27 Returns system information as a JSON serializable dictionary.
28 """
29 uname_result = platform.uname()
30 language_code, encoding = locale.getlocale()
31 return {
32 "commit": _get_commit_hash(),
33 "python": ".".join(str(i) for i in sys.version_info),
34 "python-bits": struct.calcsize("P") * 8,
35 "OS": uname_result.system,
36 "OS-release": uname_result.release,
37 "Version": uname_result.version,
38 "machine": uname_result.machine,
39 "processor": uname_result.processor,
40 "byteorder": sys.byteorder,
41 "LC_ALL": os.environ.get("LC_ALL"),
42 "LANG": os.environ.get("LANG"),
43 "LOCALE": {"language-code": language_code, "encoding": encoding},
44 }
45
46
47 def _get_dependency_info() -> Dict[str, JSONSerializable]:
48 """
49 Returns dependency information as a JSON serializable dictionary.
50 """
51 deps = [
52 "pandas",
53 # required
54 "numpy",
55 "pytz",
56 "dateutil",
57 # install / build,
58 "pip",
59 "setuptools",
60 "Cython",
61 # test
62 "pytest",
63 "hypothesis",
64 # docs
65 "sphinx",
66 # Other, need a min version
67 "blosc",
68 "feather",
69 "xlsxwriter",
70 "lxml.etree",
71 "html5lib",
72 "pymysql",
73 "psycopg2",
74 "jinja2",
75 # Other, not imported.
76 "IPython",
77 "pandas_datareader",
78 ]
79 deps.extend(list(VERSIONS))
80
81 result: Dict[str, JSONSerializable] = {}
82 for modname in deps:
83 mod = import_optional_dependency(
84 modname, raise_on_missing=False, on_version="ignore"
85 )
86 result[modname] = _get_version(mod) if mod else None
87 return result
88
89
90 def show_versions(as_json: Union[str, bool] = False) -> None:
91 """
92 Provide useful information, important for bug reports.
93
94 It comprises info about the host operating system, the pandas version,
95 and the versions of other installed related packages.
96
97 Parameters
98 ----------
99 as_json : str or bool, default False
100 * If False, outputs info in a human readable form to the console.
101 * If str, it will be considered as a path to a file.
102 Info will be written to that file in JSON format.
103 * If True, outputs info in JSON format to the console.
104 """
105 sys_info = _get_sys_info()
106 deps = _get_dependency_info()
107
108 if as_json:
109 j = dict(system=sys_info, dependencies=deps)
110
111 if as_json is True:
112 print(j)
113 else:
114 assert isinstance(as_json, str) # needed for mypy
115 with codecs.open(as_json, "wb", encoding="utf8") as f:
116 json.dump(j, f, indent=2)
117
118 else:
119 assert isinstance(sys_info["LOCALE"], dict) # needed for mypy
120 language_code = sys_info["LOCALE"]["language-code"]
121 encoding = sys_info["LOCALE"]["encoding"]
122 sys_info["LOCALE"] = f"{language_code}.{encoding}"
123
124 maxlen = max(len(x) for x in deps)
125 print("\nINSTALLED VERSIONS")
126 print("------------------")
127 for k, v in sys_info.items():
128 print(f"{k:<{maxlen}}: {v}")
129 print("")
130 for k, v in deps.items():
131 print(f"{k:<{maxlen}}: {v}")
132
133
134 def main() -> int:
135 from optparse import OptionParser
136
137 parser = OptionParser()
138 parser.add_option(
139 "-j",
140 "--json",
141 metavar="FILE",
142 nargs=1,
143 help="Save output as JSON into file, pass in '-' to output to stdout",
144 )
145
146 (options, args) = parser.parse_args()
147
148 if options.json == "-":
149 options.json = True
150
151 show_versions(as_json=options.json)
152
153 return 0
154
155
156 if __name__ == "__main__":
157 sys.exit(main())
158
[end of pandas/util/_print_versions.py]
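The module above backs the public `pd.show_versions` helper; a minimal usage sketch follows (the JSON file name is purely illustrative).

```python
import pandas as pd

# Human-readable report to the console (the `else` branch of show_versions above).
pd.show_versions()

# The same information written as JSON; "env_report.json" is an illustrative name.
pd.show_versions(as_json="env_report.json")
```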
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
85793fb884e8838722945114ed525f93a30349ad
|
BUG: GroupBy().fillna() performance regression
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
```python
import pandas as pd
import numpy as np
N = 2000
df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
df = df.sort_values("A").set_index("A")
df["B"] = df.groupby("A")["B"].fillna(method="ffill")
```
#### Problem description
The groupby + fillna call gets extremely slow as N increases.
This is a regression from 1.0.5->1.1.0.
Note: if I remove the `.set_index("A")` it's fast again.
#### Expected Output
Same output, just faster.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : d9fff2792bf16178d4e450fe7384244e50635733
python : 3.7.8.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.110-1.el7.elrepo.x86_64
Version : #1 SMP Fri Jan 5 11:35:48 EST 2018
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 49.6.0.post20200917
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
|
Thanks @alippai for the report, can confirm this reproduces
on master:
```python
In [5]: import pandas as pd
...: import numpy as np
...:
...: N = 2000
...: df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
...: df = df.sort_values("A").set_index("A")
...: %time df.groupby("A")["B"].fillna(method="ffill")
CPU times: user 1.09 s, sys: 571 ms, total: 1.66 s
Wall time: 1.66 s
Out[5]:
A
1 NaN
1 1.0
1 1.0
1 1.0
1 1.0
...
1 1.0
1 1.0
1 1.0
1 1.0
1 1.0
Name: B, Length: 2000, dtype: float64
```
on 1.0.5:
```python
In [8]: import pandas as pd
...: import numpy as np
...:
...: N = 2000
...: df = pd.DataFrame({"A": [1] * N, "B": [np.nan, 1.0] * (N // 2)})
...: df = df.sort_values("A").set_index("A")
...:
...: %time df.groupby("A")["B"].fillna(method="ffill")
CPU times: user 3.99 ms, sys: 0 ns, total: 3.99 ms
Wall time: 3.39 ms
Out[8]:
A
1 NaN
1 1.0
1 1.0
1 1.0
1 1.0
...
1 1.0
1 1.0
1 1.0
1 1.0
1 1.0
Name: B, Length: 2000, dtype: float64
```
`ffill()` is fast, but the output is different: https://github.com/pandas-dev/pandas/issues/34725
For larger N (starting from 10k) this never completes; can we consider adding back the `Bug` label? It looks like quadratic complexity or worse.
Running profiler gives:
```
243385 function calls (235532 primitive calls) in 10.477 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1632/421 7.849 0.005 7.882 0.019 {built-in method numpy.core._multiarray_umath.implement_array_function}
1 2.054 2.054 9.936 9.936 {method 'get_indexer_non_unique' of 'pandas._libs.index.IndexEngine' objects}
375 0.048 0.000 0.048 0.000 {built-in method marshal.loads}
377 0.042 0.000 0.042 0.000 {method 'read' of '_io.BufferedReader' objects}
1 0.042 0.042 0.042 0.042 {method 'unique' of 'pandas._libs.hashtable.Int64HashTable' objects}
83/81 0.037 0.000 0.039 0.000 {built-in method _imp.create_dynamic}
1 0.022 0.022 10.002 10.002 groupby.py:1167(_concat_objects)
410 0.020 0.000 0.020 0.000 {built-in method builtins.compile}
```
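For reference, a profile like the one above can presumably be collected with the standard-library profiler; this is only a sketch and assumes the `df` built in the reproduction snippet earlier in this report.

```python
import cProfile
import pstats

# Profile the slow call and dump the stats to a file; assumes `df` already exists.
cProfile.run('df.groupby("A")["B"].fillna(method="ffill")', "fillna.prof")
pstats.Stats("fillna.prof").sort_stats("tottime").print_stats(10)
```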
> `ffill()` is fast, but the output is different: https://github.com/pandas-dev/pandas/issues/34725
@alippai #34725 is fixed now if that helps
take
|
2020-10-15T22:52:55Z
|
<patch>
diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py
--- a/asv_bench/benchmarks/groupby.py
+++ b/asv_bench/benchmarks/groupby.py
@@ -358,6 +358,26 @@ def time_category_size(self):
self.draws.groupby(self.cats).size()
+class FillNA:
+ def setup(self):
+ N = 100
+ self.df = DataFrame(
+ {"group": [1] * N + [2] * N, "value": [np.nan, 1.0] * N}
+ ).set_index("group")
+
+ def time_df_ffill(self):
+ self.df.groupby("group").fillna(method="ffill")
+
+ def time_df_bfill(self):
+ self.df.groupby("group").fillna(method="bfill")
+
+ def time_srs_ffill(self):
+ self.df.groupby("group")["value"].fillna(method="ffill")
+
+ def time_srs_bfill(self):
+ self.df.groupby("group")["value"].fillna(method="bfill")
+
+
class GroupByMethods:
param_names = ["dtype", "method", "application"]
diff --git a/doc/source/whatsnew/v1.1.4.rst b/doc/source/whatsnew/v1.1.4.rst
--- a/doc/source/whatsnew/v1.1.4.rst
+++ b/doc/source/whatsnew/v1.1.4.rst
@@ -29,6 +29,7 @@ Bug fixes
~~~~~~~~~
- Bug causing ``groupby(...).sum()`` and similar to not preserve metadata (:issue:`29442`)
- Bug in :meth:`Series.isin` and :meth:`DataFrame.isin` raising a ``ValueError`` when the target was read-only (:issue:`37174`)
+- Bug in :meth:`GroupBy.fillna` that introduced a performance regression after 1.0.5 (:issue:`36757`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1196,12 +1196,12 @@ def reset_identity(values):
# when the ax has duplicates
# so we resort to this
# GH 14776, 30667
- if ax.has_duplicates:
+ if ax.has_duplicates and not result.axes[self.axis].equals(ax):
indexer, _ = result.index.get_indexer_non_unique(ax.values)
indexer = algorithms.unique1d(indexer)
result = result.take(indexer, axis=self.axis)
else:
- result = result.reindex(ax, axis=self.axis)
+ result = result.reindex(ax, axis=self.axis, copy=False)
elif self.group_keys:
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-6650
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Speed up DatetimeConverter for plotting
I've recently started using pandas (impressed so far!) and found that plotting large data sets (from around 100k samples) is quite slow. I traced the bottleneck to the _dt_to_float_ordinal helper function called by DatetimeConverter (https://github.com/pydata/pandas/blob/master/pandas/tseries/converter.py#L144).
More specifically, this function uses matplotlib's date2num, which converts arrays and iterables using a slow list comprehension. Since pandas seems to natively store datetimes as epoch+nanoseconds in an int64 array, it would be much faster to use matplotlib's vectorized epoch2num instead. In a testcase with 1 million points, using epoch2num is about 100 times faster than date2num:
``` python
from pandas import date_range, DataFrame
from numpy import int64, arange
from matplotlib import pyplot, dates
import time
n = 1e6
df = DataFrame(arange(n), index = date_range('20130101', periods=n, freq='S'))
start = time.time()
pyplot.plot(df.index, df)
print('date2num took {0:g}s'.format(time.time() - start))
pyplot.show()
# monkey patch
import pandas.tseries.converter
def _my_dt_to_float_ordinal(dt):
try:
base = dates.epoch2num(dt.astype(int64) / 1.0E9)
except AttributeError:
base = dates.date2num(dt)
return base
pandas.tseries.converter._dt_to_float_ordinal = _my_dt_to_float_ordinal
start = time.time()
pyplot.plot(df.index, df)
print('epoch2num took {0:g}s'.format(time.time() - start))
pyplot.show()
```
Unfortunately, I am not familiar enough with pandas to know whether date2num is used intentionally, or to implement a proper patch myself that works in all cases.
</issue>
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 
4
5 [](http://scatterci.github.io/pydata/pandas)
6
7 ## What is it
8
9 **pandas** is a Python package providing fast, flexible, and expressive data
10 structures designed to make working with "relational" or "labeled" data both
11 easy and intuitive. It aims to be the fundamental high-level building block for
12 doing practical, **real world** data analysis in Python. Additionally, it has
13 the broader goal of becoming **the most powerful and flexible open source data
14 analysis / manipulation tool available in any language**. It is already well on
15 its way toward this goal.
16
17 ## Main Features
18 Here are just a few of the things that pandas does well:
19
20 - Easy handling of [**missing data**][missing-data] (represented as
21 `NaN`) in floating point as well as non-floating point data
22 - Size mutability: columns can be [**inserted and
23 deleted**][insertion-deletion] from DataFrame and higher dimensional
24 objects
25 - Automatic and explicit [**data alignment**][alignment]: objects can
26 be explicitly aligned to a set of labels, or the user can simply
27 ignore the labels and let `Series`, `DataFrame`, etc. automatically
28 align the data for you in computations
29 - Powerful, flexible [**group by**][groupby] functionality to perform
30 split-apply-combine operations on data sets, for both aggregating
31 and transforming data
32 - Make it [**easy to convert**][conversion] ragged,
33 differently-indexed data in other Python and NumPy data structures
34 into DataFrame objects
35 - Intelligent label-based [**slicing**][slicing], [**fancy
36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
37 large data sets
38 - Intuitive [**merging**][merging] and [**joining**][joining] data
39 sets
40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
41 data sets
42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
43 labels per tick)
44 - Robust IO tools for loading data from [**flat files**][flat-files]
45 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
47 - [**Time series**][timeseries]-specific functionality: date range
48 generation and frequency conversion, moving window statistics,
49 moving window linear regressions, date shifting and lagging, etc.
50
51
52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
70
71 ## Where to get it
72 The source code is currently hosted on GitHub at:
73 http://github.com/pydata/pandas
74
75 Binary installers for the latest released version are available at the Python
76 package index
77
78 http://pypi.python.org/pypi/pandas/
79
80 And via `easy_install`:
81
82 ```sh
83 easy_install pandas
84 ```
85
86 or `pip`:
87
88 ```sh
89 pip install pandas
90 ```
91
92 ## Dependencies
93 - [NumPy](http://www.numpy.org): 1.6.1 or higher
94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
95 - [pytz](http://pytz.sourceforge.net)
96 - Needed for time zone support with ``pandas.date_range``
97
98 ### Highly Recommended Dependencies
99 - [numexpr](http://code.google.com/p/numexpr/)
100 - Needed to accelerate some expression evaluation operations
101 - Required by PyTables
102 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
103 - Needed to accelerate certain numerical operations
104
105 ### Optional dependencies
106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
111 - [statsmodels](http://statsmodels.sourceforge.net/)
112 - Needed for parts of `pandas.stats`
113 - For Excel I/O:
114 - [xlrd/xlwt](http://www.python-excel.org/)
115 - Excel reading (xlrd) and writing (xlwt)
116 - [openpyxl](http://packages.python.org/openpyxl/)
117 - openpyxl version 1.6.1 or higher, for writing .xlsx files
118 - xlrd >= 0.9.0
119 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
120 - Alternative Excel writer.
121 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/)
122 - Needed for `pandas.io.gbq`
123 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
124 - One of the following combinations of libraries is needed to use the
125 top-level [`pandas.read_html`][read-html-docs] function:
126 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
127 recent version of [html5lib][html5lib] is okay.)
128 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
129 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
130 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
131 for reasons as to why you should probably **not** take this approach.
132
133 #### Notes about HTML parsing libraries
134 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
135 either [lxml][lxml] or [html5lib][html5lib] or both.
136 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
137 installed.
138 - You are strongly encouraged to read [HTML reading
139 gotchas][html-gotchas]. It explains issues surrounding the
140 installation and usage of the above three libraries.
141 - You may need to install an older version of
142 [BeautifulSoup4][BeautifulSoup4]:
143 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
144 32-bit Ubuntu/Debian
145 - Additionally, if you're using [Anaconda][Anaconda] you should
146 definitely read [the gotchas about HTML parsing
147 libraries][html-gotchas]
148 - If you're on a system with `apt-get` you can do
149
150 ```sh
151 sudo apt-get build-dep python-lxml
152 ```
153
154 to get the necessary dependencies for installation of [lxml][lxml].
155 This will prevent further headaches down the line.
156
157 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
158 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
159 [lxml]: http://lxml.de
160 [Anaconda]: https://store.continuum.io/cshop/anaconda
161 [NumPy]: http://numpy.scipy.org/
162 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
163 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
164
165 ## Installation from sources
166 To install pandas from source you need Cython in addition to the normal
167 dependencies above. Cython can be installed from pypi:
168
169 ```sh
170 pip install cython
171 ```
172
173 In the `pandas` directory (same one where you found this file after
174 cloning the git repo), execute:
175
176 ```sh
177 python setup.py install
178 ```
179
180 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html):
181
182 ```sh
183 python setup.py develop
184 ```
185
186 Alternatively, you can use `pip` if you want all the dependencies pulled
187 in automatically (the `-e` option is for installing it in [development
188 mode](http://www.pip-installer.org/en/latest/usage.html)):
189
190 ```sh
191 pip install -e .
192 ```
193
194 On Windows, you will need to install MinGW and execute:
195
196 ```sh
197 python setup.py build --compiler=mingw32
198 python setup.py install
199 ```
200
201 See http://pandas.pydata.org/ for more information.
202
203 ## License
204 BSD
205
206 ## Documentation
207 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
208
209 The Sphinx documentation should provide a good starting point for learning how
210 to use the library. Expect the docs to continue to expand as time goes on.
211
212 ## Background
213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
214 has been under active development since then.
215
216 ## Discussion and Development
217 Since pandas development is related to a number of other scientific
218 Python projects, questions are welcome on the scipy-user mailing
219 list. Specialized discussions or design issues should take place on
220 the pystatsmodels mailing list / Google group, where
221 ``scikits.statsmodels`` and other libraries will also be discussed:
222
223 http://groups.google.com/group/pystatsmodels
224
[end of README.md]
[start of pandas/core/config_init.py]
1 """
2 This module is imported from the pandas package __init__.py file
3 in order to ensure that the core.config options registered here will
4 be available as soon as the user loads the package. if register_option
5 is invoked inside specific modules, they will not be registered until that
6 module is imported, which may or may not be a problem.
7
8 If you need to make sure options are available even before a certain
9 module is imported, register them here rather then in the module.
10
11 """
12
13 import pandas.core.config as cf
14 from pandas.core.config import (is_int, is_bool, is_text, is_float,
15 is_instance_factory, is_one_of_factory,
16 get_default_val)
17 from pandas.core.format import detect_console_encoding
18
19
20 #
21 # options from the "display" namespace
22
23 pc_precision_doc = """
24 : int
25 Floating point output precision (number of significant digits). This is
26 only a suggestion
27 """
28
29 pc_colspace_doc = """
30 : int
31 Default space for DataFrame columns.
32 """
33
34 pc_max_rows_doc = """
35 : int
36 This sets the maximum number of rows pandas should output when printing
37 out various output. For example, this value determines whether the repr()
38 for a dataframe prints out fully or just a summary repr.
39 'None' value means unlimited.
40 """
41
42 pc_max_cols_doc = """
43 : int
44 max_rows and max_columns are used in __repr__() methods to decide if
45 to_string() or info() is used to render an object to a string. In case
46 python/IPython is running in a terminal this can be set to 0 and pandas
47 will correctly auto-detect the width of the terminal and swap to a smaller
48 format in case all columns would not fit vertically. The IPython notebook,
49 IPython qtconsole, or IDLE do not run in a terminal and hence it is not
50 possible to do correct auto-detection.
51 'None' value means unlimited.
52 """
53
54 pc_max_info_cols_doc = """
55 : int
56 max_info_columns is used in DataFrame.info method to decide if
57 per column information will be printed.
58 """
59
60 pc_nb_repr_h_doc = """
61 : boolean
62 When True, IPython notebook will use html representation for
63 pandas objects (if it is available).
64 """
65
66 pc_date_dayfirst_doc = """
67 : boolean
68 When True, prints and parses dates with the day first, eg 20/01/2005
69 """
70
71 pc_date_yearfirst_doc = """
72 : boolean
73 When True, prints and parses dates with the year first, eg 2005/01/20
74 """
75
76 pc_pprint_nest_depth = """
77 : int
78 Controls the number of nested levels to process when pretty-printing
79 """
80
81 pc_multi_sparse_doc = """
82 : boolean
83 "sparsify" MultiIndex display (don't display repeated
84 elements in outer levels within groups)
85 """
86
87 pc_encoding_doc = """
88 : str/unicode
89 Defaults to the detected encoding of the console.
90 Specifies the encoding to be used for strings returned by to_string;
91 these are generally strings meant to be displayed on the console.
92 """
93
94 float_format_doc = """
95 : callable
96 The callable should accept a floating point number and return
97 a string with the desired format of the number. This is used
98 in some places like SeriesFormatter.
99 See core.format.EngFormatter for an example.
100
101 """
102
103 max_colwidth_doc = """
104 : int
105 The maximum width in characters of a column in the repr of
106 a pandas data structure. When the column overflows, a "..."
107 placeholder is embedded in the output.
108 """
109
110 colheader_justify_doc = """
111 : 'left'/'right'
112 Controls the justification of column headers. used by DataFrameFormatter.
113 """
114
115 pc_expand_repr_doc = """
116 : boolean
117 Whether to print out the full DataFrame repr for wide DataFrames across
118 multiple lines; `max_columns` is still respected, but the output will
119 wrap-around across multiple "pages" if its width exceeds `display.width`.
120 """
121
122 pc_show_dimensions_doc = """
123 : boolean
124 Whether to print out dimensions at the end of DataFrame repr.
125 """
126
127 pc_line_width_doc = """
128 : int
129 Deprecated.
130 """
131
132 pc_line_width_deprecation_warning = """\
133 line_width has been deprecated, use display.width instead (currently both are
134 identical)
135 """
136
137 pc_height_deprecation_warning = """\
138 height has been deprecated.
139 """
140
141 pc_width_doc = """
142 : int
143 Width of the display in characters. In case python/IPython is running in
144 a terminal this can be set to None and pandas will correctly auto-detect
145 the width.
146 Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
147 terminal and hence it is not possible to correctly detect the width.
148 """
149
150 pc_height_doc = """
151 : int
152 Deprecated.
153 """
154
155 pc_chop_threshold_doc = """
156 : float or None
157 if set to a float value, all float values smaller than the given threshold
158 will be displayed as exactly 0 by repr and friends.
159 """
160
161 pc_max_seq_items = """
162 : int or None
163
164 when pretty-printing a long sequence, no more than `max_seq_items`
165 will be printed. If items are omitted, they will be denoted by the
166 addition of "..." to the resulting string.
167
168 If set to None, the number of items to be printed is unlimited.
169 """
170
171 pc_max_info_rows_doc = """
172 : int or None
173 df.info() will usually show null-counts for each column.
174 For large frames this can be quite slow. max_info_rows and max_info_cols
175 limit this null check only to frames with smaller dimensions than specified.
176 """
177
178 pc_large_repr_doc = """
179 : 'truncate'/'info'
180
181 For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
182 show a truncated table (the default from 0.13), or switch to the view from
183 df.info() (the behaviour in earlier versions of pandas).
184 """
185
186 pc_mpl_style_doc = """
187 : bool
188
189 Setting this to 'default' will modify the rcParams used by matplotlib
190 to give plots a more pleasing visual style by default.
191 Setting this to None/False restores the values to their initial value.
192 """
193
194 style_backup = dict()
195
196
197 def mpl_style_cb(key):
198 import sys
199 from pandas.tools.plotting import mpl_stylesheet
200 global style_backup
201
202 val = cf.get_option(key)
203
204 if 'matplotlib' not in sys.modules.keys():
205 if not(val): # starting up, we get reset to None
206 return val
207 raise Exception("matplotlib has not been imported. aborting")
208
209 import matplotlib.pyplot as plt
210
211 if val == 'default':
212 style_backup = dict([(k, plt.rcParams[k]) for k in mpl_stylesheet])
213 plt.rcParams.update(mpl_stylesheet)
214 elif not val:
215 if style_backup:
216 plt.rcParams.update(style_backup)
217
218 return val
219
220 with cf.config_prefix('display'):
221 cf.register_option('precision', 7, pc_precision_doc, validator=is_int)
222 cf.register_option('float_format', None, float_format_doc)
223 cf.register_option('column_space', 12, validator=is_int)
224 cf.register_option('max_info_rows', 1690785, pc_max_info_rows_doc,
225 validator=is_instance_factory((int, type(None))))
226 cf.register_option('max_rows', 60, pc_max_rows_doc,
227 validator=is_instance_factory([type(None), int]))
228 cf.register_option('max_colwidth', 50, max_colwidth_doc, validator=is_int)
229 cf.register_option('max_columns', 20, pc_max_cols_doc,
230 validator=is_instance_factory([type(None), int]))
231 cf.register_option('large_repr', 'truncate', pc_large_repr_doc,
232 validator=is_one_of_factory(['truncate', 'info']))
233 cf.register_option('max_info_columns', 100, pc_max_info_cols_doc,
234 validator=is_int)
235 cf.register_option('colheader_justify', 'right', colheader_justify_doc,
236 validator=is_text)
237 cf.register_option('notebook_repr_html', True, pc_nb_repr_h_doc,
238 validator=is_bool)
239 cf.register_option('date_dayfirst', False, pc_date_dayfirst_doc,
240 validator=is_bool)
241 cf.register_option('date_yearfirst', False, pc_date_yearfirst_doc,
242 validator=is_bool)
243 cf.register_option('pprint_nest_depth', 3, pc_pprint_nest_depth,
244 validator=is_int)
245 cf.register_option('multi_sparse', True, pc_multi_sparse_doc,
246 validator=is_bool)
247 cf.register_option('encoding', detect_console_encoding(), pc_encoding_doc,
248 validator=is_text)
249 cf.register_option('expand_frame_repr', True, pc_expand_repr_doc)
250 cf.register_option('show_dimensions', True, pc_show_dimensions_doc)
251 cf.register_option('chop_threshold', None, pc_chop_threshold_doc)
252 cf.register_option('max_seq_items', 100, pc_max_seq_items)
253 cf.register_option('mpl_style', None, pc_mpl_style_doc,
254 validator=is_one_of_factory([None, False, 'default']),
255 cb=mpl_style_cb)
256 cf.register_option('height', 60, pc_height_doc,
257 validator=is_instance_factory([type(None), int]))
258 cf.register_option('width', 80, pc_width_doc,
259 validator=is_instance_factory([type(None), int]))
260 # redirected to width, make defval identical
261 cf.register_option('line_width', get_default_val('display.width'),
262 pc_line_width_doc)
263
264 cf.deprecate_option('display.line_width',
265 msg=pc_line_width_deprecation_warning,
266 rkey='display.width')
267
268 cf.deprecate_option('display.height',
269 msg=pc_height_deprecation_warning,
270 rkey='display.max_rows')
271
272 tc_sim_interactive_doc = """
273 : boolean
274 Whether to simulate interactive mode for purposes of testing
275 """
276 with cf.config_prefix('mode'):
277 cf.register_option('sim_interactive', False, tc_sim_interactive_doc)
278
279 use_inf_as_null_doc = """
280 : boolean
281 True means treat None, NaN, INF, -INF as null (old way),
282 False means None and NaN are null, but INF, -INF are not null
283 (new way).
284 """
285
286 # We don't want to start importing everything at the global context level
287 # or we'll hit circular deps.
288
289
290 def use_inf_as_null_cb(key):
291 from pandas.core.common import _use_inf_as_null
292 _use_inf_as_null(key)
293
294 with cf.config_prefix('mode'):
295 cf.register_option('use_inf_as_null', False, use_inf_as_null_doc,
296 cb=use_inf_as_null_cb)
297
298
299 # user warnings
300 chained_assignment = """
301 : string
302 Raise an exception, warn, or take no action if trying to use chained assignment.
303 The default is warn
304 """
305
306 with cf.config_prefix('mode'):
307 cf.register_option('chained_assignment', 'warn', chained_assignment,
308 validator=is_one_of_factory([None, 'warn', 'raise']))
309
310
311 # Set up the io.excel specific configuration.
312 writer_engine_doc = """
313 : string
314 The default Excel writer engine for '{ext}' files. Available options:
315 '{default}' (the default){others}.
316 """
317
318 with cf.config_prefix('io.excel'):
319 # going forward, will be additional writers
320 for ext, options in [('xls', ['xlwt']),
321 ('xlsm', ['openpyxl'])]:
322 default = options.pop(0)
323 if options:
324 options = " " + ", ".join(options)
325 else:
326 options = ""
327 doc = writer_engine_doc.format(ext=ext, default=default,
328 others=options)
329 cf.register_option(ext + '.writer', default, doc, validator=str)
330
331 def _register_xlsx(engine, other):
332 cf.register_option('xlsx.writer', engine,
333 writer_engine_doc.format(ext='xlsx',
334 default=engine,
335 others=", '%s'" % other),
336 validator=str)
337
338 try:
339 # better memory footprint
340 import xlsxwriter
341 _register_xlsx('xlsxwriter', 'openpyxl')
342 except ImportError:
343 # fallback
344 _register_xlsx('openpyxl', 'xlsxwriter')
345
[end of pandas/core/config_init.py]
[start of pandas/io/gbq.py]
1 """
2 Pandas module to interface with Google BigQuery.
3 """
4 import os
5 import sys
6 import tempfile
7 import csv
8 import logging
9 from datetime import datetime
10 import pkg_resources
11 from distutils.version import LooseVersion
12 from pandas.compat import u
13
14 import pandas as pd
15 import numpy as np
16
17 from pandas.core.common import PandasError
18 from pandas.core.frame import DataFrame
19 from pandas.tools.merge import concat
20
21 try:
22 import bq
23 import bigquery_client
24 import gflags as flags
25 _BQ_INSTALLED = True
26
27 _BQ_VERSION = pkg_resources.get_distribution('bigquery').version
28 if LooseVersion(_BQ_VERSION) >= '2.0.17':
29 _BQ_VALID_VERSION = True
30 else:
31 _BQ_VALID_VERSION = False
32
33 except ImportError:
34 _BQ_INSTALLED = False
35
36
37 # Setup the logger
38 logger = logging.getLogger('pandas.io.gbq')
39
40 # These are some custom exceptions that the
41 # to_gbq() method can throw
42
43
44 class SchemaMissing(PandasError, IOError):
45 """
46 Raised when attempting to write a DataFrame to
47 a new table in Google BigQuery without specifying
48 a schema describing the DataFrame.
49 """
50 pass
51
52
53 class InvalidSchema(PandasError, IOError):
54 """
55 Raised when attempting to write a DataFrame to
56 Google BigQuery with an invalid table schema.
57 """
58 pass
59
60
61 class TableExistsFail(PandasError, IOError):
62 """
63 Raised when attempting to write a DataFrame to
64 an existing Google BigQuery table without specifying
65 that a replace/update action be taken.
66 """
67 pass
68
69
70 class InvalidColumnOrder(PandasError, IOError):
71 """
72 Raised when the provided column order for output
73 results DataFrame does not match the schema
74 returned by BigQuery.
75 """
76 pass
77
78
79 def _authenticate():
80 """
81 For testing, we abstract the authentication to BigQuery API.
82 Presently this is implemented using the bq.py Client.Get()
83 method. Any exceptions raised are considered fatal, so we
84 do not process them.
85
86 Returns
87 -------
88 BigqueryClient : Configured connection to Google BigQuery
89 """
90 return bq.Client.Get()
91
92
93 def _parse_entry(field_value, field_type):
94 """
95 Given a value and the corresponding BigQuery data type,
96 perform any operations needed and return in a format
97 appropriate for a numpy record dictionary
98
99 Parameters
100 ----------
101 field_value : Source object to be transformed
102 field_type : String representation of Google BigQuery
103 data type (per schema)
104
105 Returns
106 -------
107 field_value : object or primitive of type corresponding
108 to field_type
109 """
110
111 # Avoid any casting problems
112 if field_value is None or field_value == 'null':
113 return None
114 if field_type == 'INTEGER' or field_type == 'FLOAT':
115 field_value = float(field_value)
116 elif field_type == 'TIMESTAMP':
117 timestamp = datetime.utcfromtimestamp(float(field_value))
118 field_value = np.datetime64(timestamp)
119 elif field_type == 'BOOLEAN':
120 field_value = field_value == 'true'
121 elif field_type == 'STRING':
122 field_value = field_value
123 else:
124 field_value = str(field_value)
125 return field_value
126
127
128 def _parse_page(raw_page, col_names, col_types, col_dtypes):
129 """
130 Given a list of rows produced by the client.apiclient.tabledata().list(),
131 build a numpy array with proper dtypes and column names as specified
132 by the arguments.
133
134 Parameters
135 ----------
136 raw_page : Resulting list of rows from a page retrieved via
137 bigquery API
138 client.apiclient.tabledata().list().execute()['rows']
139 col_names: An ordered list of names for the columns
140 col_types: String representation of the BigQuery DataType for that
141 column
142 col_dtypes: Target numpy.dtype for the column
143
144 Returns
145 -------
146 page_array : numpy record array corresponding
147 to the page data
148 """
149
150 # Should be at most 100,000 per the API, but this could
151 # be increased in the future. Should only be less than
152 # this for the last page to reduce API calls
153 page_row_count = len(raw_page)
154
155 # Place to hold the results for a page of data
156 page_array = np.zeros((page_row_count,), dtype=zip(col_names, col_dtypes))
157 for row_num, raw_row in enumerate(raw_page):
158 entries = raw_row.get('f', [])
159 # Iterate over each entry - setting proper field types
160 for col_num, field_type in enumerate(col_types):
161 # Process the field's types using schema
162 field_value = _parse_entry(entries[col_num].get('v', ''),
163 field_type)
164 # Fill the value into the final array
165 page_array[row_num][col_num] = field_value
166
167 return page_array
168
169
170 def _parse_data(client, job, index_col=None, col_order=None):
171 """
172 Iterate through the query results and piece together the
173 final DataFrame. Builds a DataFrame for each page of
174 results, then concatenates them together when finished.
175 To save memory, we use numpy record arrays to build these
176 DataFrames.
177
178 Parameters
179 ----------
180 client: An instance of bq.Client
181 job: An array containing the job info for a completed query
182 index_col: str (optional)
183 Name of result column to use for index in results DataFrame
184 col_order: list() (optional)
185 List of BigQuery column names in the desired order for results
186 DataFrame
187
188 Returns
189 -------
190 df: pandas DataFrame
191 DataFrame representing results of query
192
193 Raises:
194 ------
195 InvalidColumnOrder:
196 Raised if 'col_order' parameter doesn't match returned DataFrame
197 BigqueryError:
198 Raised by bigquery_client if a Google API error is encountered
199
200
201 Notes:
202 -----
203 This script relies on Google being consistent with their
204 pagination API. We are using the most flexible iteration method
205 that we could find in the bq.py/bigquery_client.py APIs, but
206 these have undergone large amounts of change recently.
207 """
208
209 # dtype Map -
210 # see: http://pandas.pydata.org/pandas-docs/dev/missing_data.html#missing-data-casting-rules-and-indexing
211 dtype_map = {'INTEGER': np.dtype(float),
212 'FLOAT': np.dtype(float),
213 'TIMESTAMP': 'M8[ns]'} # This seems to be buggy without
214 # nanosecond indicator
215
216 # We first need the schema to get information about the columns of
217 # our dataframe.
218
219 table_dict = job['configuration']['query']['destinationTable']
220 fields = client.GetTableSchema(table_dict)['fields']
221
222 # Get the schema into a format useable to create our
223 # dataframe
224 col_dtypes = []
225 col_types = []
226 col_names = []
227
228 # TODO: Do this in one clean step
229 for field in fields:
230 col_types.append(field['type'])
231 # Note the encoding... numpy doesn't like titles that are UTF8, which
232 # is the return type from the API
233 col_names.append(field['name'].encode('ascii', 'ignore'))
234 # Note, it would be nice to use 'str' types, but BigQuery doesn't have
235 # a fixed length in mind - just maxes out at 64k
236 col_dtypes.append(dtype_map.get(field['type'], object))
237
238 # How many columns are there
239 num_columns = len(col_names)
240
241 # Iterate over the result rows.
242 # Since Google's API now requires pagination of results,
243 # we do that here. The following is repurposed from
244 # bigquery_client.py :: Client._JobTableReader._ReadOnePage
245
246 # TODO: Enable Reading From Table,
247 # see Client._TableTableReader._ReadOnePage
248
249 # Initially, no page token is set
250 page_token = None
251
252 # This number is the current max results per page
253 max_rows = bigquery_client._MAX_ROWS_PER_REQUEST
254
255 # How many rows in result set? Initialize to max_rows
256 total_rows = max_rows
257
258 # This is the starting row for a particular page...
259 # is ignored if page_token is present, though
260 # it may be useful if we wish to implement SQL like LIMITs
261 # with minimums
262 start_row = 0
263
264 # Keep our page DataFrames until the end when we concatenate them
265 dataframe_list = list()
266
267 current_job = job['jobReference']
268
269 # Iterate over all rows
270 while start_row < total_rows:
271 # Setup the parameters for getQueryResults() API Call
272 kwds = dict(current_job)
273 kwds['maxResults'] = max_rows
274 # Sets the timeout to 0 because we assume the table is already ready.
275 # This is because our previous call to Query() is synchronous
276 # and will block until it's actually done
277 kwds['timeoutMs'] = 0
278 # Use start row if there's no page_token ... in other words, the
279 # user requested to start somewhere other than the beginning...
280 # presently this is not a parameter to read_gbq(), but it will be
281 # added eventually.
282 if page_token:
283 kwds['pageToken'] = page_token
284 else:
285 kwds['startIndex'] = start_row
286 data = client.apiclient.jobs().getQueryResults(**kwds).execute()
287 if not data['jobComplete']:
288 raise bigquery_client.BigqueryError('Job was not completed, or was invalid')
289
290 # How many rows are there across all pages?
291 # Note: This is presently the only reason we don't just use
292 # _ReadOnePage() directly
293 total_rows = int(data['totalRows'])
294
295 page_token = data.get('pageToken', None)
296 raw_page = data.get('rows', [])
297 page_array = _parse_page(raw_page, col_names, col_types, col_dtypes)
298
299 start_row += len(raw_page)
300 if total_rows > 0:
301 completed = (100 * start_row) / total_rows
302 logger.info('Remaining Rows: ' + str(total_rows - start_row) + '('
303 + str(completed) + '% Complete)')
304 else:
305 logger.info('No Rows')
306
307 dataframe_list.append(DataFrame(page_array))
308
309 # Did we get enough rows? Note: gbq.py stopped checking for this
310 # but we felt it was still a good idea.
311 if not page_token and not raw_page and start_row != total_rows:
312 raise bigquery_client.BigqueryInterfaceError(
313 'Not enough rows returned by server. Expected: {0} Rows, But '
314 'Received {1}'.format(total_rows, start_row)
315 )
316
317 # Build final dataframe
318 final_df = concat(dataframe_list, ignore_index=True)
319
320 # Reindex the DataFrame on the provided column
321 if index_col is not None:
322 if index_col in col_names:
323 final_df.set_index(index_col, inplace=True)
324 col_names.remove(index_col)
325 else:
326 raise InvalidColumnOrder(
327 'Index column "{0}" does not exist in DataFrame.'
328 .format(index_col)
329 )
330
331 # Change the order of columns in the DataFrame based on provided list
332 if col_order is not None:
333 if sorted(col_order) == sorted(col_names):
334 final_df = final_df[col_order]
335 else:
336 raise InvalidColumnOrder(
337 'Column order does not match this DataFrame.'
338 )
339
340 # Downcast floats to integers and objects to booleans
341 # if there are no NaN's. This is presently due to a
342 # limitation of numpy in handling missing data.
343 final_df._data = final_df._data.downcast(dtypes='infer')
344 return final_df
345
346
347 def to_gbq(dataframe, destination_table, schema=None, col_order=None,
348 if_exists='fail', **kwargs):
349 """Write a DataFrame to a Google BigQuery table.
350
351 THIS IS AN EXPERIMENTAL LIBRARY
352
353 If the table exists, the DataFrame will be appended. If not, a new table
354 will be created, in which case the schema will have to be specified. By
355 default, rows will be written in the order they appear in the DataFrame,
356 though the user may specify an alternative order.
357
358 Parameters
359 ----------
360 dataframe : DataFrame
361 DataFrame to be written
362 destination_table : string
363 name of table to be written, in the form 'dataset.tablename'
364 schema : sequence (optional)
365 list of column types in order for data to be inserted,
366 e.g. ['INTEGER', 'TIMESTAMP', 'BOOLEAN']
367 col_order : sequence (optional)
368 order which columns are to be inserted,
369 e.g. ['primary_key', 'birthday', 'username']
370 if_exists : {'fail', 'replace', 'append'} (optional)
371 - fail: If table exists, do nothing.
372 - replace: If table exists, drop it, recreate it, and insert data.
373 - append: If table exists, insert data. Create if does not exist.
374 kwargs are passed to the Client constructor
375
376 Raises
377 ------
378 SchemaMissing :
379 Raised if the 'if_exists' parameter is set to 'replace', but no schema
380 is specified
381 TableExists :
382 Raised if the specified 'destination_table' exists but the 'if_exists'
383 parameter is set to 'fail' (the default)
384 InvalidSchema :
385 Raised if the 'schema' parameter does not match the provided DataFrame
386 """
387
388 if not _BQ_INSTALLED:
389 if sys.version_info >= (3, 0):
390 raise NotImplementedError('gbq module does not support Python 3 '
391 'yet')
392 else:
393 raise ImportError('Could not import Google BigQuery Client.')
394
395 if not _BQ_VALID_VERSION:
396 raise ImportError("pandas requires bigquery >= 2.0.17 for Google "
397 "BigQuery support, current version " + _BQ_VERSION)
398
399 ALLOWED_TYPES = ['STRING', 'INTEGER', 'FLOAT', 'BOOLEAN', 'TIMESTAMP',
400 'RECORD']
401
402 if if_exists == 'replace' and schema is None:
403 raise SchemaMissing('Cannot replace a table without specifying the '
404 'data schema')
405 else:
406 client = _authenticate()
407 table_reference = client.GetTableReference(destination_table)
408 if client.TableExists(table_reference):
409 if if_exists == 'fail':
410 raise TableExistsFail('Cannot overwrite existing tables if '
411 '\'if_exists="fail"\'')
412 else:
413 # Build up a string representation of the
414 # table's schema. Since the table already
415 # exists, we ask the API for it, which
416 # is returned in a list of dictionaries
417 # describing column data. Iterate over these
418 # and build up a string of form:
419 # "col_name1 : col_type1, col_name2 : col_type2..."
420 schema_full = client.GetTableSchema(
421 dict(table_reference)
422 )['fields']
423 schema = ''
424 for count, row in enumerate(schema_full):
425 if count > 0:
426 schema += ', '
427 schema += row['name'] + ':' + row['type']
428 else:
429 logger.info('Creating New Table')
430 if schema is None:
431 raise SchemaMissing('Cannot create a new table without '
432 'specifying the data schema')
433 else:
434 columns = dataframe.columns
435 if len(schema) != len(columns):
436 raise InvalidSchema('Incorrect number of columns in '
437 'schema')
438 else:
439 schema_string = ''
440 for count, name in enumerate(columns):
441 if count > 0:
442 schema_string += ', '
443 column_type = schema[count].upper()
444 if column_type in ALLOWED_TYPES:
445 schema_string += name + ':' + schema[count].lower()
446 else:
447 raise InvalidSchema('Invalid Type: ' + column_type
448 + ". Must be one of: " +
449 str(ALLOWED_TYPES))
450 schema = schema_string
451
452 opts = kwargs
453 opts['sync'] = True
454 opts['skip_leading_rows'] = 1
455 opts['encoding'] = 'UTF-8'
456 opts['max_bad_records'] = 0
457
458 # See: https://developers.google.com/bigquery/docs/reference/v2/jobs
459 if if_exists == 'replace':
460 opts['write_disposition'] = 'WRITE_TRUNCATE'
461 elif if_exists == 'append':
462 opts['write_disposition'] = 'WRITE_APPEND'
463
464 with tempfile.NamedTemporaryFile() as csv_file:
465 dataframe.to_csv(csv_file.name, index=False, encoding='utf-8')
466 job = client.Load(table_reference, csv_file.name, schema=schema,
467 **opts)
468
469
470 def read_gbq(query, project_id=None, destination_table=None, index_col=None,
471 col_order=None, **kwargs):
472 """Load data from Google BigQuery.
473
474 THIS IS AN EXPERIMENTAL LIBRARY
475
476 The main method a user calls to load data from Google BigQuery into a
477 pandas DataFrame. This is a simple wrapper for Google's bq.py and
478 bigquery_client.py, which we use to get the source data. Because of this,
479 this script respects the user's bq settings file, '~/.bigqueryrc', if it
480 exists. Such a file can be generated using 'bq init'. Further, additional
481 parameters for the query can be specified as either ``**kwds`` in the
482 command, or using FLAGS provided in the 'gflags' module. Particular options
483 can be found in bigquery_client.py.
484
485 Parameters
486 ----------
487 query : str
488 SQL-Like Query to return data values
489 project_id : str (optional)
490 Google BigQuery Account project ID. Optional, since it may be
491 located in ~/.bigqueryrc
492 index_col : str (optional)
493 Name of result column to use for index in results DataFrame
494 col_order : list(str) (optional)
495 List of BigQuery column names in the desired order for results
496 DataFrame
497 destination_table : string (optional)
498 If provided, send the results to the given table.
499 **kwargs :
500 To be passed to bq.Client.Create(). Particularly: 'trace',
501 'sync', 'api', 'api_version'
502
503 Returns
504 -------
505 df: DataFrame
506 DataFrame representing results of query
507
508 """
509 if not _BQ_INSTALLED:
510 if sys.version_info >= (3, 0):
511 raise NotImplementedError('gbq module does not support Python 3 '
512 'yet')
513 else:
514 raise ImportError('Could not import Google BigQuery Client.')
515
516 if not _BQ_VALID_VERSION:
517 raise ImportError('pandas requires bigquery >= 2.0.17 for Google '
518 'BigQuery support, current version ' + _BQ_VERSION)
519
520 query_args = kwargs
521 query_args['project_id'] = project_id
522 query_args['query'] = query
523 query_args['destination_table'] = destination_table
524 query_args['sync'] = True
525
526 client = _authenticate()
527
528 job = client.Query(**query_args)
529
530 return _parse_data(client, job, index_col=index_col, col_order=col_order)
531
[end of pandas/io/gbq.py]
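For orientation, here is a rough usage sketch of the two helpers above (illustrative only, not part of the module). The dataset, table and project names are placeholders, the exact `to_gbq` signature is abbreviated above so keyword arguments are used where possible, and actually running this requires the legacy `bq` Python client plus BigQuery credentials (for example a `~/.bigqueryrc` generated with `bq init`).

```python
import pandas as pd
from pandas.io import gbq

# Columns are listed explicitly so their order matches the schema below.
df = pd.DataFrame({"name": ["a", "b"], "count": [1, 2]}, columns=["name", "count"])

# Create or overwrite the table; each schema entry must be one of ALLOWED_TYPES.
gbq.to_gbq(df, "my_dataset.my_table", schema=["STRING", "INTEGER"], if_exists="replace")

# Read it back into a DataFrame, fixing the column order of the result.
result = gbq.read_gbq(
    "SELECT name, count FROM my_dataset.my_table",
    project_id="my-project-id",
    col_order=["name", "count"],
)
```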
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
ff2c5f2e47ca5319238b7865b4004c59298a94c1
|
Speed up DatetimeConverter for plotting
I've recently started using pandas (impressed so far!) and found that plotting large datasets (from around 100k samples) is quite slow. I traced the bottleneck to the _dt_to_float_ordinal helper function called by DatetimeConverter (https://github.com/pydata/pandas/blob/master/pandas/tseries/converter.py#L144).
More specifically, this function uses matplotlib's date2num, which converts arrays and iterables using a slow list comprehension. Since pandas seems to natively store datetimes as epoch+nanoseconds in an int64 array, it would be much faster to use matplotlib's vectorized epoch2num instead. In a test case with 1 million points, using epoch2num is about 100 times faster than date2num:
``` python
from pandas import date_range, DataFrame
from numpy import int64, arange
from matplotlib import pyplot, dates
import time
n = 1e6
df = DataFrame(arange(n), index = date_range('20130101', periods=n, freq='S'))
start = time.time()
pyplot.plot(df.index, df)
print('date2num took {0:g}s'.format(time.time() - start))
pyplot.show()
# monkey patch
import pandas.tseries.converter
def _my_dt_to_float_ordinal(dt):
    try:
        base = dates.epoch2num(dt.astype(int64) / 1.0E9)
    except AttributeError:
        base = dates.date2num(dt)
    return base
pandas.tseries.converter._dt_to_float_ordinal = _my_dt_to_float_ordinal
start = time.time()
pyplot.plot(df.index, df)
print('epoch2num took {0:g}s'.format(time.time() - start))
pyplot.show()
```
Unfortunately, I am not familiar enough with pandas to know whether date2num is used intentionally or to implement a proper patch myself that works in all cases.
|
I think they have the same effect, so this should be good. @TomAugspurger
@agijsberts pls do a pull-request and we can get this in.
I think we may need to manually validate that the graphs are correct as we don't do comparison graphs per se (more of a validation that they plot and the returned objects are 'ok').
pls add a vbench for this as well.
good catch
https://github.com/pydata/pandas/wiki has a section on how to do the PR
Applying your changes didn't seem to break any tests so this should be good. @jreback, do you know if there will be any problems on 32-bit systems?
@agijsberts a pull request would be great for this. Let me know if you have any trouble.
@agijsberts yep let's give a try on this
Sorry, things are moving a bit slow since it's my first time preparing a PR (setting up git, virtualenv etc.). I'm manually checking the correctness of the plots at the moment (at least w.r.t. the current implementation). Expect a PR later today.
Note by the way that pandas' plotting functions (e.g., DataFrame.plot) do not benefit from this patch, as they do other trickery with time axes. The patch is therefore unfortunately only helpful in use-cases where matplotlib's functions are used directly with the time index.
|
2014-03-15T21:17:46Z
|
<patch>
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -159,6 +159,8 @@ Improvements to existing features
- ``StataWriter`` and ``DataFrame.to_stata`` accept time stamp and data labels (:issue:`6545`)
- offset/freq info now in Timestamp __repr__ (:issue:`4553`)
- Support passing ``encoding`` with xlwt (:issue:`3710`)
+- Performance improvement when converting ``DatetimeIndex`` to floating ordinals
+ using ``DatetimeConverter`` (:issue:`6636`)
.. _release.bug_fixes-0.14.0:
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -16,6 +16,7 @@
import pandas.core.common as com
from pandas.core.index import Index
+from pandas.core.series import Series
from pandas.tseries.index import date_range
import pandas.tseries.tools as tools
import pandas.tseries.frequencies as frequencies
@@ -144,7 +145,10 @@ def _dt_to_float_ordinal(dt):
preserving hours, minutes, seconds and microseconds. Return value
is a :func:`float`.
"""
- base = dates.date2num(dt)
+ if isinstance(dt, (np.ndarray, Series)) and com.is_datetime64_ns_dtype(dt):
+ base = dates.epoch2num(dt.asi8 / 1.0E9)
+ else:
+ base = dates.date2num(dt)
return base
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -269,3 +269,15 @@ def date_range(start=None, end=None, periods=None, freq=None):
dataframe_resample_max_numpy = \
Benchmark("df.resample('1s', how=np.max)", setup)
+
+#----------------------------------------------------------------------
+# DatetimeConverter
+
+setup = common_setup + """
+from pandas.tseries.converter import DatetimeConverter
+"""
+
+datetimeindex_converter = \
+ Benchmark('DatetimeConverter.convert(rng, None, None)',
+ setup, start_date=datetime(2013, 1, 1))
+
</patch>
|
[]
|
[]
| |||
wagtail__wagtail-9133
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unit tests for the `classnames` template tag
### Is your proposal related to a problem?
* As part of the GSoC UX Unification project a useful template tag was created `classnames` e2d4cb77458878d7d7076a7aa8b6d590deb99463
* It would be good to add unit tests for this behaviour
### Describe the solution you'd like
* In the file - https://github.com/wagtail/wagtail/blob/main/wagtail/admin/tests/test_templatetags.py
* Add unit tests for the classnames template tag
* These tests should cover various scenarios of the template tag usage (a single arg, multiple args, falsey args and also strings with extra whitespace)
https://github.com/wagtail/wagtail/blob/849d4d71cae41de56e43832546429cbb8ad289d5/wagtail/admin/templatetags/wagtailadmin_tags.py#L156-L161
### Additional context
* Implemented as part of this PR https://github.com/wagtail/wagtail/pull/8781
### Example test scenarios from existing header usage
https://github.com/wagtail/wagtail/blob/849d4d71cae41de56e43832546429cbb8ad289d5/wagtail/admin/templates/wagtailadmin/shared/header.html#L21
```
{% load wagtailadmin_tags %}
{% classnames "w-header" classname merged|yesno:"w-header--merged," search_form|yesno:"w-header--hasform," %}
```
* Depending on whether `merged` is truthy - should add `w-header--merged`
* Depending on whether `search_form` is truthy - should add `w-header--hasform`
* Should also add the `classname` passed into the template, even if it is a string with spaces in between
</issue>
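For reference, here is a minimal sketch of what such unit tests could look like. It assumes the Django template engine with `wagtail.admin` in `INSTALLED_APPS` (as in the existing admin test suite); the class and test method names are hypothetical, not taken from the repository.

```python
from django.template import Context, Template
from django.test import SimpleTestCase


class ClassnamesTagTest(SimpleTestCase):
    def render(self, template, context):
        return Template(template).render(Context(context))

    def test_with_single_arg(self):
        template = '{% load wagtailadmin_tags %}<p class="{% classnames "w-header" %}">Hello!</p>'
        self.assertEqual(self.render(template, {}), '<p class="w-header">Hello!</p>')

    def test_with_multiple_args_falsy_values_and_whitespace(self):
        # Falsy values (e.g. empty strings produced by yesno-style filters) are
        # dropped, and leading/trailing whitespace on each class name is stripped.
        template = (
            '{% load wagtailadmin_tags %}'
            '<p class="{% classnames "w-header" classname merged search_form %}">Hello!</p>'
        )
        context = {"classname": " my-class ", "merged": "w-header--merged", "search_form": ""}
        self.assertEqual(
            self.render(template, context),
            '<p class="w-header my-class w-header--merged">Hello!</p>',
        )
```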
<code>
[start of README.md]
1 <h1 align="center">
2 <img width="343" src=".github/wagtail.svg#gh-light-mode-only" alt="Wagtail">
3 <img width="343" src=".github/wagtail-inverse.svg#gh-dark-mode-only" alt="Wagtail">
4 </h1>
5 <p align="center">
6 <br>
7 <a href="https://github.com/wagtail/wagtail/actions">
8 <img src="https://github.com/wagtail/wagtail/workflows/Wagtail%20CI/badge.svg" alt="Build Status" />
9 </a>
10 <a href="https://opensource.org/licenses/BSD-3-Clause">
11 <img src="https://img.shields.io/badge/license-BSD-blue.svg" alt="License" />
12 </a>
13 <a href="https://pypi.python.org/pypi/wagtail/">
14 <img src="https://img.shields.io/pypi/v/wagtail.svg" alt="Version" />
15 </a>
16 <a href="https://lgtm.com/projects/g/wagtail/wagtail/alerts/">
17 <img src="https://img.shields.io/lgtm/alerts/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Total alerts" />
18 </a>
19 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:python">
20 <img src="https://img.shields.io/lgtm/grade/python/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: Python" />
21 </a>
22 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:javascript">
23 <img src="https://img.shields.io/lgtm/grade/javascript/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: JavaScript" />
24 </a>
25 <a href="https://pypi.python.org/pypi/wagtail/">
26 <img src="https://img.shields.io/pypi/dm/wagtail?logo=Downloads" alt="Monthly downloads" />
27 </a>
28 </p>
29
30 Wagtail is an open source content management system built on Django, with a strong community and commercial support. It's focused on user experience, and offers precise control for designers and developers.
31
32 
33
34 ### 🔥 Features
35
36 - A fast, attractive interface for authors
37 - Complete control over front-end design and structure
38 - Scales to millions of pages and thousands of editors
39 - Fast out of the box, cache-friendly when you need it
40 - Content API for 'headless' sites with de-coupled front-end
41 - Runs on a Raspberry Pi or a multi-datacenter cloud platform
42 - StreamField encourages flexible content without compromising structure
43 - Powerful, integrated search, using Elasticsearch or PostgreSQL
44 - Excellent support for images and embedded content
45 - Multi-site and multi-language ready
46 - Embraces and extends Django
47
48 Find out more at [wagtail.org](https://wagtail.org/).
49
50 ### 👉 Getting started
51
52 Wagtail works with [Python 3](https://www.python.org/downloads/), on any platform.
53
54 To get started with using Wagtail, run the following in a virtual environment:
55
56 
57
58 ```bash
59 pip install wagtail
60 wagtail start mysite
61 cd mysite
62 pip install -r requirements.txt
63 python manage.py migrate
64 python manage.py createsuperuser
65 python manage.py runserver
66 ```
67
68 For detailed installation and setup docs, see [the getting started tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html).
69
70 ### 👨👩👧👦 Who’s using it?
71
72 Wagtail is used by [NASA](https://www.nasa.gov/), [Google](https://www.google.com/), [Oxfam](https://www.oxfam.org/en), the [NHS](https://www.nhs.uk/), [Mozilla](https://www.mozilla.org/en-US/), [MIT](https://www.mit.edu/), the [Red Cross](https://www.icrc.org/en), [Salesforce](https://www.salesforce.com/), [NBC](https://www.nbc.com/), [BMW](https://www.bmw.com/en/index.html), and the US and UK governments. Add your own Wagtail site to [madewithwagtail.org](https://madewithwagtail.org).
73
74 ### 📖 Documentation
75
76 [docs.wagtail.org](https://docs.wagtail.org/) is the full reference for Wagtail, and includes guides for developers, designers and editors, alongside release notes and our roadmap.
77
78 For those who are **new to Wagtail**, the [Zen of Wagtail](https://docs.wagtail.org/en/stable/getting_started/the_zen_of_wagtail.html) will help you understand what Wagtail is, and what Wagtail is _not_.
79
80 **For developers** who are ready to jump in to their first Wagtail website the [Getting Started Tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html) will guide you through creating and editing your first page.
81
82 **Do you have an existing Django project?** The [Wagtail Integration documentation](https://docs.wagtail.org/en/stable/getting_started/integrating_into_django.html) is the best place to start.
83
84 ### 📌 Compatibility
85
86 _(If you are reading this on GitHub, the details here may not be indicative of the current released version - please see [Compatible Django / Python versions](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) in the Wagtail documentation.)_
87
88 Wagtail supports:
89
90 - Django 3.2.x, 4.0.x and 4.1.x
91 - Python 3.7, 3.8, 3.9 and 3.10
92 - PostgreSQL, MySQL and SQLite (with JSON1) as database backends
93
94 [Previous versions of Wagtail](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) additionally supported Python 2.7 and earlier Django versions.
95
96 ---
97
98 ### 📢 Community Support
99
100 There is an active community of Wagtail users and developers responding to questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/wagtail). When posting questions, please read Stack Overflow's advice on [how to ask questions](https://stackoverflow.com/help/how-to-ask) and remember to tag your question "wagtail".
101
102 For topics and discussions that do not fit Stack Overflow's question and answer format we have a [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack). Please respect the time and effort of volunteers by not asking the same question in multiple places.
103
104 [](https://github.com/wagtail/wagtail/wiki/Slack)
105
106 Our [Github discussion boards](https://github.com/wagtail/wagtail/discussions) are open for sharing ideas and plans for the Wagtail project.
107
108 We maintain a curated list of third party packages, articles and other resources at [Awesome Wagtail](https://github.com/springload/awesome-wagtail).
109
110 ### 🧑💼 Commercial Support
111
112 Wagtail is sponsored by [Torchbox](https://torchbox.com/). If you need help implementing or hosting Wagtail, please contact us: [email protected]. See also [madewithwagtail.org/developers/](https://madewithwagtail.org/developers/) for expert Wagtail developers around the world.
113
114 ### 🔐 Security
115
116 We take the security of Wagtail, and related packages we maintain, seriously. If you have found a security issue with any of our projects please email us at [[email protected]](mailto:[email protected]) so we can work together to find and patch the issue. We appreciate responsible disclosure with any security related issues, so please contact us first before creating a Github issue.
117
118 If you want to send an encrypted email (optional), the public key ID for [email protected] is 0xbed227b4daf93ff9, and this public key is available from most commonly-used keyservers.
119
120 ### 🕒 Release schedule
121
122 Feature releases of Wagtail are released every three months. Selected releases are designated as Long Term Support (LTS) releases, and will receive maintenance updates for an extended period to address any security and data-loss related issues. For dates of past and upcoming releases and support periods, see [Release Schedule](https://github.com/wagtail/wagtail/wiki/Release-schedule).
123
124 #### 🕛 Nightly releases
125
126 To try out the latest features before a release, we also create builds from `main` every night. You can find instructions on how to install the latest nightly release at https://releases.wagtail.org/nightly/index.html
127
128 ### 🙋🏽 Contributing
129
130 If you're a Python or Django developer, fork the repo and get stuck in! We have several developer focused channels on the [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack).
131
132 You might like to start by reviewing the [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html) and checking issues with the [good first issue](https://github.com/wagtail/wagtail/labels/good%20first%20issue) label.
133
134 We also welcome translations for Wagtail's interface. Translation work should be submitted through [Transifex](https://www.transifex.com/torchbox/wagtail/).
135
136 ### 🔓 License
137
138 [BSD](https://github.com/wagtail/wagtail/blob/main/LICENSE) - Free to use and modify for any purpose, including both open and closed-source code.
139
140 ### 👏 Thanks
141
142 We thank the following organisations for their services used in Wagtail's development:
143
144 [](https://www.browserstack.com/)<br>
145 [BrowserStack](https://www.browserstack.com/) provides the project with free access to their live web-based browser testing tool, and automated Selenium cloud testing.
146
147 [](https://www.squash.io/)<br>
148 [Squash](https://www.squash.io/) provides the project with free test environments for reviewing pull requests.
149
150 [](https://assistivlabs.com/)<br>
151 [Assistiv Labs](https://assistivlabs.com/) provides the project with unlimited access to their remote testing with assistive technologies.
152
[end of README.md]
[start of wagtail/admin/templatetags/wagtailadmin_tags.py]
1 import json
2 from datetime import datetime
3 from urllib.parse import urljoin
4
5 from django import template
6 from django.conf import settings
7 from django.contrib.admin.utils import quote
8 from django.contrib.humanize.templatetags.humanize import intcomma, naturaltime
9 from django.contrib.messages.constants import DEFAULT_TAGS as MESSAGE_TAGS
10 from django.shortcuts import resolve_url as resolve_url_func
11 from django.template import Context
12 from django.template.base import token_kwargs
13 from django.template.defaultfilters import stringfilter
14 from django.templatetags.static import static
15 from django.urls import reverse
16 from django.urls.exceptions import NoReverseMatch
17 from django.utils import timezone
18 from django.utils.encoding import force_str
19 from django.utils.html import avoid_wrapping, format_html, format_html_join, json_script
20 from django.utils.http import urlencode
21 from django.utils.safestring import mark_safe
22 from django.utils.timesince import timesince
23 from django.utils.translation import gettext_lazy as _
24
25 from wagtail import hooks
26 from wagtail.admin.localization import get_js_translation_strings
27 from wagtail.admin.menu import admin_menu
28 from wagtail.admin.navigation import get_explorable_root_page
29 from wagtail.admin.search import admin_search_areas
30 from wagtail.admin.staticfiles import versioned_static as versioned_static_func
31 from wagtail.admin.ui import sidebar
32 from wagtail.admin.utils import get_admin_base_url
33 from wagtail.admin.views.bulk_action.registry import bulk_action_registry
34 from wagtail.admin.views.pages.utils import get_valid_next_url_from_request
35 from wagtail.admin.widgets import ButtonWithDropdown, PageListingButton
36 from wagtail.coreutils import camelcase_to_underscore
37 from wagtail.coreutils import cautious_slugify as _cautious_slugify
38 from wagtail.coreutils import (
39 escape_script,
40 get_content_type_label,
41 get_locales_display_names,
42 )
43 from wagtail.models import (
44 CollectionViewRestriction,
45 Locale,
46 Page,
47 PageViewRestriction,
48 UserPagePermissionsProxy,
49 )
50 from wagtail.telepath import JSContext
51 from wagtail.users.utils import get_gravatar_url
52
53 register = template.Library()
54
55 register.filter("intcomma", intcomma)
56 register.filter("naturaltime", naturaltime)
57
58
59 @register.inclusion_tag("wagtailadmin/shared/breadcrumbs.html", takes_context=True)
60 def breadcrumbs(
61 context,
62 page,
63 url_name,
64 url_root_name=None,
65 include_self=True,
66 is_expanded=False,
67 page_perms=None,
68 querystring_value=None,
69 trailing_breadcrumb_title=None,
70 ):
71 user = context["request"].user
72
73 # find the closest common ancestor of the pages that this user has direct explore permission
74 # (i.e. add/edit/publish/lock) over; this will be the root of the breadcrumb
75 cca = get_explorable_root_page(user)
76 if not cca:
77 return {"pages": Page.objects.none()}
78
79 return {
80 "pages": page.get_ancestors(inclusive=include_self)
81 .descendant_of(cca, inclusive=True)
82 .specific(),
83 "current_page": page,
84 "is_expanded": is_expanded,
85 "page_perms": page_perms,
86 "querystring_value": querystring_value or "",
87 "trailing_breadcrumb_title": trailing_breadcrumb_title, # Only used in collapsible breadcrumb templates
88 "url_name": url_name,
89 "url_root_name": url_root_name,
90 }
91
92
93 @register.inclusion_tag("wagtailadmin/shared/search_other.html", takes_context=True)
94 def search_other(context, current=None):
95 request = context["request"]
96
97 return {
98 "options_html": admin_search_areas.render_html(request, current),
99 "request": request,
100 }
101
102
103 @register.filter("ellipsistrim")
104 def ellipsistrim(value, max_length):
105 if len(value) > max_length:
106 truncd_val = value[:max_length]
107 if not len(value) == (max_length + 1) and value[max_length + 1] != " ":
108 truncd_val = truncd_val[: truncd_val.rfind(" ")]
109 return truncd_val + "…"
110 return value
111
112
113 @register.filter
114 def fieldtype(bound_field):
115 try:
116 return camelcase_to_underscore(bound_field.field.__class__.__name__)
117 except AttributeError:
118 try:
119 return camelcase_to_underscore(bound_field.__class__.__name__)
120 except AttributeError:
121 return ""
122
123
124 @register.filter
125 def widgettype(bound_field):
126 try:
127 return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)
128 except AttributeError:
129 try:
130 return camelcase_to_underscore(bound_field.widget.__class__.__name__)
131 except AttributeError:
132 return ""
133
134
135 def _get_user_page_permissions(context):
136 # Create a UserPagePermissionsProxy object to represent the user's global permissions, and
137 # cache it in the context for the duration of the page request, if one does not exist already
138 if "user_page_permissions" not in context:
139 context["user_page_permissions"] = UserPagePermissionsProxy(
140 context["request"].user
141 )
142
143 return context["user_page_permissions"]
144
145
146 @register.simple_tag(takes_context=True)
147 def page_permissions(context, page):
148 """
149 Usage: {% page_permissions page as page_perms %}
150 Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out
151 what actions the current logged-in user can perform on the given page.
152 """
153 return _get_user_page_permissions(context).for_page(page)
154
155
156 @register.simple_tag
157 def classnames(*classes):
158 """
159 Returns any args as a space-separated joined string for using in HTML class names.
160 """
161 return " ".join([classname.strip() for classname in classes if classname])
162
163
164 @register.simple_tag(takes_context=True)
165 def test_collection_is_public(context, collection):
166 """
167 Usage: {% test_collection_is_public collection as is_public %}
168 Sets 'is_public' to True iff there are no collection view restrictions in place
169 on this collection.
170 Caches the list of collection view restrictions in the context, to avoid repeated
171 DB queries on repeated calls.
172 """
173 if "all_collection_view_restrictions" not in context:
174 context[
175 "all_collection_view_restrictions"
176 ] = CollectionViewRestriction.objects.select_related("collection").values_list(
177 "collection__name", flat=True
178 )
179
180 is_private = collection.name in context["all_collection_view_restrictions"]
181
182 return not is_private
183
184
185 @register.simple_tag(takes_context=True)
186 def test_page_is_public(context, page):
187 """
188 Usage: {% test_page_is_public page as is_public %}
189 Sets 'is_public' to True iff there are no page view restrictions in place on
190 this page.
191 Caches the list of page view restrictions on the request, to avoid repeated
192 DB queries on repeated calls.
193 """
194 if not hasattr(context["request"], "all_page_view_restriction_paths"):
195 context[
196 "request"
197 ].all_page_view_restriction_paths = PageViewRestriction.objects.select_related(
198 "page"
199 ).values_list(
200 "page__path", flat=True
201 )
202
203 is_private = any(
204 [
205 page.path.startswith(restricted_path)
206 for restricted_path in context["request"].all_page_view_restriction_paths
207 ]
208 )
209
210 return not is_private
211
212
213 @register.simple_tag
214 def hook_output(hook_name):
215 """
216 Example: {% hook_output 'insert_editor_css' %}
217 Whenever we have a hook whose functions take no parameters and return a string, this tag can be used
218 to output the concatenation of all of those return values onto the page.
219 Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.
220 """
221 snippets = [fn() for fn in hooks.get_hooks(hook_name)]
222 return mark_safe("".join(snippets))
223
224
225 @register.simple_tag
226 def usage_count_enabled():
227 return getattr(settings, "WAGTAIL_USAGE_COUNT_ENABLED", False)
228
229
230 @register.simple_tag
231 def base_url_setting(default=None):
232 return get_admin_base_url() or default
233
234
235 @register.simple_tag
236 def allow_unicode_slugs():
237 return getattr(settings, "WAGTAIL_ALLOW_UNICODE_SLUGS", True)
238
239
240 class EscapeScriptNode(template.Node):
241 TAG_NAME = "escapescript"
242
243 def __init__(self, nodelist):
244 super().__init__()
245 self.nodelist = nodelist
246
247 def render(self, context):
248 out = self.nodelist.render(context)
249 return escape_script(out)
250
251 @classmethod
252 def handle(cls, parser, token):
253 nodelist = parser.parse(("end" + EscapeScriptNode.TAG_NAME,))
254 parser.delete_first_token()
255 return cls(nodelist)
256
257
258 register.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)
259
260
261 # Helpers for Widget.render_with_errors, our extension to the Django widget API that allows widgets to
262 # take on the responsibility of rendering their own error messages
263 @register.filter
264 def render_with_errors(bound_field):
265 """
266 Usage: {{ field|render_with_errors }} as opposed to {{ field }}.
267 If the field (a BoundField instance) has errors on it, and the associated widget implements
268 a render_with_errors method, call that; otherwise, call the regular widget rendering mechanism.
269 """
270 widget = bound_field.field.widget
271 if bound_field.errors and hasattr(widget, "render_with_errors"):
272 return widget.render_with_errors(
273 bound_field.html_name,
274 bound_field.value(),
275 attrs={"id": bound_field.auto_id},
276 errors=bound_field.errors,
277 )
278 else:
279 return bound_field.as_widget()
280
281
282 @register.filter
283 def has_unrendered_errors(bound_field):
284 """
285 Return true if this field has errors that were not accounted for by render_with_errors, because
286 the widget does not support the render_with_errors method
287 """
288 return bound_field.errors and not hasattr(
289 bound_field.field.widget, "render_with_errors"
290 )
291
292
293 @register.filter(is_safe=True)
294 @stringfilter
295 def cautious_slugify(value):
296 return _cautious_slugify(value)
297
298
299 @register.simple_tag(takes_context=True)
300 def querystring(context, **kwargs):
301 """
302 Print out the current querystring. Any keyword arguments to this template
303 tag will be added to the querystring before it is printed out.
304
305 <a href="/page/{% querystring key='value' %}">
306
307 Will result in something like:
308
309 <a href="/page/?foo=bar&key=value">
310 """
311 request = context["request"]
312 querydict = request.GET.copy()
313 # Can't do querydict.update(kwargs), because QueryDict.update() appends to
314 # the list of values, instead of replacing the values.
315 for key, value in kwargs.items():
316 if value is None:
317 # Remove the key if the value is None
318 querydict.pop(key, None)
319 else:
320 # Set the key otherwise
321 querydict[key] = str(value)
322
323 return "?" + querydict.urlencode()
324
325
326 @register.simple_tag(takes_context=True)
327 def page_table_header_label(context, label=None, parent_page_title=None, **kwargs):
328 """
329 Wraps table_header_label to add a title attribute based on the parent page title and the column label
330 """
331 if label:
332 translation_context = {"parent": parent_page_title, "label": label}
333 ascending_title_text = (
334 _(
335 "Sort the order of child pages within '%(parent)s' by '%(label)s' in ascending order."
336 )
337 % translation_context
338 )
339 descending_title_text = (
340 _(
341 "Sort the order of child pages within '%(parent)s' by '%(label)s' in descending order."
342 )
343 % translation_context
344 )
345 else:
346 ascending_title_text = None
347 descending_title_text = None
348
349 return table_header_label(
350 context,
351 label=label,
352 ascending_title_text=ascending_title_text,
353 descending_title_text=descending_title_text,
354 **kwargs,
355 )
356
357
358 @register.simple_tag(takes_context=True)
359 def table_header_label(
360 context,
361 label=None,
362 sortable=True,
363 ordering=None,
364 sort_context_var="ordering",
365 sort_param="ordering",
366 sort_field=None,
367 ascending_title_text=None,
368 descending_title_text=None,
369 ):
370 """
371 A label to go in a table header cell, optionally with a 'sort' link that alternates between
372 forward and reverse sorting
373
374 label = label text
375 ordering = current active ordering. If not specified, we will fetch it from the template context variable
376 given by sort_context_var. (We don't fetch it from the URL because that wouldn't give the view method
377 the opportunity to set a default)
378 sort_param = URL parameter that indicates the current active ordering
379 sort_field = the value for sort_param that indicates that sorting is currently on this column.
380 For example, if sort_param='ordering' and sort_field='title', then a URL parameter of
381 ordering=title indicates that the listing is ordered forwards on this column, and a URL parameter
382 of ordering=-title indicated that the listing is ordered in reverse on this column
383 ascending_title_text = title attribute to use on the link when the link action will sort in ascending order
384 descending_title_text = title attribute to use on the link when the link action will sort in descending order
385
386 To disable sorting on this column, set sortable=False or leave sort_field unspecified.
387 """
388 if not sortable or not sort_field:
389 # render label without a sort link
390 return label
391
392 if ordering is None:
393 ordering = context.get(sort_context_var)
394 reverse_sort_field = "-%s" % sort_field
395
396 if ordering == sort_field:
397 # currently ordering forwards on this column; link should change to reverse ordering
398 attrs = {
399 "href": querystring(context, **{sort_param: reverse_sort_field}),
400 "class": "icon icon-arrow-down-after teal",
401 }
402 if descending_title_text is not None:
403 attrs["title"] = descending_title_text
404
405 elif ordering == reverse_sort_field:
406 # currently ordering backwards on this column; link should change to forward ordering
407 attrs = {
408 "href": querystring(context, **{sort_param: sort_field}),
409 "class": "icon icon-arrow-up-after teal",
410 }
411 if ascending_title_text is not None:
412 attrs["title"] = ascending_title_text
413
414 else:
415 # not currently ordering on this column; link should change to forward ordering
416 attrs = {
417 "href": querystring(context, **{sort_param: sort_field}),
418 "class": "icon icon-arrow-down-after",
419 }
420 if ascending_title_text is not None:
421 attrs["title"] = ascending_title_text
422
423 attrs_string = format_html_join(" ", '{}="{}"', attrs.items())
424
425 return format_html(
426 # need whitespace around label for correct positioning of arrow icon
427 "<a {attrs}> {label} </a>",
428 attrs=attrs_string,
429 label=label,
430 )
431
432
433 @register.simple_tag(takes_context=True)
434 def pagination_querystring(context, page_number, page_key="p"):
435 """
436 Print out a querystring with an updated page number:
437
438 {% if page.has_next_page %}
439 <a href="{% pagination_link page.next_page_number %}">Next page</a>
440 {% endif %}
441 """
442 return querystring(context, **{page_key: page_number})
443
444
445 @register.inclusion_tag(
446 "wagtailadmin/pages/listing/_pagination.html", takes_context=True
447 )
448 def paginate(context, page, base_url="", page_key="p", classnames=""):
449 """
450 Print pagination previous/next links, and the page count. Take the
451 following arguments:
452
453 page
454 The current page of results. This should be a Django pagination `Page`
455 instance
456
457 base_url
458 The base URL of the next/previous page, with no querystring.
459 This is optional, and defaults to the current page by just printing the
460 querystring for the next/previous page.
461
462 page_key
463 The name of the page variable in the query string. Defaults to 'p'.
464
465 classnames
466 Extra classes to add to the next/previous links.
467 """
468 request = context["request"]
469 return {
470 "base_url": base_url,
471 "classnames": classnames,
472 "request": request,
473 "page": page,
474 "page_key": page_key,
475 "paginator": page.paginator,
476 }
477
478
479 @register.inclusion_tag("wagtailadmin/pages/listing/_buttons.html", takes_context=True)
480 def page_listing_buttons(context, page, page_perms):
481 next_url = context.request.path
482 button_hooks = hooks.get_hooks("register_page_listing_buttons")
483
484 buttons = []
485 for hook in button_hooks:
486 buttons.extend(hook(page, page_perms, next_url))
487
488 buttons.sort()
489
490 for hook in hooks.get_hooks("construct_page_listing_buttons"):
491 hook(buttons, page, page_perms, context)
492
493 return {"page": page, "buttons": buttons}
494
495
496 @register.inclusion_tag(
497 "wagtailadmin/pages/listing/_modern_dropdown.html", takes_context=True
498 )
499 def page_header_buttons(context, page, page_perms):
500 next_url = context.request.path
501 button_hooks = hooks.get_hooks("register_page_header_buttons")
502
503 buttons = []
504 for hook in button_hooks:
505 buttons.extend(hook(page, page_perms, next_url))
506
507 buttons.sort()
508 return {
509 "page": page,
510 "buttons": buttons,
511 "title": _("Actions"),
512 "icon_name": "dots-horizontal",
513 "classes": [
514 "w-flex",
515 "w-justify-center",
516 "w-items-center",
517 "w-h-slim-header",
518 ],
519 "button_classes": [
520 "w-p-0",
521 "w-w-12",
522 "w-h-full",
523 "w-text-primary",
524 "w-bg-transparent",
525 "hover:w-scale-110",
526 "w-transition",
527 "w-outline-offset-inside",
528 "w-relative",
529 "w-z-30",
530 ],
531 "hide_title": True,
532 }
533
534
535 @register.inclusion_tag("wagtailadmin/pages/listing/_buttons.html", takes_context=True)
536 def bulk_action_choices(context, app_label, model_name):
537 bulk_actions_list = list(
538 bulk_action_registry.get_bulk_actions_for_model(app_label, model_name)
539 )
540 bulk_actions_list.sort(key=lambda x: x.action_priority)
541
542 bulk_action_more_list = []
543 if len(bulk_actions_list) > 4:
544 bulk_action_more_list = bulk_actions_list[4:]
545 bulk_actions_list = bulk_actions_list[:4]
546
547 next_url = get_valid_next_url_from_request(context["request"])
548 if not next_url:
549 next_url = context["request"].path
550
551 bulk_action_buttons = [
552 PageListingButton(
553 action.display_name,
554 reverse(
555 "wagtail_bulk_action", args=[app_label, model_name, action.action_type]
556 )
557 + "?"
558 + urlencode({"next": next_url}),
559 attrs={"aria-label": action.aria_label},
560 priority=action.action_priority,
561 classes=action.classes | {"bulk-action-btn"},
562 )
563 for action in bulk_actions_list
564 ]
565
566 if bulk_action_more_list:
567 more_button = ButtonWithDropdown(
568 label=_("More"),
569 attrs={"title": _("View more bulk actions")},
570 classes={"bulk-actions-more", "dropup"},
571 button_classes={"button", "button-small"},
572 buttons_data=[
573 {
574 "label": action.display_name,
575 "url": reverse(
576 "wagtail_bulk_action",
577 args=[app_label, model_name, action.action_type],
578 )
579 + "?"
580 + urlencode({"next": next_url}),
581 "attrs": {"aria-label": action.aria_label},
582 "priority": action.action_priority,
583 "classes": {"bulk-action-btn"},
584 }
585 for action in bulk_action_more_list
586 ],
587 )
588 bulk_action_buttons.append(more_button)
589
590 return {"buttons": bulk_action_buttons}
591
592
593 @register.simple_tag
594 def message_level_tag(message):
595 """
596 Return the tag for this message's level as defined in
597 django.contrib.messages.constants.DEFAULT_TAGS, ignoring the project-level
598 MESSAGE_TAGS setting (which end-users might customise).
599 """
600 return MESSAGE_TAGS.get(message.level)
601
602
603 @register.simple_tag
604 def message_tags(message):
605 level_tag = message_level_tag(message)
606 if message.extra_tags and level_tag:
607 return message.extra_tags + " " + level_tag
608 elif message.extra_tags:
609 return message.extra_tags
610 elif level_tag:
611 return level_tag
612 else:
613 return ""
614
615
616 @register.filter("abs")
617 def _abs(val):
618 return abs(val)
619
620
621 @register.filter
622 def admin_urlquote(value):
623 return quote(value)
624
625
626 @register.simple_tag
627 def avatar_url(user, size=50, gravatar_only=False):
628 """
629 A template tag that receives a user and size and return
630 the appropriate avatar url for that user.
631 Example usage: {% avatar_url request.user 50 %}
632 """
633
634 if (
635 not gravatar_only
636 and hasattr(user, "wagtail_userprofile")
637 and user.wagtail_userprofile.avatar
638 ):
639 return user.wagtail_userprofile.avatar.url
640
641 if hasattr(user, "email"):
642 gravatar_url = get_gravatar_url(user.email, size=size)
643 if gravatar_url is not None:
644 return gravatar_url
645
646 return versioned_static_func("wagtailadmin/images/default-user-avatar.png")
647
648
649 @register.simple_tag
650 def js_translation_strings():
651 return mark_safe(json.dumps(get_js_translation_strings()))
652
653
654 @register.simple_tag
655 def notification_static(path):
656 """
657 Variant of the {% static %}` tag for use in notification emails - tries to form
658 a full URL using WAGTAILADMIN_BASE_URL if the static URL isn't already a full URL.
659 """
660 return urljoin(base_url_setting(), static(path))
661
662
663 @register.simple_tag
664 def versioned_static(path):
665 """
666 Wrapper for Django's static file finder to append a cache-busting query parameter
667 that updates on each Wagtail version
668 """
669 return versioned_static_func(path)
670
671
672 @register.inclusion_tag("wagtailadmin/shared/icon.html", takes_context=False)
673 def icon(name=None, class_name="icon", title=None, wrapped=False):
674 """
675 Abstracts away the actual icon implementation.
676
677 Usage:
678 {% load wagtailadmin_tags %}
679 ...
680 {% icon name="cogs" class_name="icon--red" title="Settings" %}
681
682 :param name: the icon name/id, required (string)
683 :param class_name: default 'icon' (string)
684 :param title: accessible label intended for screen readers (string)
685 :return: Rendered template snippet (string)
686 """
687 if not name:
688 raise ValueError("You must supply an icon name")
689
690 return {"name": name, "class_name": class_name, "title": title, "wrapped": wrapped}
691
692
693 @register.filter()
694 def timesince_simple(d):
695 """
696 Returns a simplified timesince:
697 19 hours, 48 minutes ago -> 19 hours ago
698 1 week, 1 day ago -> 1 week ago
699 0 minutes ago -> just now
700 """
701 time_period = timesince(d).split(",")[0]
702 if time_period == avoid_wrapping(_("0 minutes")):
703 return _("Just now")
704 return _("%(time_period)s ago") % {"time_period": time_period}
705
706
707 @register.simple_tag
708 def timesince_last_update(
709 last_update, time_prefix="", user_display_name="", use_shorthand=True
710 ):
711 """
712 Returns:
713 - the time of update if last_update is today, if any prefix is supplied, the output will use it
714 - time since last update otherwise. Defaults to the simplified timesince,
715 but can return the full string if needed
716 """
717 if last_update.date() == datetime.today().date():
718 if timezone.is_aware(last_update):
719 time_str = timezone.localtime(last_update).strftime("%H:%M")
720 else:
721 time_str = last_update.strftime("%H:%M")
722
723 time_prefix = f"{time_prefix} " if time_prefix else time_prefix
724 by_user = f" by {user_display_name}" if user_display_name else user_display_name
725
726 return f"{time_prefix}{time_str}{by_user}"
727
728 else:
729 if use_shorthand:
730 return timesince_simple(last_update)
731 return _("%(time_period)s ago") % {"time_period": timesince(last_update)}
732
733
734 @register.filter
735 def user_display_name(user):
736 """
737 Returns the preferred display name for the given user object: the result of
738 user.get_full_name() if implemented and non-empty, or user.get_username() otherwise.
739 """
740 try:
741 full_name = user.get_full_name().strip()
742 if full_name:
743 return full_name
744 except AttributeError:
745 pass
746
747 try:
748 return user.get_username()
749 except AttributeError:
750 # we were passed None or something else that isn't a valid user object; return
751 # empty string to replicate the behaviour of {{ user.get_full_name|default:user.get_username }}
752 return ""
753
754
755 @register.filter
756 def format_content_type(content_type):
757 return get_content_type_label(content_type)
758
759
760 @register.simple_tag
761 def i18n_enabled():
762 return getattr(settings, "WAGTAIL_I18N_ENABLED", False)
763
764
765 @register.simple_tag
766 def locales():
767 return json.dumps(
768 [
769 {
770 "code": locale.language_code,
771 "display_name": force_str(locale.get_display_name()),
772 }
773 for locale in Locale.objects.all()
774 ]
775 )
776
777
778 @register.simple_tag
779 def locale_label_from_id(locale_id):
780 """
781 Returns the Locale display name given its id.
782 """
783 return get_locales_display_names().get(locale_id)
784
785
786 @register.simple_tag(takes_context=True)
787 def sidebar_collapsed(context):
788 request = context.get("request")
789 collapsed = request.COOKIES.get("wagtail_sidebar_collapsed", "0")
790 if collapsed == "0":
791 return False
792 return True
793
794
795 @register.simple_tag(takes_context=True)
796 def sidebar_props(context):
797 request = context["request"]
798 search_areas = admin_search_areas.search_items_for_request(request)
799 if search_areas:
800 search_area = search_areas[0]
801 else:
802 search_area = None
803
804 account_menu = [
805 sidebar.LinkMenuItem(
806 "account", _("Account"), reverse("wagtailadmin_account"), icon_name="user"
807 ),
808 sidebar.LinkMenuItem(
809 "logout", _("Log out"), reverse("wagtailadmin_logout"), icon_name="logout"
810 ),
811 ]
812
813 modules = [
814 sidebar.WagtailBrandingModule(),
815 sidebar.SearchModule(search_area) if search_area else None,
816 sidebar.MainMenuModule(
817 admin_menu.render_component(request), account_menu, request.user
818 ),
819 ]
820 modules = [module for module in modules if module is not None]
821
822 return json_script(
823 {
824 "modules": JSContext().pack(modules),
825 },
826 element_id="wagtail-sidebar-props",
827 )
828
829
830 @register.simple_tag
831 def get_comments_enabled():
832 return getattr(settings, "WAGTAILADMIN_COMMENTS_ENABLED", True)
833
834
835 @register.simple_tag
836 def preview_settings():
837 default_options = {
838 "WAGTAIL_AUTO_UPDATE_PREVIEW": True,
839 "WAGTAIL_AUTO_UPDATE_PREVIEW_INTERVAL": 500,
840 }
841
842 return {
843 option: getattr(settings, option, default)
844 for option, default in default_options.items()
845 }
846
847
848 @register.simple_tag
849 def resolve_url(url):
850 # Used by wagtailadmin/shared/pagination_nav.html - given an input that may be a URL route
851 # name, or a direct URL path, return it as a direct URL path. On failure (or being passed
852 # an empty / None value), return empty string
853 if not url:
854 return ""
855
856 try:
857 return resolve_url_func(url)
858 except NoReverseMatch:
859 return ""
860
861
862 @register.simple_tag(takes_context=True)
863 def component(context, obj, fallback_render_method=False):
864 # Render a component by calling its render_html method, passing request and context from the
865 # calling template.
866 # If fallback_render_method is true, objects without a render_html method will have render()
867 # called instead (with no arguments) - this is to provide deprecation path for things that have
868 # been newly upgraded to use the component pattern.
869
870 has_render_html_method = hasattr(obj, "render_html")
871 if fallback_render_method and not has_render_html_method and hasattr(obj, "render"):
872 return obj.render()
873 elif not has_render_html_method:
874 raise ValueError("Cannot render %r as a component" % (obj,))
875
876 return obj.render_html(context)
877
878
879 class FragmentNode(template.Node):
880 def __init__(self, nodelist, target_var):
881 self.nodelist = nodelist
882 self.target_var = target_var
883
884 def render(self, context):
885 fragment = self.nodelist.render(context) if self.nodelist else ""
886 context[self.target_var] = fragment
887 return ""
888
889
890 @register.tag(name="fragment")
891 def fragment(parser, token):
892 """
893 Store a template fragment as a variable.
894
895 Usage:
896 {% fragment as header_title %}
897 {% blocktrans trimmed %}Welcome to the {{ site_name }} Wagtail CMS{% endblocktrans %}
898 {% endfragment %}
899
900 Copy-paste of slippers’ fragment template tag.
901 See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L173.
902 """
903 error_message = "The syntax for fragment is {% fragment as variable_name %}"
904
905 try:
906 tag_name, _, target_var = token.split_contents()
907 nodelist = parser.parse(("endfragment",))
908 parser.delete_first_token()
909 except ValueError:
910 if settings.DEBUG:
911 raise template.TemplateSyntaxError(error_message)
912 return ""
913
914 return FragmentNode(nodelist, target_var)
915
916
917 class BlockInclusionNode(template.Node):
918 """
919 Create template-driven tags like Django’s inclusion_tag / InclusionNode, but for block-level tags.
920
921 Usage:
922 {% my_tag status="test" label="Alert" %}
923 Proceed with caution.
924 {% endmy_tag %}
925
926 Within `my_tag`’s template, the template fragment will be accessible as the {{ children }} context variable.
927
928 The output can also be stored as a variable in the parent context:
929
930 {% my_tag status="test" label="Alert" as my_variable %}
931 Proceed with caution.
932 {% endmy_tag %}
933
934 Inspired by slippers’ Component Node.
935 See https://github.com/mixxorz/slippers/blob/254c720e6bb02eb46ae07d104863fce41d4d3164/slippers/templatetags/slippers.py#L47.
936 """
937
938 def __init__(self, nodelist, template, extra_context, target_var=None):
939 self.nodelist = nodelist
940 self.template = template
941 self.extra_context = extra_context
942 self.target_var = target_var
943
944 def get_context_data(self, parent_context):
945 return parent_context
946
947 def render(self, context):
948 children = self.nodelist.render(context) if self.nodelist else ""
949
950 values = {
951 # Resolve the tag’s parameters within the current context.
952 key: value.resolve(context)
953 for key, value in self.extra_context.items()
954 }
955
956 t = context.template.engine.get_template(self.template)
957 # Add the `children` variable in the rendered template’s context.
958 context_data = self.get_context_data({**values, "children": children})
959 output = t.render(Context(context_data, autoescape=context.autoescape))
960
961 if self.target_var:
962 context[self.target_var] = output
963 return ""
964
965 return output
966
967 @classmethod
968 def handle(cls, parser, token):
969 tag_name, *remaining_bits = token.split_contents()
970
971 nodelist = parser.parse((f"end{tag_name}",))
972 parser.delete_first_token()
973
974 extra_context = token_kwargs(remaining_bits, parser)
975
976 # Allow component fragment to be assigned to a variable
977 target_var = None
978 if len(remaining_bits) >= 2 and remaining_bits[-2] == "as":
979 target_var = remaining_bits[-1]
980
981 return cls(nodelist, cls.template, extra_context, target_var)
982
983
984 class DialogNode(BlockInclusionNode):
985 template = "wagtailadmin/shared/dialog/dialog.html"
986
987 def get_context_data(self, parent_context):
988 context = super().get_context_data(parent_context)
989
990 if "title" not in context:
991 raise TypeError("You must supply a title")
992 if "id" not in context:
993 raise TypeError("You must supply an id")
994
995 # Used for determining which icon the message will use
996 message_icon_name = {
997 "info": "info-circle",
998 "warning": "warning",
999 "critical": "warning",
1000 "success": "circle-check",
1001 }
1002
1003 message_status = context.get("message_status")
1004
1005 # If there is a message status then determine which icon to use.
1006 if message_status:
1007 context["message_icon_name"] = message_icon_name[message_status]
1008
1009 return context
1010
1011
1012 register.tag("dialog", DialogNode.handle)
1013
1014
1015 class HelpBlockNode(BlockInclusionNode):
1016 template = "wagtailadmin/shared/help_block.html"
1017
1018
1019 register.tag("help_block", HelpBlockNode.handle)
1020
1021
1022 class PanelNode(BlockInclusionNode):
1023 template = "wagtailadmin/shared/panel.html"
1024
1025
1026 register.tag("panel", PanelNode.handle)
1027
1028
1029 class FieldNode(BlockInclusionNode):
1030 template = "wagtailadmin/shared/field.html"
1031
1032
1033 register.tag("field", FieldNode.handle)
1034
1035
1036 class FieldRowNode(BlockInclusionNode):
1037 template = "wagtailadmin/shared/forms/field_row.html"
1038
1039
1040 register.tag("field_row", FieldRowNode.handle)
1041
1042
1043 # Button used to open dialogs
1044 @register.inclusion_tag("wagtailadmin/shared/dialog/dialog_toggle.html")
1045 def dialog_toggle(dialog_id, class_name="", text=None):
1046 if not dialog_id:
1047 raise ValueError("You must supply the dialog ID")
1048
1049 return {
1050 "class_name": class_name,
1051 "text": text,
1052 # dialog_id must match the ID of the dialog you are toggling
1053 "dialog_id": dialog_id,
1054 }
1055
1056
1057 @register.simple_tag()
1058 def workflow_status_with_date(workflow_state):
1059 translation_context = {
1060 "finished_at": naturaltime(workflow_state.current_task_state.finished_at),
1061 "started_at": naturaltime(workflow_state.current_task_state.started_at),
1062 "task_name": workflow_state.current_task_state.task.name,
1063 "status_display": workflow_state.get_status_display,
1064 }
1065
1066 if workflow_state.status == "needs_changes":
1067 return _("Changes requested %(finished_at)s") % translation_context
1068
1069 if workflow_state.status == "in_progress":
1070 return _("Sent to %(task_name)s %(started_at)s") % translation_context
1071
1072 return _("%(status_display)s %(task_name)s %(started_at)s") % translation_context
1073
1074
1075 @register.inclusion_tag("wagtailadmin/shared/human_readable_date.html")
1076 def human_readable_date(date, description=None):
1077 return {
1078 "date": date,
1079 "description": description,
1080 }
1081
[end of wagtail/admin/templatetags/wagtailadmin_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
wagtail/wagtail
|
4fdeaad18c409c50f31089432d007901ed1aaf41
|
Unit tests for the `classnames` template tag
### Is your proposal related to a problem?
* As part of the GSoC UX Unification project a useful template tag was created `classnames` e2d4cb77458878d7d7076a7aa8b6d590deb99463
* It would be good to add unit tests for this behaviour
### Describe the solution you'd like
* In the file - https://github.com/wagtail/wagtail/blob/main/wagtail/admin/tests/test_templatetags.py
* Add unit tests for the classnames template tag
* These tests should cover various scenarios of the template tag usage (a single arg, multiple args, falsey args and also strings with extra whitespace)
https://github.com/wagtail/wagtail/blob/849d4d71cae41de56e43832546429cbb8ad289d5/wagtail/admin/templatetags/wagtailadmin_tags.py#L156-L161
### Additional context
* Implemented as part of this PR https://github.com/wagtail/wagtail/pull/8781
### Example test scenarios from existing header usage
https://github.com/wagtail/wagtail/blob/849d4d71cae41de56e43832546429cbb8ad289d5/wagtail/admin/templates/wagtailadmin/shared/header.html#L21
```
{% load wagtailadmin_tags %}
{% classnames "w-header" classname merged|yesno:"w-header--merged," search_form|yesno:"w-header--hasform," %}
```
* Depending on whether `merged` is truthy - should add `w-header--merged`
* Depending on whether `search_form` is truthy - should add `w-header--hasform`
* Should also add the `classname` passed into the template, even if it is a string with spaces in between
|
Good first issue to add some test coverage of existing behaviour.
|
2022-09-02T19:34:35Z
|
<patch>
diff --git a/wagtail/admin/templatetags/wagtailadmin_tags.py b/wagtail/admin/templatetags/wagtailadmin_tags.py
--- a/wagtail/admin/templatetags/wagtailadmin_tags.py
+++ b/wagtail/admin/templatetags/wagtailadmin_tags.py
@@ -156,6 +156,7 @@ def page_permissions(context, page):
@register.simple_tag
def classnames(*classes):
"""
+ Usage <div class="{% classnames "w-base" classname active|yesno:"w-base--active," any_other_var %}"></div>
Returns any args as a space-separated joined string for using in HTML class names.
"""
return " ".join([classname.strip() for classname in classes if classname])
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-1880
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect shaded barriers in mpl drawer
Circuit with single qubit barriers on 2Q. The text drawn output is correct, but the mpl output is incorrect.
MPL (incorrect)
<img width="992" alt="image" src="https://user-images.githubusercontent.com/32201347/53523392-ae062380-3aaa-11e9-9c36-12f8ce2962bd.png">
Text (correct)
<img width="677" alt="image" src="https://user-images.githubusercontent.com/32201347/53523411-bfe7c680-3aaa-11e9-9716-d7cbebf71c97.png">
Qasm
'OPENQASM 2.0;\ninclude "qelib1.inc";\nqreg q7[3];\ncreg c7[2];\nu2(0.0,0.0) q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\ny q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nx q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\ny q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nx q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\ny q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nu2(0.0,0.0) q7[0];\nu2(0.0,0.0) q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\ny q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nx q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\ny q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nx q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\ny q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nu2(0.0,0.0) q7[2];\nbarrier q7[0],q7[1],q7[2];\nmeasure q7[0] -> c7[0];\nmeasure q7[2] -> c7[1];\n'
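For reference, here is a sketch of how this can be reproduced locally (added for clarity and not part of the original report; the qasm below is a shortened, illustrative variant of the string above that still mixes single-qubit and full-width barriers):
```python
from qiskit import QuantumCircuit

qasm = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[3];
creg c[2];
h q[0];
barrier q[0];
x q[0];
barrier q[0];
h q[2];
barrier q[2];
barrier q[0],q[1],q[2];
measure q[0] -> c[0];
measure q[2] -> c[1];
"""

qc = QuantumCircuit.from_qasm_str(qasm)
print(qc)                    # text drawer: single-qubit barriers stay on their own wires
fig = qc.draw(output='mpl')  # mpl drawer: compare how the barrier columns are shaded
```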
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: Latest pull from github
- **Python version**: 3.6
- **Operating system**: macOS
### What is the current behavior?
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
</issue>
<code>
[start of README.md]
1 # Qiskit Terra
2
3 [](https://opensource.org/licenses/Apache-2.0)[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)
4
5 **Qiskit** is an open-source framework for working with Noisy Intermediate-Scale Quantum (NISQ) computers at the level of pulses, circuits, and algorithms.
6
7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
8
9 ## Installation
10
11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
12
13 ```bash
14 pip install qiskit
15 ```
16
17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
18
19 To install from source, follow the instructions in the [contribution guidelines](.github/CONTRIBUTING.rst).
20
21 ## Creating Your First Quantum Program in Qiskit Terra
22
23 Now that Qiskit is installed, it's time to begin working with Terra.
24
25 We are ready to try out a quantum circuit example, which is simulated locally using
26 the Qiskit Aer element. This is a simple example that makes an entangled state.
27
28 ```
29 $ python
30 ```
31
32 ```python
33 >>> from qiskit import *
34 >>> q = QuantumRegister(2)
35 >>> c = ClassicalRegister(2)
36 >>> qc = QuantumCircuit(q, c)
37 >>> qc.h(q[0])
38 >>> qc.cx(q[0], q[1])
39 >>> qc.measure(q, c)
40 >>> backend_sim = Aer.get_backend('qasm_simulator')
41 >>> result = execute(qc, backend_sim).result()
42 >>> print(result.get_counts(qc))
43 ```
44
45 In this case, the output will be:
46
47 ```python
48 {'00': 513, '11': 511}
49 ```
50
51 A script is available [here](examples/python/hello_quantum.py), where we also show how to
52 run the same program on a real quantum computer via IBMQ.
53
54 ### Executing your code on a real quantum chip
55
56 You can also use Qiskit to execute your code on a
57 **real quantum chip**.
58 In order to do so, you need to configure Qiskit for using the credentials in
59 your IBM Q account:
60
61 #### Configure your IBMQ credentials
62
63 1. Create an _[IBM Q](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so.
64
65 2. Get an API token from the IBM Q website under _My Account > Advanced > API Token_.
66
67 3. Take your token from step 2, here called `MY_API_TOKEN`, and run:
68
69 ```python
70 >>> from qiskit import IBMQ
71 >>> IBMQ.save_account('MY_API_TOKEN')
72 ```
73
74 4. If you have access to the IBM Q Network features, you also need to pass the
75 URL listed on your IBM Q account page to `save_account`.
76
77 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
78 Once they are stored, at any point in the future you can load and use them
79 in your program simply via:
80
81 ```python
82 >>> from qiskit import IBMQ
83 >>> IBMQ.load_accounts()
84 ```
85
86 Those who do not want to save their credentials to disk should instead use:
87
88 ```python
89 >>> from qiskit import IBMQ
90 >>> IBMQ.enable_account('MY_API_TOKEN')
91 ```
92
93 and the token will only be active for the session. For examples using Terra with real
94 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
95 the levels.
96
97 ## Contribution Guidelines
98
99 If you'd like to contribute to Qiskit Terra, please take a look at our
100 [contribution guidelines](.github/CONTRIBUTING.rst). This project adheres to Qiskit's [code of conduct](.github/CODE_OF_CONDUCT.rst). By participating, you are expected to uphold to this code.
101
102 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
103 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/enQtNDc2NjUzMjE4Mzc0LTMwZmE0YTM4ZThiNGJmODkzN2Y2NTNlMDIwYWNjYzA2ZmM1YTRlZGQ3OGM0NjcwMjZkZGE0MTA4MGQ1ZTVmYzk)
104 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
105 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
106
107 ## Next Steps
108
109 Now you're set up and ready to check out some of the other examples from our
110 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
111
112 ## Authors and Citation
113
114 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
115 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
116
117 ## License
118
119 [Apache License 2.0](LICENSE.txt)
120
[end of README.md]
[start of examples/python/teleport.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Quantum teleportation example.
10
11 Note: if you have only cloned the Qiskit repository but not
12 used `pip install`, the examples only work from the root directory.
13 """
14
15 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
16 from qiskit import compile, BasicAer
17
18 ###############################################################
19 # Set the backend name and coupling map.
20 ###############################################################
21 coupling_map = [[0, 1], [0, 2], [1, 2], [3, 2], [3, 4], [4, 2]]
22 backend = BasicAer.get_backend("qasm_simulator")
23
24 ###############################################################
25 # Make a quantum program for quantum teleportation.
26 ###############################################################
27 q = QuantumRegister(3, "q")
28 c0 = ClassicalRegister(1, "c0")
29 c1 = ClassicalRegister(1, "c1")
30 c2 = ClassicalRegister(1, "c2")
31 qc = QuantumCircuit(q, c0, c1, c2, name="teleport")
32
33 # Prepare an initial state
34 qc.u3(0.3, 0.2, 0.1, q[0])
35
36 # Prepare a Bell pair
37 qc.h(q[1])
38 qc.cx(q[1], q[2])
39
40 # Barrier following state preparation
41 qc.barrier(q)
42
43 # Measure in the Bell basis
44 qc.cx(q[0], q[1])
45 qc.h(q[0])
46 qc.measure(q[0], c0[0])
47 qc.measure(q[1], c1[0])
48
49 # Apply a correction
50 qc.barrier(q)
51 qc.z(q[2]).c_if(c0, 1)
52 qc.x(q[2]).c_if(c1, 1)
53 qc.measure(q[2], c2[0])
54
55 ###############################################################
56 # Execute.
57 # Experiment does not support feedback, so we use the simulator
58 ###############################################################
59
60 # First version: not mapped
61 initial_layout = {("q", 0): ("q", 0), ("q", 1): ("q", 1),
62 ("q", 2): ("q", 2)}
63 qobj = compile(qc, backend=backend, coupling_map=None, shots=1024, initial_layout=initial_layout)
64 job = backend.run(qobj)
65 qobj_exp = qobj.experiments[0]
66
67 result = job.result()
68 print(result.get_counts(qc))
69
70 # Second version: mapped to 2x8 array coupling graph
71 qobj = compile(qc, backend=backend, coupling_map=coupling_map, shots=1024,initial_layout=initial_layout)
72 qobj_exp = qobj.experiments[0]
73 qobj_exp.header.compiled_circuit_qasm = ""
74 job = backend.run(qobj)
75 result = job.result()
76 print(result.get_counts(qc))
77 # Both versions should give the same distribution
78
[end of examples/python/teleport.py]
[start of qiskit/circuit/quantumcircuit.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 Quantum circuit object.
10 """
11 from collections import OrderedDict
12 from copy import deepcopy
13 import itertools
14 import sys
15 import multiprocessing as mp
16
17 from qiskit.qasm import _qasm
18 from qiskit.exceptions import QiskitError
19 from .instruction import Instruction
20 from .quantumregister import QuantumRegister
21 from .classicalregister import ClassicalRegister
22
23
24 class QuantumCircuit:
25 """Quantum circuit."""
26 instances = 0
27 prefix = 'circuit'
28
29 # Class variable OPENQASM header
30 header = "OPENQASM 2.0;"
31 extension_lib = "include \"qelib1.inc\";"
32
33 # Class variable with gate definitions
34 # This is a dict whose values are dicts with the
35 # following keys:
36 # "print" = True or False
37 # "opaque" = True or False
38 # "n_args" = number of real parameters
39 # "n_bits" = number of qubits
40 # "args" = list of parameter names
41 # "bits" = list of qubit names
42 # "body" = GateBody AST node
43 definitions = OrderedDict()
44
45 def __init__(self, *regs, name=None):
46 """Create a new circuit.
47
48 A circuit is a list of instructions bound to some registers.
49
50 Args:
51 *regs (Registers): registers to include in the circuit.
52 name (str or None): the name of the quantum circuit. If
53 None, an automatically generated string will be assigned.
54
55 Raises:
56 QiskitError: if the circuit name, if given, is not valid.
57 """
58 if name is None:
59 name = self.cls_prefix() + str(self.cls_instances())
60 # pylint: disable=not-callable
61 # (known pylint bug: https://github.com/PyCQA/pylint/issues/1699)
62 if sys.platform != "win32" and \
63 isinstance(mp.current_process(), mp.context.ForkProcess):
64 name += '-{}'.format(mp.current_process().pid)
65 self._increment_instances()
66
67 if not isinstance(name, str):
68 raise QiskitError("The circuit name should be a string "
69 "(or None to auto-generate a name).")
70
71 self.name = name
72
73 # Data contains a list of instructions in the order they were applied.
74 self.data = []
75
76 # This is a map of registers bound to this circuit, by name.
77 self.qregs = []
78 self.cregs = []
79 self.add_register(*regs)
80
81 def __str__(self):
82 return str(self.draw(output='text'))
83
84 def __eq__(self, other):
85 # TODO: removed the DAG from this function
86 from qiskit.converters import circuit_to_dag
87 return circuit_to_dag(self) == circuit_to_dag(other)
88
89 @classmethod
90 def _increment_instances(cls):
91 cls.instances += 1
92
93 @classmethod
94 def cls_instances(cls):
95 """Return the current number of instances of this class,
96 useful for auto naming."""
97 return cls.instances
98
99 @classmethod
100 def cls_prefix(cls):
101 """Return the prefix to use for auto naming."""
102 return cls.prefix
103
104 def has_register(self, register):
105 """
106 Test if this circuit has the register r.
107
108 Args:
109 register (Register): a quantum or classical register.
110
111 Returns:
112 bool: True if the register is contained in this circuit.
113 """
114 has_reg = False
115 if (isinstance(register, QuantumRegister) and
116 register in self.qregs):
117 has_reg = True
118 elif (isinstance(register, ClassicalRegister) and
119 register in self.cregs):
120 has_reg = True
121 return has_reg
122
123 def combine(self, rhs):
124 """
125 Append rhs to self if self contains compatible registers.
126
127 Two circuits are compatible if they contain the same registers
128 or if they contain different registers with unique names. The
129 returned circuit will contain all unique registers between both
130 circuits.
131
132 Return self + rhs as a new object.
133 """
134 if isinstance(rhs, Instruction):
135 qregs = {qubit[0] for qubit in rhs.qargs}
136 cregs = {cbit[0] for cbit in rhs.cargs}
137 qc = QuantumCircuit(*qregs, *cregs)
138 qc._attach(rhs)
139 rhs = qc
140 # Check registers in LHS are compatible with RHS
141 self._check_compatible_regs(rhs)
142
143 # Make new circuit with combined registers
144 combined_qregs = deepcopy(self.qregs)
145 combined_cregs = deepcopy(self.cregs)
146
147 for element in rhs.qregs:
148 if element not in self.qregs:
149 combined_qregs.append(element)
150 for element in rhs.cregs:
151 if element not in self.cregs:
152 combined_cregs.append(element)
153 circuit = QuantumCircuit(*combined_qregs, *combined_cregs)
154 for gate in itertools.chain(self.data, rhs.data):
155 gate.reapply(circuit)
156 return circuit
157
158 def extend(self, rhs):
159 """
160 Append rhs to self if self contains compatible registers.
161
162 Two circuits are compatible if they contain the same registers
163 or if they contain different registers with unique names. The
164 returned circuit will contain all unique registers between both
165 circuits.
166
167 Modify and return self.
168 """
169 if isinstance(rhs, Instruction):
170 qregs = {qubit[0] for qubit in rhs.qargs}
171 cregs = {cbit[0] for cbit in rhs.cargs}
172 qc = QuantumCircuit(*qregs, *cregs)
173 qc._attach(rhs)
174 rhs = qc
175 # Check registers in LHS are compatible with RHS
176 self._check_compatible_regs(rhs)
177
178 # Add new registers
179 for element in rhs.qregs:
180 if element not in self.qregs:
181 self.qregs.append(element)
182 for element in rhs.cregs:
183 if element not in self.cregs:
184 self.cregs.append(element)
185
186 # Add new gates
187 for gate in rhs.data:
188 gate.reapply(self)
189 return self
190
191 def __add__(self, rhs):
192 """Overload + to implement self.combine."""
193 return self.combine(rhs)
194
195 def __iadd__(self, rhs):
196 """Overload += to implement self.extend."""
197 return self.extend(rhs)
198
199 def __len__(self):
200 """Return number of operations in circuit."""
201 return len(self.data)
202
203 def __getitem__(self, item):
204 """Return indexed operation."""
205 return self.data[item]
206
207 def _attach(self, instruction):
208 """Attach an instruction."""
209 self.data.append(instruction)
210 return instruction
211
212 def add_register(self, *regs):
213 """Add registers."""
214 for register in regs:
215 if register in self.qregs or register in self.cregs:
216 raise QiskitError("register name \"%s\" already exists"
217 % register.name)
218 if isinstance(register, QuantumRegister):
219 self.qregs.append(register)
220 elif isinstance(register, ClassicalRegister):
221 self.cregs.append(register)
222 else:
223 raise QiskitError("expected a register")
224
225 def _check_qreg(self, register):
226 """Raise exception if r is not in this circuit or not qreg."""
227 if not isinstance(register, QuantumRegister):
228 raise QiskitError("expected quantum register")
229 if not self.has_register(register):
230 raise QiskitError(
231 "register '%s' not in this circuit" %
232 register.name)
233
234 def _check_qubit(self, qubit):
235 """Raise exception if qubit is not in this circuit or bad format."""
236 if not isinstance(qubit, tuple):
237 raise QiskitError("%s is not a tuple."
238 "A qubit should be formated as a tuple." % str(qubit))
239 if not len(qubit) == 2:
240 raise QiskitError("%s is not a tuple with two elements, but %i instead" % len(qubit))
241 if not isinstance(qubit[1], int):
242 raise QiskitError("The second element of a tuple defining a qubit should be an int:"
243 "%s was found instead" % type(qubit[1]).__name__)
244 self._check_qreg(qubit[0])
245 qubit[0].check_range(qubit[1])
246
247 def _check_creg(self, register):
248 """Raise exception if r is not in this circuit or not creg."""
249 if not isinstance(register, ClassicalRegister):
250 raise QiskitError("Expected ClassicalRegister, but %s given" % type(register))
251 if not self.has_register(register):
252 raise QiskitError(
253 "register '%s' not in this circuit" %
254 register.name)
255
256 def _check_dups(self, qubits):
257 """Raise exception if list of qubits contains duplicates."""
258 squbits = set(qubits)
259 if len(squbits) != len(qubits):
260 raise QiskitError("duplicate qubit arguments")
261
262 def _check_compatible_regs(self, rhs):
263 """Raise exception if the circuits are defined on incompatible registers"""
264
265 list1 = self.qregs + self.cregs
266 list2 = rhs.qregs + rhs.cregs
267 for element1 in list1:
268 for element2 in list2:
269 if element2.name == element1.name:
270 if element1 != element2:
271 raise QiskitError("circuits are not compatible")
272
273 def _gate_string(self, name):
274 """Return a QASM string for the named gate."""
275 out = ""
276 if self.definitions[name]["opaque"]:
277 out = "opaque " + name
278 else:
279 out = "gate " + name
280 if self.definitions[name]["n_args"] > 0:
281 out += "(" + ",".join(self.definitions[name]["args"]) + ")"
282 out += " " + ",".join(self.definitions[name]["bits"])
283 if self.definitions[name]["opaque"]:
284 out += ";"
285 else:
286 out += "\n{\n" + self.definitions[name]["body"].qasm() + "}\n"
287 return out
288
289 def qasm(self):
290 """Return OPENQASM string."""
291 string_temp = self.header + "\n"
292 string_temp += self.extension_lib + "\n"
293 for register in self.qregs:
294 string_temp += register.qasm() + "\n"
295 for register in self.cregs:
296 string_temp += register.qasm() + "\n"
297 for instruction in self.data:
298 string_temp += instruction.qasm() + "\n"
299 return string_temp
300
301 def draw(self, scale=0.7, filename=None, style=None, output='text',
302 interactive=False, line_length=None, plot_barriers=True,
303 reverse_bits=False, justify=None):
304 """Draw the quantum circuit
305
306 Using the output parameter you can specify the format. The choices are:
307 0. text: ASCII art string
308 1. latex: high-quality images, but heavy external software dependencies
309 2. matplotlib: purely in Python with no external dependencies
310
311 Defaults to an overcomplete basis, in order to not alter gates.
312
313 Args:
314 scale (float): scale of image to draw (shrink if < 1)
315 filename (str): file path to save image to
316 style (dict or str): dictionary of style or file name of style
317 file. You can refer to the
318 :ref:`Style Dict Doc <style-dict-doc>` for more information
319 on the contents.
320 output (str): Select the output method to use for drawing the
321 circuit. Valid choices are `text`, `latex`, `latex_source`,
322 `mpl`.
323 interactive (bool): when set true show the circuit in a new window
324 (for `mpl` this depends on the matplotlib backend being used
325 supporting this). Note when used with either the `text` or the
326 `latex_source` output type this has no effect and will be
327 silently ignored.
328 line_length (int): sets the length of the lines generated by `text`
329 reverse_bits (bool): When set to True reverse the bit order inside
330 registers for the output visualization.
331 plot_barriers (bool): Enable/disable drawing barriers in the output
332 circuit. Defaults to True.
333 justify (string): Options are `left`, `right` or `none`, if anything
334 else is supplied it defaults to left justified. It refers to where
335 gates should be placed in the output circuit if there is an option.
336 `none` results in each gate being placed in its own column. Currently
337 only supported by text drawer.
338
339 Returns:
340 PIL.Image or matplotlib.figure or str or TextDrawing:
341 * PIL.Image: (output `latex`) an in-memory representation of the
342 image of the circuit diagram.
343 * matplotlib.figure: (output `mpl`) a matplotlib figure object
344 for the circuit diagram.
345 * str: (output `latex_source`). The LaTeX source code.
346 * TextDrawing: (output `text`). A drawing that can be printed as
347 ascii art
348
349 Raises:
350 VisualizationError: when an invalid output method is selected
351 """
352 from qiskit.tools import visualization
353 return visualization.circuit_drawer(self, scale=scale,
354 filename=filename, style=style,
355 output=output,
356 interactive=interactive,
357 line_length=line_length,
358 plot_barriers=plot_barriers,
359 reverse_bits=reverse_bits,
360 justify=justify)
361
362 def size(self):
363 """Return total number of operations in circuit."""
364 # TODO: removed the DAG from this function
365 from qiskit.converters import circuit_to_dag
366 dag = circuit_to_dag(self)
367 return dag.size()
368
369 def depth(self):
370 """Return circuit depth (i.e. length of critical path)."""
371 from qiskit.converters import circuit_to_dag
372 dag = circuit_to_dag(self)
373 return dag.depth()
374
375 def width(self):
376 """Return number of qubits in circuit."""
377 from qiskit.converters import circuit_to_dag
378 dag = circuit_to_dag(self)
379 return dag.width()
380
381 def count_ops(self):
382 """Count each operation kind in the circuit.
383
384 Returns:
385 dict: a breakdown of how many operations of each kind.
386 """
387 from qiskit.converters import circuit_to_dag
388 dag = circuit_to_dag(self)
389 return dag.count_ops()
390
391 def num_tensor_factors(self):
392 """How many non-entangled subcircuits can the circuit be factored to."""
393 from qiskit.converters import circuit_to_dag
394 dag = circuit_to_dag(self)
395 return dag.num_tensor_factors()
396
397 def copy(self, name=None):
398 """
399 Args:
400 name (str): name to be given to the copied circuit, if None then the name stays the same
401 Returns:
402 QuantumCircuit: a deepcopy of the current circuit, with the name updated if
403 it was provided
404 """
405 cpy = deepcopy(self)
406 if name:
407 cpy.name = name
408 return cpy
409
410 @staticmethod
411 def from_qasm_file(path):
412 """Take in a QASM file and generate a QuantumCircuit object.
413
414 Args:
415 path (str): Path to the file for a QASM program
416 Return:
417 QuantumCircuit: The QuantumCircuit object for the input QASM
418 """
419 qasm = _qasm.Qasm(filename=path)
420 return _circuit_from_qasm(qasm)
421
422 @staticmethod
423 def from_qasm_str(qasm_str):
424 """Take in a QASM string and generate a QuantumCircuit object.
425
426 Args:
427 qasm_str (str): A QASM program string
428 Return:
429 QuantumCircuit: The QuantumCircuit object for the input QASM
430 """
431 qasm = _qasm.Qasm(data=qasm_str)
432 return _circuit_from_qasm(qasm)
433
434
435 def _circuit_from_qasm(qasm):
436 # pylint: disable=cyclic-import
437 from qiskit.converters import ast_to_dag
438 from qiskit.converters import dag_to_circuit
439 ast = qasm.parse()
440 dag = ast_to_dag(ast)
441 return dag_to_circuit(dag)
442
[end of qiskit/circuit/quantumcircuit.py]
[start of qiskit/tools/visualization/_circuit_visualization.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 # TODO: Remove after 0.7 and the deprecated methods are removed
9 # pylint: disable=unused-argument
10
11
12 """
13 Two quantum circuit drawers based on:
14 0. Ascii art
15 1. LaTeX
16 2. Matplotlib
17 """
18
19 import errno
20 import logging
21 import os
22 import subprocess
23 import tempfile
24
25 from PIL import Image
26
27 from qiskit.tools.visualization import exceptions
28 from qiskit.tools.visualization import _latex
29 from qiskit.tools.visualization import _text
30 from qiskit.tools.visualization import _utils
31 from qiskit.tools.visualization import _matplotlib
32
33 logger = logging.getLogger(__name__)
34
35
36 def circuit_drawer(circuit,
37 scale=0.7,
38 filename=None,
39 style=None,
40 output='text',
41 interactive=False,
42 line_length=None,
43 plot_barriers=True,
44 reverse_bits=False,
45 justify=None):
46 """Draw a quantum circuit to different formats (set by output parameter):
47 0. text: ASCII art TextDrawing that can be printed in the console.
48 1. latex: high-quality images, but heavy external software dependencies
49 2. matplotlib: purely in Python with no external dependencies
50
51 Args:
52 circuit (QuantumCircuit): the quantum circuit to draw
53 scale (float): scale of image to draw (shrink if < 1)
54 filename (str): file path to save image to
55 style (dict or str): dictionary of style or file name of style file.
56 This option is only used by the `mpl`, `latex`, and `latex_source`
57 output types. If a str is passed in that is the path to a json
58 file which contains that will be open, parsed, and then used just
59 as the input dict.
60 output (TextDrawing): Select the output method to use for drawing the circuit.
61 Valid choices are `text`, `latex`, `latex_source`, `mpl`. Note if
62 one is not specified it will use latex and if that fails fallback
63 to mpl. However this behavior is deprecated and in a future release
64 the default will change.
65 interactive (bool): when set true show the circuit in a new window
66 (for `mpl` this depends on the matplotlib backend being used
67 supporting this). Note when used with either the `text` or the
68 `latex_source` output type this has no effect and will be silently
69 ignored.
70 line_length (int): Sets the length of the lines generated by `text`
71 output type. This useful when the drawing does not fit in the
72 console. If None (default), it will try to guess the console width
73 using shutil.get_terminal_size(). However, if you're running in
74 jupyter the default line length is set to 80 characters. If you
75 don't want pagination at all, set `line_length=-1`.
76 reverse_bits (bool): When set to True reverse the bit order inside
77 registers for the output visualization.
78 plot_barriers (bool): Enable/disable drawing barriers in the output
79 circuit. Defaults to True.
80 justify (string): Options are `left`, `right` or `none`, if anything
81 else is supplied it defaults to left justified. It refers to where
82 gates should be placed in the output circuit if there is an option.
83 `none` results in each gate being placed in its own column. Currently
84 only supported by text drawer.
85
86 Returns:
87 PIL.Image: (output `latex`) an in-memory representation of the image
88 of the circuit diagram.
89 matplotlib.figure: (output `mpl`) a matplotlib figure object for the
90 circuit diagram.
91 String: (output `latex_source`). The LaTeX source code.
92 TextDrawing: (output `text`). A drawing that can be printed as ascii art
93 Raises:
94 VisualizationError: when an invalid output method is selected
95 ImportError: when the output methods requieres non-installed libraries.
96
97 .. _style-dict-doc:
98
99 The style dict kwarg contains numerous options that define the style of the
100 output circuit visualization. While the style dict is used by the `mpl`,
101 `latex`, and `latex_source` outputs some options in that are only used
102 by the `mpl` output. These options are defined below, if it is only used by
103 the `mpl` output it is marked as such:
104
105 textcolor (str): The color code to use for text. Defaults to
106 `'#000000'` (`mpl` only)
107 subtextcolor (str): The color code to use for subtext. Defaults to
108 `'#000000'` (`mpl` only)
109 linecolor (str): The color code to use for lines. Defaults to
110 `'#000000'` (`mpl` only)
111 creglinecolor (str): The color code to use for classical register lines
112 `'#778899'`(`mpl` only)
113 gatetextcolor (str): The color code to use for gate text `'#000000'`
114 (`mpl` only)
115 gatefacecolor (str): The color code to use for gates. Defaults to
116 `'#ffffff'` (`mpl` only)
117 barrierfacecolor (str): The color code to use for barriers. Defaults to
118 `'#bdbdbd'` (`mpl` only)
119 backgroundcolor (str): The color code to use for the background.
120 Defaults to `'#ffffff'` (`mpl` only)
121 fontsize (int): The font size to use for text. Defaults to 13 (`mpl`
122 only)
123 subfontsize (int): The font size to use for subtext. Defaults to 8
124 (`mpl` only)
125 displaytext (dict): A dictionary of the text to use for each element
126 type in the output visualization. The default values are:
127 {
128 'id': 'id',
129 'u0': 'U_0',
130 'u1': 'U_1',
131 'u2': 'U_2',
132 'u3': 'U_3',
133 'x': 'X',
134 'y': 'Y',
135 'z': 'Z',
136 'h': 'H',
137 's': 'S',
138 'sdg': 'S^\\dagger',
139 't': 'T',
140 'tdg': 'T^\\dagger',
141 'rx': 'R_x',
142 'ry': 'R_y',
143 'rz': 'R_z',
144 'reset': '\\left|0\\right\\rangle'
145 }
146 You must specify all the necessary values if using this. There is
147 no provision for passing an incomplete dict in. (`mpl` only)
148 displaycolor (dict): The color codes to use for each circuit element.
149 By default all values default to the value of `gatefacecolor` and
150 the keys are the same as `displaytext`. Also, just like
151 `displaytext` there is no provision for an incomplete dict passed
152 in. (`mpl` only)
153 latexdrawerstyle (bool): When set to True enable latex mode which will
154 draw gates like the `latex` output modes. (`mpl` only)
155 usepiformat (bool): When set to True use radians for output (`mpl`
156 only)
157 fold (int): The number of circuit elements to fold the circuit at.
158 Defaults to 20 (`mpl` only)
159 cregbundle (bool): If set True bundle classical registers (`mpl` only)
160 showindex (bool): If set True draw an index. (`mpl` only)
161 compress (bool): If set True draw a compressed circuit (`mpl` only)
162 figwidth (int): The maximum width (in inches) for the output figure.
163 (`mpl` only)
164 dpi (int): The DPI to use for the output image. Defaults to 150 (`mpl`
165 only)
166 margin (list): `mpl` only
167 creglinestyle (str): The style of line to use for classical registers.
168 Choices are `'solid'`, `'doublet'`, or any valid matplotlib
169 `linestyle` kwarg value. Defaults to `doublet`(`mpl` only)
170 """
171 image = None
172
173 if output == 'text':
174 return _text_circuit_drawer(circuit, filename=filename,
175 line_length=line_length,
176 reversebits=reverse_bits,
177 plotbarriers=plot_barriers,
178 justify=justify)
179 elif output == 'latex':
180 image = _latex_circuit_drawer(circuit, scale=scale,
181 filename=filename, style=style,
182 plot_barriers=plot_barriers,
183 reverse_bits=reverse_bits)
184 elif output == 'latex_source':
185 return _generate_latex_source(circuit,
186 filename=filename, scale=scale,
187 style=style,
188 plot_barriers=plot_barriers,
189 reverse_bits=reverse_bits)
190 elif output == 'mpl':
191 image = _matplotlib_circuit_drawer(circuit, scale=scale,
192 filename=filename, style=style,
193 plot_barriers=plot_barriers,
194 reverse_bits=reverse_bits)
195 else:
196 raise exceptions.VisualizationError(
197 'Invalid output type %s selected. The only valid choices '
198 'are latex, latex_source, text, and mpl' % output)
199 if image and interactive:
200 image.show()
201 return image
202
203
204 # -----------------------------------------------------------------------------
205 # Plot style sheet option
206 # -----------------------------------------------------------------------------
207 def qx_color_scheme():
208 """Return default style for matplotlib_circuit_drawer (IBM QX style)."""
209 return {
210 "comment": "Style file for matplotlib_circuit_drawer (IBM QX Composer style)",
211 "textcolor": "#000000",
212 "gatetextcolor": "#000000",
213 "subtextcolor": "#000000",
214 "linecolor": "#000000",
215 "creglinecolor": "#b9b9b9",
216 "gatefacecolor": "#ffffff",
217 "barrierfacecolor": "#bdbdbd",
218 "backgroundcolor": "#ffffff",
219 "fold": 20,
220 "fontsize": 13,
221 "subfontsize": 8,
222 "figwidth": -1,
223 "dpi": 150,
224 "displaytext": {
225 "id": "id",
226 "u0": "U_0",
227 "u1": "U_1",
228 "u2": "U_2",
229 "u3": "U_3",
230 "x": "X",
231 "y": "Y",
232 "z": "Z",
233 "h": "H",
234 "s": "S",
235 "sdg": "S^\\dagger",
236 "t": "T",
237 "tdg": "T^\\dagger",
238 "rx": "R_x",
239 "ry": "R_y",
240 "rz": "R_z",
241 "reset": "\\left|0\\right\\rangle"
242 },
243 "displaycolor": {
244 "id": "#ffca64",
245 "u0": "#f69458",
246 "u1": "#f69458",
247 "u2": "#f69458",
248 "u3": "#f69458",
249 "x": "#a6ce38",
250 "y": "#a6ce38",
251 "z": "#a6ce38",
252 "h": "#00bff2",
253 "s": "#00bff2",
254 "sdg": "#00bff2",
255 "t": "#ff6666",
256 "tdg": "#ff6666",
257 "rx": "#ffca64",
258 "ry": "#ffca64",
259 "rz": "#ffca64",
260 "reset": "#d7ddda",
261 "target": "#00bff2",
262 "meas": "#f070aa"
263 },
264 "latexdrawerstyle": True,
265 "usepiformat": False,
266 "cregbundle": False,
267 "plotbarrier": False,
268 "showindex": False,
269 "compress": True,
270 "margin": [2.0, 0.0, 0.0, 0.3],
271 "creglinestyle": "solid",
272 "reversebits": False
273 }
274
275
276 # -----------------------------------------------------------------------------
277 # _text_circuit_drawer
278 # -----------------------------------------------------------------------------
279
280
281 def _text_circuit_drawer(circuit, filename=None, line_length=None, reversebits=False,
282 plotbarriers=True, justify=None):
283 """
284 Draws a circuit using ascii art.
285 Args:
286 circuit (QuantumCircuit): Input circuit
287 filename (str): optional filename to write the result
288 line_length (int): Optional. Breaks the circuit drawing to this length. This
289 useful when the drawing does not fit in the console. If
290 None (default), it will try to guess the console width using
291 shutil.get_terminal_size(). If you don't want pagination
292 at all, set line_length=-1.
293 reversebits (bool): Rearrange the bits in reverse order.
294 plotbarriers (bool): Draws the barriers when they are there.
295 justify (str) : `left`, `right` or `none`. Defaults to `left`. Says how
296 the circuit should be justified.
297 Returns:
298 TextDrawing: An instances that, when printed, draws the circuit in ascii art.
299 """
300 qregs, cregs, ops = _utils._get_layered_instructions(circuit,
301 reversebits=reversebits,
302 justify=justify)
303 text_drawing = _text.TextDrawing(qregs, cregs, ops)
304 text_drawing.plotbarriers = plotbarriers
305 text_drawing.line_length = line_length
306
307 if filename:
308 text_drawing.dump(filename)
309 return text_drawing
310
311
312 # -----------------------------------------------------------------------------
313 # latex_circuit_drawer
314 # -----------------------------------------------------------------------------
315
316
317 def _latex_circuit_drawer(circuit,
318 scale=0.7,
319 filename=None,
320 style=None,
321 plot_barriers=True,
322 reverse_bits=False):
323 """Draw a quantum circuit based on latex (Qcircuit package)
324
325 Requires version >=2.6.0 of the qcircuit LaTeX package.
326
327 Args:
328 circuit (QuantumCircuit): a quantum circuit
329 scale (float): scaling factor
330 filename (str): file path to save image to
331 style (dict or str): dictionary of style or file name of style file
332 reverse_bits (bool): When set to True reverse the bit order inside
333 registers for the output visualization.
334 plot_barriers (bool): Enable/disable drawing barriers in the output
335 circuit. Defaults to True.
336
337 Returns:
338 PIL.Image: an in-memory representation of the circuit diagram
339
340 Raises:
341 OSError: usually indicates that ```pdflatex``` or ```pdftocairo``` is
342 missing.
343 CalledProcessError: usually points errors during diagram creation.
344 """
345 tmpfilename = 'circuit'
346 with tempfile.TemporaryDirectory() as tmpdirname:
347 tmppath = os.path.join(tmpdirname, tmpfilename + '.tex')
348 _generate_latex_source(circuit, filename=tmppath,
349 scale=scale, style=style,
350 plot_barriers=plot_barriers,
351 reverse_bits=reverse_bits)
352 image = None
353 try:
354
355 subprocess.run(["pdflatex", "-halt-on-error",
356 "-output-directory={}".format(tmpdirname),
357 "{}".format(tmpfilename + '.tex')],
358 stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
359 check=True)
360 except OSError as ex:
361 if ex.errno == errno.ENOENT:
362 logger.warning('WARNING: Unable to compile latex. '
363 'Is `pdflatex` installed? '
364 'Skipping latex circuit drawing...')
365 raise
366 except subprocess.CalledProcessError as ex:
367 with open('latex_error.log', 'wb') as error_file:
368 error_file.write(ex.stdout)
369 logger.warning('WARNING Unable to compile latex. '
370 'The output from the pdflatex command can '
371 'be found in latex_error.log')
372 raise
373 else:
374 try:
375 base = os.path.join(tmpdirname, tmpfilename)
376 subprocess.run(["pdftocairo", "-singlefile", "-png", "-q",
377 base + '.pdf', base])
378 image = Image.open(base + '.png')
379 image = _utils._trim(image)
380 os.remove(base + '.png')
381 if filename:
382 image.save(filename, 'PNG')
383 except OSError as ex:
384 if ex.errno == errno.ENOENT:
385 logger.warning('WARNING: Unable to convert pdf to image. '
386 'Is `poppler` installed? '
387 'Skipping circuit drawing...')
388 raise
389 return image
390
391
392 def _generate_latex_source(circuit, filename=None,
393 scale=0.7, style=None, reverse_bits=False,
394 plot_barriers=True):
395 """Convert QuantumCircuit to LaTeX string.
396
397 Args:
398 circuit (QuantumCircuit): input circuit
399 scale (float): image scaling
400 filename (str): optional filename to write latex
401 style (dict or str): dictionary of style or file name of style file
402 reverse_bits (bool): When set to True reverse the bit order inside
403 registers for the output visualization.
404 plot_barriers (bool): Enable/disable drawing barriers in the output
405 circuit. Defaults to True.
406
407 Returns:
408 str: Latex string appropriate for writing to file.
409 """
410 qregs, cregs, ops = _utils._get_instructions(circuit,
411 reversebits=reverse_bits)
412 qcimg = _latex.QCircuitImage(qregs, cregs, ops, scale, style=style,
413 plot_barriers=plot_barriers,
414 reverse_bits=reverse_bits)
415 latex = qcimg.latex()
416 if filename:
417 with open(filename, 'w') as latex_file:
418 latex_file.write(latex)
419 return latex
420
421
422 # -----------------------------------------------------------------------------
423 # matplotlib_circuit_drawer
424 # -----------------------------------------------------------------------------
425
426
427 def _matplotlib_circuit_drawer(circuit,
428 scale=0.7,
429 filename=None,
430 style=None,
431 plot_barriers=True,
432 reverse_bits=False):
433 """Draw a quantum circuit based on matplotlib.
434 If `%matplotlib inline` is invoked in a Jupyter notebook, it visualizes a circuit inline.
435 We recommend `%config InlineBackend.figure_format = 'svg'` for the inline visualization.
436
437 Args:
438 circuit (QuantumCircuit): a quantum circuit
439 scale (float): scaling factor
440 filename (str): file path to save image to
441 style (dict or str): dictionary of style or file name of style file
442 reverse_bits (bool): When set to True reverse the bit order inside
443 registers for the output visualization.
444 plot_barriers (bool): Enable/disable drawing barriers in the output
445 circuit. Defaults to True.
446
447
448 Returns:
449 matplotlib.figure: a matplotlib figure object for the circuit diagram
450 """
451 qcd = _matplotlib.MatplotlibDrawer(scale=scale, style=style,
452 plot_barriers=plot_barriers,
453 reverse_bits=reverse_bits)
454 qcd.parse_circuit(circuit)
455 return qcd.draw(filename)
456
[end of qiskit/tools/visualization/_circuit_visualization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Qiskit/qiskit
|
dc21249a77af4eae3a9bf49f4a0b6bb24af42338
|
Incorrect shaded barriers in mpl drawer
Circuit with single qubit barriers on 2Q. The text drawn output is correct, but the mpl output is incorrect.
MPL (incorrect)
<img width="992" alt="image" src="https://user-images.githubusercontent.com/32201347/53523392-ae062380-3aaa-11e9-9c36-12f8ce2962bd.png">
Text (correct)
<img width="677" alt="image" src="https://user-images.githubusercontent.com/32201347/53523411-bfe7c680-3aaa-11e9-9716-d7cbebf71c97.png">
Qasm
'OPENQASM 2.0;\ninclude "qelib1.inc";\nqreg q7[3];\ncreg c7[2];\nu2(0.0,0.0) q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\ny q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nx q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\ny q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nx q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\ny q7[0];\nbarrier q7[0];\nid q7[0];\nbarrier q7[0];\nu2(0.0,0.0) q7[0];\nu2(0.0,0.0) q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\ny q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nx q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\ny q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nx q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\ny q7[2];\nbarrier q7[2];\nid q7[2];\nbarrier q7[2];\nu2(0.0,0.0) q7[2];\nbarrier q7[0],q7[1],q7[2];\nmeasure q7[0] -> c7[0];\nmeasure q7[2] -> c7[1];\n'
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: Latest pull from github
- **Python version**: 3.6
- **Operating system**: macOS
### What is the current behavior?
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
|
So I've been looking at this a bit more and I'm not actually convinced the `mpl` output is wrong; it's just not aligned/justified the same way as the `text` version. The confusing part here is the barrier style: the dark grey shading isn't actually the barrier, the dotted line is. To verify that I came up with this little test:
```
import qiskit as qk
qr = qk.QuantumRegister(3)
qc = qk.QuantumCircuit(qr)
qc.h([qr[0], qr[1]])
qc.barrier([qr[0], qr[1]])
qc.h(qr[2])
qc.barrier(qr[2])
qc.y(qr)
qc.draw(output='mpl')
```
Which yielded:

With that in mind, when I ran that qasm through QuantumCircuit.draw() the circuit output was correct; it just didn't left-justify everything. So the parts on q7[0] and q7[2] weren't lined up like in the text drawer. To make this a bit clearer I rendered the same qasm with the mpl backend and set the fold higher so it would put everything in a single row:

It's not as nice looking as text, but there's nothing wrong with it. FWIW, the justify option is in progress here: https://github.com/Qiskit/qiskit-terra/pull/1797
But, just out of curiosity, I took a look at the latex output from the same qasm-generated circuit and that is really wrong on several fronts:

So the latex issue is that it apparently doesn't know how to draw iden gates (which is where those double barriers come from; the gap is where an iden gate should be). That'll be easy to fix and I'll push a patch in a second. The horizontal spacing on the barriers will be a bit trickier.
|
2019-02-28T19:13:26Z
|
<patch>
diff --git a/qiskit/tools/visualization/_latex.py b/qiskit/tools/visualization/_latex.py
--- a/qiskit/tools/visualization/_latex.py
+++ b/qiskit/tools/visualization/_latex.py
@@ -225,7 +225,7 @@ def _get_image_depth(self, aliases=None):
# useful information for determining row spacing
boxed_gates = ['u0', 'u1', 'u2', 'u3', 'x', 'y', 'z', 'h', 's',
'sdg', 't', 'tdg', 'rx', 'ry', 'rz', 'ch', 'cy',
- 'crz', 'cu3']
+ 'crz', 'cu3', 'id']
target_gates = ['cx', 'ccx']
if op['name'] in boxed_gates:
self.has_box = True
@@ -546,6 +546,8 @@ def _build_latex_array(self, aliases=None):
self._latex[pos_1][columns] = "\\gate{Z}"
elif nm == "h":
self._latex[pos_1][columns] = "\\gate{H}"
+ elif nm == "id":
+ self._latex[pos_1][columns] = "\\gate{Id}"
elif nm == "s":
self._latex[pos_1][columns] = "\\gate{S}"
elif nm == "sdg":
@@ -606,6 +608,8 @@ def _build_latex_array(self, aliases=None):
self._latex[pos_1][columns] = "\\gate{Z}"
elif nm == "h":
self._latex[pos_1][columns] = "\\gate{H}"
+ elif nm == "id":
+ self._latex[pos_1][columns] = "\\gate{Id}"
elif nm == "s":
self._latex[pos_1][columns] = "\\gate{S}"
elif nm == "sdg":
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-39239
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: Numpy ufuncs e.g. np.[op](df1, df2) aligns columns in pandas 1.2.0 where it did not before
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
df = pd.DataFrame({k: [1,2,3,4,5] for k in 'abcd'})
np.add(df[['a', 'b']], df[['c', 'd']])
```
#### Problem description
This is a regression from pandas 1.1.5 (both versions are using numpy 1.19.5).
Normally, if we want to add, subtract, multiply or divide df columns with different names, we get NaNs because the column names don't match. E.g.:
```python
>>> df[['a', 'b']] + df[['c', 'd']]
a b c d
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
```
To get around this, we would use np.[op](df1, df2).
However, we get the same output as above.
```python
>>> np.add(df[['a', 'b']], df[['c', 'd']])
a b c d
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
```
#### Expected Output
```python
# Using pandas 1.1.5:
>>> np.add(df[['a', 'b']], df[['c', 'd']])
a b
0 2 2
1 4 4
2 6 6
3 8 8
4 10 10
```
#### Temporary solution
```python
# This may have a potential copy penalty with the conversion to numpy
>>> df[['a', 'b']] + df[['c', 'd']].to_numpy()
a b
0 2 2
1 4 4
2 6 6
3 8 8
4 10 10
```
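Another way to sidestep the label alignment (an illustrative workaround, not from the original report) is to relabel one operand so the columns match before operating, which avoids the conversion to numpy:
```python
import pandas as pd

df = pd.DataFrame({k: [1, 2, 3, 4, 5] for k in "abcd"})

left = df[["a", "b"]]
# relabel the right-hand frame so its columns line up with the left-hand one
right = df[["c", "d"]].set_axis(["a", "b"], axis=1)

print(left + right)  # elementwise sums, no NaNs from misaligned labels
```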
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 3e89b4c4b1580aa890023fc550774e63d499da25
python : 3.9.1.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19041
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 1.2.0
numpy : 1.19.5
pytz : 2020.5
dateutil : 2.8.1
pip : 20.3.3
setuptools : 49.6.0.post20210108
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.19.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
Just my 2 cents: I was more than willing to test this on a nightly / master release, but it doesn't appear you release those. It could be quite beneficial to publish nightlies to PyPI so we don't report issues that have already been fixed. For some, it might be easier to test a nightly than peruse recent open and closed issues.
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 Package Index (PyPI)](https://pypi.org/project/pandas) and on [Conda](https://docs.conda.io/en/latest/).
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy - Adds support for large, multi-dimensional arrays, matrices and high-level mathematical functions to operate on these arrays](https://www.numpy.org)
104 - [python-dateutil - Provides powerful extensions to the standard datetime module](https://labix.org/python-dateutil)
105 - [pytz - Brings the Olson tz database into Python which allows accurate and cross platform timezone calculations](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need [Cython](https://cython.org/) in addition to the normal
111 dependencies above. Cython can be installed from PyPI:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at [AQR](https://www.aqr.com/) (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on GitHub in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/core/ops/docstrings.py]
1 """
2 Templating for ops docstrings
3 """
4 from typing import Dict, Optional
5
6
7 def make_flex_doc(op_name: str, typ: str) -> str:
8 """
9 Make the appropriate substitutions for the given operation and class-typ
10 into either _flex_doc_SERIES or _flex_doc_FRAME to return the docstring
11 to attach to a generated method.
12
13 Parameters
14 ----------
15 op_name : str {'__add__', '__sub__', ... '__eq__', '__ne__', ...}
16 typ : str {'series', 'dataframe'}
17
18 Returns
19 -------
20 doc : str
21 """
22 op_name = op_name.replace("__", "")
23 op_desc = _op_descriptions[op_name]
24
25 op_desc_op = op_desc["op"]
26 assert op_desc_op is not None # for mypy
27 if op_name.startswith("r"):
28 equiv = "other " + op_desc_op + " " + typ
29 elif op_name == "divmod":
30 equiv = f"{op_name}({typ}, other)"
31 else:
32 equiv = typ + " " + op_desc_op + " other"
33
34 if typ == "series":
35 base_doc = _flex_doc_SERIES
36 if op_desc["reverse"]:
37 base_doc += _see_also_reverse_SERIES.format(
38 reverse=op_desc["reverse"], see_also_desc=op_desc["see_also_desc"]
39 )
40 doc_no_examples = base_doc.format(
41 desc=op_desc["desc"],
42 op_name=op_name,
43 equiv=equiv,
44 series_returns=op_desc["series_returns"],
45 )
46 ser_example = op_desc["series_examples"]
47 if ser_example:
48 doc = doc_no_examples + ser_example
49 else:
50 doc = doc_no_examples
51 elif typ == "dataframe":
52 base_doc = _flex_doc_FRAME
53 doc = base_doc.format(
54 desc=op_desc["desc"],
55 op_name=op_name,
56 equiv=equiv,
57 reverse=op_desc["reverse"],
58 )
59 else:
60 raise AssertionError("Invalid typ argument.")
61 return doc
62
63
64 _common_examples_algebra_SERIES = """
65 Examples
66 --------
67 >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
68 >>> a
69 a 1.0
70 b 1.0
71 c 1.0
72 d NaN
73 dtype: float64
74 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
75 >>> b
76 a 1.0
77 b NaN
78 d 1.0
79 e NaN
80 dtype: float64"""
81
82 _common_examples_comparison_SERIES = """
83 Examples
84 --------
85 >>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
86 >>> a
87 a 1.0
88 b 1.0
89 c 1.0
90 d NaN
91 e 1.0
92 dtype: float64
93 >>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
94 >>> b
95 a 0.0
96 b 1.0
97 c 2.0
98 d NaN
99 f 1.0
100 dtype: float64"""
101
102 _add_example_SERIES = (
103 _common_examples_algebra_SERIES
104 + """
105 >>> a.add(b, fill_value=0)
106 a 2.0
107 b 1.0
108 c 1.0
109 d 1.0
110 e NaN
111 dtype: float64
112 """
113 )
114
115 _sub_example_SERIES = (
116 _common_examples_algebra_SERIES
117 + """
118 >>> a.subtract(b, fill_value=0)
119 a 0.0
120 b 1.0
121 c 1.0
122 d -1.0
123 e NaN
124 dtype: float64
125 """
126 )
127
128 _mul_example_SERIES = (
129 _common_examples_algebra_SERIES
130 + """
131 >>> a.multiply(b, fill_value=0)
132 a 1.0
133 b 0.0
134 c 0.0
135 d 0.0
136 e NaN
137 dtype: float64
138 """
139 )
140
141 _div_example_SERIES = (
142 _common_examples_algebra_SERIES
143 + """
144 >>> a.divide(b, fill_value=0)
145 a 1.0
146 b inf
147 c inf
148 d 0.0
149 e NaN
150 dtype: float64
151 """
152 )
153
154 _floordiv_example_SERIES = (
155 _common_examples_algebra_SERIES
156 + """
157 >>> a.floordiv(b, fill_value=0)
158 a 1.0
159 b NaN
160 c NaN
161 d 0.0
162 e NaN
163 dtype: float64
164 """
165 )
166
167 _divmod_example_SERIES = (
168 _common_examples_algebra_SERIES
169 + """
170 >>> a.divmod(b, fill_value=0)
171 (a 1.0
172 b NaN
173 c NaN
174 d 0.0
175 e NaN
176 dtype: float64,
177 a 0.0
178 b NaN
179 c NaN
180 d 0.0
181 e NaN
182 dtype: float64)
183 """
184 )
185
186 _mod_example_SERIES = (
187 _common_examples_algebra_SERIES
188 + """
189 >>> a.mod(b, fill_value=0)
190 a 0.0
191 b NaN
192 c NaN
193 d 0.0
194 e NaN
195 dtype: float64
196 """
197 )
198 _pow_example_SERIES = (
199 _common_examples_algebra_SERIES
200 + """
201 >>> a.pow(b, fill_value=0)
202 a 1.0
203 b 1.0
204 c 1.0
205 d 0.0
206 e NaN
207 dtype: float64
208 """
209 )
210
211 _ne_example_SERIES = (
212 _common_examples_algebra_SERIES
213 + """
214 >>> a.ne(b, fill_value=0)
215 a False
216 b True
217 c True
218 d True
219 e True
220 dtype: bool
221 """
222 )
223
224 _eq_example_SERIES = (
225 _common_examples_algebra_SERIES
226 + """
227 >>> a.eq(b, fill_value=0)
228 a True
229 b False
230 c False
231 d False
232 e False
233 dtype: bool
234 """
235 )
236
237 _lt_example_SERIES = (
238 _common_examples_comparison_SERIES
239 + """
240 >>> a.lt(b, fill_value=0)
241 a False
242 b False
243 c True
244 d False
245 e False
246 f True
247 dtype: bool
248 """
249 )
250
251 _le_example_SERIES = (
252 _common_examples_comparison_SERIES
253 + """
254 >>> a.le(b, fill_value=0)
255 a False
256 b True
257 c True
258 d False
259 e False
260 f True
261 dtype: bool
262 """
263 )
264
265 _gt_example_SERIES = (
266 _common_examples_comparison_SERIES
267 + """
268 >>> a.gt(b, fill_value=0)
269 a True
270 b False
271 c False
272 d False
273 e True
274 f False
275 dtype: bool
276 """
277 )
278
279 _ge_example_SERIES = (
280 _common_examples_comparison_SERIES
281 + """
282 >>> a.ge(b, fill_value=0)
283 a True
284 b True
285 c False
286 d False
287 e True
288 f False
289 dtype: bool
290 """
291 )
292
293 _returns_series = """Series\n The result of the operation."""
294
295 _returns_tuple = """2-Tuple of Series\n The result of the operation."""
296
297 _op_descriptions: Dict[str, Dict[str, Optional[str]]] = {
298 # Arithmetic Operators
299 "add": {
300 "op": "+",
301 "desc": "Addition",
302 "reverse": "radd",
303 "series_examples": _add_example_SERIES,
304 "series_returns": _returns_series,
305 },
306 "sub": {
307 "op": "-",
308 "desc": "Subtraction",
309 "reverse": "rsub",
310 "series_examples": _sub_example_SERIES,
311 "series_returns": _returns_series,
312 },
313 "mul": {
314 "op": "*",
315 "desc": "Multiplication",
316 "reverse": "rmul",
317 "series_examples": _mul_example_SERIES,
318 "series_returns": _returns_series,
319 "df_examples": None,
320 },
321 "mod": {
322 "op": "%",
323 "desc": "Modulo",
324 "reverse": "rmod",
325 "series_examples": _mod_example_SERIES,
326 "series_returns": _returns_series,
327 },
328 "pow": {
329 "op": "**",
330 "desc": "Exponential power",
331 "reverse": "rpow",
332 "series_examples": _pow_example_SERIES,
333 "series_returns": _returns_series,
334 "df_examples": None,
335 },
336 "truediv": {
337 "op": "/",
338 "desc": "Floating division",
339 "reverse": "rtruediv",
340 "series_examples": _div_example_SERIES,
341 "series_returns": _returns_series,
342 "df_examples": None,
343 },
344 "floordiv": {
345 "op": "//",
346 "desc": "Integer division",
347 "reverse": "rfloordiv",
348 "series_examples": _floordiv_example_SERIES,
349 "series_returns": _returns_series,
350 "df_examples": None,
351 },
352 "divmod": {
353 "op": "divmod",
354 "desc": "Integer division and modulo",
355 "reverse": "rdivmod",
356 "series_examples": _divmod_example_SERIES,
357 "series_returns": _returns_tuple,
358 "df_examples": None,
359 },
360 # Comparison Operators
361 "eq": {
362 "op": "==",
363 "desc": "Equal to",
364 "reverse": None,
365 "series_examples": _eq_example_SERIES,
366 "series_returns": _returns_series,
367 },
368 "ne": {
369 "op": "!=",
370 "desc": "Not equal to",
371 "reverse": None,
372 "series_examples": _ne_example_SERIES,
373 "series_returns": _returns_series,
374 },
375 "lt": {
376 "op": "<",
377 "desc": "Less than",
378 "reverse": None,
379 "series_examples": _lt_example_SERIES,
380 "series_returns": _returns_series,
381 },
382 "le": {
383 "op": "<=",
384 "desc": "Less than or equal to",
385 "reverse": None,
386 "series_examples": _le_example_SERIES,
387 "series_returns": _returns_series,
388 },
389 "gt": {
390 "op": ">",
391 "desc": "Greater than",
392 "reverse": None,
393 "series_examples": _gt_example_SERIES,
394 "series_returns": _returns_series,
395 },
396 "ge": {
397 "op": ">=",
398 "desc": "Greater than or equal to",
399 "reverse": None,
400 "series_examples": _ge_example_SERIES,
401 "series_returns": _returns_series,
402 },
403 }
404
405 _py_num_ref = """see
406 `Python documentation
407 <https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types>`_
408 for more details"""
409 _op_names = list(_op_descriptions.keys())
410 for key in _op_names:
411 reverse_op = _op_descriptions[key]["reverse"]
412 if reverse_op is not None:
413 _op_descriptions[reverse_op] = _op_descriptions[key].copy()
414 _op_descriptions[reverse_op]["reverse"] = key
415 _op_descriptions[key][
416 "see_also_desc"
417 ] = f"Reverse of the {_op_descriptions[key]['desc']} operator, {_py_num_ref}"
418 _op_descriptions[reverse_op][
419 "see_also_desc"
420 ] = f"Element-wise {_op_descriptions[key]['desc']}, {_py_num_ref}"
421
422 _flex_doc_SERIES = """
423 Return {desc} of series and other, element-wise (binary operator `{op_name}`).
424
425 Equivalent to ``{equiv}``, but with support to substitute a fill_value for
426 missing data in either one of the inputs.
427
428 Parameters
429 ----------
430 other : Series or scalar value
431 fill_value : None or float value, default None (NaN)
432 Fill existing missing (NaN) values, and any new element needed for
433 successful Series alignment, with this value before computation.
434 If data in both corresponding Series locations is missing
435 the result of filling (at that location) will be missing.
436 level : int or name
437 Broadcast across a level, matching Index values on the
438 passed MultiIndex level.
439
440 Returns
441 -------
442 {series_returns}
443 """
444
445 _see_also_reverse_SERIES = """
446 See Also
447 --------
448 Series.{reverse} : {see_also_desc}.
449 """
450
451 _flex_doc_FRAME = """
452 Get {desc} of dataframe and other, element-wise (binary operator `{op_name}`).
453
454 Equivalent to ``{equiv}``, but with support to substitute a fill_value
455 for missing data in one of the inputs. With reverse version, `{reverse}`.
456
457 Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
458 arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
459
460 Parameters
461 ----------
462 other : scalar, sequence, Series, or DataFrame
463 Any single or multiple element data structure, or list-like object.
464 axis : {{0 or 'index', 1 or 'columns'}}
465 Whether to compare by the index (0 or 'index') or columns
466 (1 or 'columns'). For Series input, axis to match Series index on.
467 level : int or label
468 Broadcast across a level, matching Index values on the
469 passed MultiIndex level.
470 fill_value : float or None, default None
471 Fill existing missing (NaN) values, and any new element needed for
472 successful DataFrame alignment, with this value before computation.
473 If data in both corresponding DataFrame locations is missing
474 the result will be missing.
475
476 Returns
477 -------
478 DataFrame
479 Result of the arithmetic operation.
480
481 See Also
482 --------
483 DataFrame.add : Add DataFrames.
484 DataFrame.sub : Subtract DataFrames.
485 DataFrame.mul : Multiply DataFrames.
486 DataFrame.div : Divide DataFrames (float division).
487 DataFrame.truediv : Divide DataFrames (float division).
488 DataFrame.floordiv : Divide DataFrames (integer division).
489 DataFrame.mod : Calculate modulo (remainder after division).
490 DataFrame.pow : Calculate exponential power.
491
492 Notes
493 -----
494 Mismatched indices will be unioned together.
495
496 Examples
497 --------
498 >>> df = pd.DataFrame({{'angles': [0, 3, 4],
499 ... 'degrees': [360, 180, 360]}},
500 ... index=['circle', 'triangle', 'rectangle'])
501 >>> df
502 angles degrees
503 circle 0 360
504 triangle 3 180
505 rectangle 4 360
506
507 Add a scalar with operator version which returns the same
508 results.
509
510 >>> df + 1
511 angles degrees
512 circle 1 361
513 triangle 4 181
514 rectangle 5 361
515
516 >>> df.add(1)
517 angles degrees
518 circle 1 361
519 triangle 4 181
520 rectangle 5 361
521
522 Divide by constant with reverse version.
523
524 >>> df.div(10)
525 angles degrees
526 circle 0.0 36.0
527 triangle 0.3 18.0
528 rectangle 0.4 36.0
529
530 >>> df.rdiv(10)
531 angles degrees
532 circle inf 0.027778
533 triangle 3.333333 0.055556
534 rectangle 2.500000 0.027778
535
536 Subtract a list and Series by axis with operator version.
537
538 >>> df - [1, 2]
539 angles degrees
540 circle -1 358
541 triangle 2 178
542 rectangle 3 358
543
544 >>> df.sub([1, 2], axis='columns')
545 angles degrees
546 circle -1 358
547 triangle 2 178
548 rectangle 3 358
549
550 >>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
551 ... axis='index')
552 angles degrees
553 circle -1 359
554 triangle 2 179
555 rectangle 3 359
556
557 Multiply a DataFrame of different shape with operator version.
558
559 >>> other = pd.DataFrame({{'angles': [0, 3, 4]}},
560 ... index=['circle', 'triangle', 'rectangle'])
561 >>> other
562 angles
563 circle 0
564 triangle 3
565 rectangle 4
566
567 >>> df * other
568 angles degrees
569 circle 0 NaN
570 triangle 9 NaN
571 rectangle 16 NaN
572
573 >>> df.mul(other, fill_value=0)
574 angles degrees
575 circle 0 0.0
576 triangle 9 0.0
577 rectangle 16 0.0
578
579 Divide by a MultiIndex by level.
580
581 >>> df_multindex = pd.DataFrame({{'angles': [0, 3, 4, 4, 5, 6],
582 ... 'degrees': [360, 180, 360, 360, 540, 720]}},
583 ... index=[['A', 'A', 'A', 'B', 'B', 'B'],
584 ... ['circle', 'triangle', 'rectangle',
585 ... 'square', 'pentagon', 'hexagon']])
586 >>> df_multindex
587 angles degrees
588 A circle 0 360
589 triangle 3 180
590 rectangle 4 360
591 B square 4 360
592 pentagon 5 540
593 hexagon 6 720
594
595 >>> df.div(df_multindex, level=1, fill_value=0)
596 angles degrees
597 A circle NaN 1.0
598 triangle 1.0 1.0
599 rectangle 1.0 1.0
600 B square 0.0 0.0
601 pentagon 0.0 0.0
602 hexagon 0.0 0.0
603 """
604
605 _flex_comp_doc_FRAME = """
606 Get {desc} of dataframe and other, element-wise (binary operator `{op_name}`).
607
608 Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
609 operators.
610
611 Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
612 (rows or columns) and level for comparison.
613
614 Parameters
615 ----------
616 other : scalar, sequence, Series, or DataFrame
617 Any single or multiple element data structure, or list-like object.
618 axis : {{0 or 'index', 1 or 'columns'}}, default 'columns'
619 Whether to compare by the index (0 or 'index') or columns
620 (1 or 'columns').
621 level : int or label
622 Broadcast across a level, matching Index values on the passed
623 MultiIndex level.
624
625 Returns
626 -------
627 DataFrame of bool
628 Result of the comparison.
629
630 See Also
631 --------
632 DataFrame.eq : Compare DataFrames for equality elementwise.
633 DataFrame.ne : Compare DataFrames for inequality elementwise.
634 DataFrame.le : Compare DataFrames for less than inequality
635 or equality elementwise.
636 DataFrame.lt : Compare DataFrames for strictly less than
637 inequality elementwise.
638 DataFrame.ge : Compare DataFrames for greater than inequality
639 or equality elementwise.
640 DataFrame.gt : Compare DataFrames for strictly greater than
641 inequality elementwise.
642
643 Notes
644 -----
645 Mismatched indices will be unioned together.
646 `NaN` values are considered different (i.e. `NaN` != `NaN`).
647
648 Examples
649 --------
650 >>> df = pd.DataFrame({{'cost': [250, 150, 100],
651 ... 'revenue': [100, 250, 300]}},
652 ... index=['A', 'B', 'C'])
653 >>> df
654 cost revenue
655 A 250 100
656 B 150 250
657 C 100 300
658
659 Comparison with a scalar, using either the operator or method:
660
661 >>> df == 100
662 cost revenue
663 A False True
664 B False False
665 C True False
666
667 >>> df.eq(100)
668 cost revenue
669 A False True
670 B False False
671 C True False
672
673 When `other` is a :class:`Series`, the columns of a DataFrame are aligned
674 with the index of `other` and broadcast:
675
676 >>> df != pd.Series([100, 250], index=["cost", "revenue"])
677 cost revenue
678 A True True
679 B True False
680 C False True
681
682 Use the method to control the broadcast axis:
683
684 >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
685 cost revenue
686 A True False
687 B True True
688 C True True
689 D True True
690
691 When comparing to an arbitrary sequence, the number of columns must
692 match the number of elements in `other`:
693
694 >>> df == [250, 100]
695 cost revenue
696 A True True
697 B False False
698 C False False
699
700 Use the method to control the axis:
701
702 >>> df.eq([250, 250, 100], axis='index')
703 cost revenue
704 A True False
705 B False True
706 C True False
707
708 Compare to a DataFrame of different shape.
709
710 >>> other = pd.DataFrame({{'revenue': [300, 250, 100, 150]}},
711 ... index=['A', 'B', 'C', 'D'])
712 >>> other
713 revenue
714 A 300
715 B 250
716 C 100
717 D 150
718
719 >>> df.gt(other)
720 cost revenue
721 A False False
722 B False False
723 C False True
724 D False False
725
726 Compare to a MultiIndex by level.
727
728 >>> df_multindex = pd.DataFrame({{'cost': [250, 150, 100, 150, 300, 220],
729 ... 'revenue': [100, 250, 300, 200, 175, 225]}},
730 ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
731 ... ['A', 'B', 'C', 'A', 'B', 'C']])
732 >>> df_multindex
733 cost revenue
734 Q1 A 250 100
735 B 150 250
736 C 100 300
737 Q2 A 150 200
738 B 300 175
739 C 220 225
740
741 >>> df.le(df_multindex, level=1)
742 cost revenue
743 Q1 A True True
744 B True True
745 C True True
746 Q2 A False True
747 B True False
748 C True False
749 """
750
[end of pandas/core/ops/docstrings.py]
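As a rough usage illustration (an editorial sketch, not part of the file above; `pandas.core.ops.docstrings` is an internal module, not public API), the templated docstrings can be rendered like this:

```python
# Hedged sketch: render one templated docstring from the module listed above.
from pandas.core.ops.docstrings import make_flex_doc

doc = make_flex_doc("add", "series")        # fills _flex_doc_SERIES plus the Series examples
rdoc = make_flex_doc("__radd__", "series")  # dunder underscores are stripped; reverse ops work too

# First non-empty line of the rendered docstring:
print(doc.splitlines()[1])
# Return Addition of series and other, element-wise (binary operator `add`).
```

In pandas itself the returned string is attached as the docstring of the dynamically generated flex methods (for example `Series.add`), which is why this module only builds text.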
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
eb17d3df4b8c036fdd9e8bc9450413864ab2cf13
|
BUG: Numpy ufuncs e.g. np.[op](df1, df2) aligns columns in pandas 1.2.0 where it did not before
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
df = pd.DataFrame({k: [1,2,3,4,5] for k in 'abcd'})
np.add(df[['a', 'b']], df[['c', 'd']])
```
#### Problem description
This is a regression from pandas 1.1.5 (both versions are using numpy 1.19.5).
Normally if we want to add, subtract, multiply or divide df columns with different names we get NaNs because the column names don't match. E.g:
```python
>>> df[['a', 'b']] + df[['c', 'd']]
a b c d
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
```
To get around this, we would use np.[op](df1, df2).
However, we get the same output as above.
```python
>>> np.add(df[['a', 'b']], df[['c', 'd']])
a b c d
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
```
#### Expected Output
```python
# Using pandas 1.1.5:
>>> np.add(df[['a', 'b']], df[['c', 'd']])
a b
0 2 2
1 4 4
2 6 6
3 8 8
4 10 10
```
#### Temporary solution
```python
# This may have a potential copy penalty with the conversion to numpy
>>> df[['a', 'b']] + df[['c', 'd']].to_numpy()
a b
0 2 2
1 4 4
2 6 6
3 8 8
4 10 10
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 3e89b4c4b1580aa890023fc550774e63d499da25
python : 3.9.1.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19041
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 1.2.0
numpy : 1.19.5
pytz : 2020.5
dateutil : 2.8.1
pip : 20.3.3
setuptools : 49.6.0.post20210108
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.19.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
Just my 2 cents: I was more than willing to test this on a nightly / master release, but it doesn't appear you release those. It could be quite beneficial to publish nightlies to PyPI so we don't report issues that have already been fixed. For some, it might be easier to test a nightly than peruse recent open and closed issues.
|
@WhistleWhileYouWork thanks for the report.
We "knowingly" changed this behaviour. At least, I commented about this consequence at https://github.com/pandas-dev/pandas/pull/36955#issuecomment-735684445, but I didn't follow-up myself to ensure we did it with a deprecation warning instead of breaking change.
We could still do that in 1.2.1, I suppose.
cc @TomAugspurger
> Just my 2 cents: I was more than willing to test this on a nightly / master release, but it doesn't appear you release those. It could be quite beneficial to publish nightlies to PyPl so we don't report issues that have already been fixed. For some, it might be easier to test a nightly than peruse recent open and closed issues.
Note that there was a release candidate that could be tested (https://mail.python.org/pipermail/pandas-dev/2020-December/001311.html), and we actually do have nightly packages available to install (https://anaconda.org/scipy-wheels-nightly/pandas), but this might not be very well documented in our installation guide ..
> This may have a potential copy penalty with the conversion to numpy
There won't be any penalty over the old behavior. pandas would have done the conversion anyway.
> We could still do that in 1.2.1, I suppose.
It is unfortunate that that slipped through. We're all too busy :/
If we're able to restore the old behavior and get the warning in *soon*, that'd be OK. But the longer we have the new / future behavior on 1.2.x, the more I think we should just keep it, since people will start relying on the new behavior (alignment).
Understood, what is the standard accepted practice for accomplishing the previous behavior? Is it:
```python
df[['a', 'b']] + df[['c', 'd']].to_numpy()
```
I see the previous comment states:
> There won't be any penalty over the old behavior.
So that's good news since my main reason for using np ufunc syntax was to avoid any possible penalties.
It would be nice to have the recommended replacement syntax along with the note in the `Release Notes` about the change in addition to being included with any warnings in code.
> we actually do have nightly packages available to install
Ah, thanks. I only checked PyPI. I could not find any reference to the nightly conda packages in any of the usual locations. The `README.md` would be the most logical place to mention it in my opinion.
Thanks all!
> It would be nice to have the recommended replacement syntax along with the note in the Release Notes about the change in addition to being included with any warnings in code.
Right. The warning and the release notes were to recommend `.to_numpy()` on one of the inputs if you want to avoid alignment (the old / deprecated behavior).
The warning should only occur when they aren't already aligned.
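To make that concrete, here is a small sketch (an editorial addition, not from the thread) contrasting the label-aligned result with the positional result obtained by converting one operand via `.to_numpy()`; only public pandas/NumPy APIs are assumed:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({k: [1, 2, 3, 4, 5] for k in "abcd"})
left, right = df[["a", "b"]], df[["c", "d"]]

# Label alignment: the column sets differ, so every column comes back NaN.
aligned = left + right

# Positional (pre-1.2 ufunc) behaviour: strip the labels from one operand.
positional = np.add(left, right.to_numpy())

print(aligned)
print(positional)
```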
> We could still do that in 1.2.1, I suppose.
in the release notes we did have
> Calling a binary-input NumPy ufunc on multiple ``DataFrame`` objects now aligns, matching the behavior of binary operations and ufuncs on ``Series`` (:issue:`23743`).
so have advertised the change
> in the release notes we did have
>
> > Calling a binary-input NumPy ufunc on multiple `DataFrame` objects now aligns, matching the behavior of binary operations and ufuncs on `Series` (:issue:`23743`).
>
> so have advertised the change
The change was advertised but it is a breaking change, so the suggestion was to walk it back and start with a deprecation warning.
It's possible that my use case was an edge case and there may not be enough users who were impacted. I can certainly live with it now that I know it was an intentional change. At the very least, I suggest guidance in the release notes on the recommended approach to avoiding alignment for those of us who were using NumPy ufuncs specifically to avoid alignment. Something like this:
- Calling a binary-input NumPy ufunc on multiple `DataFrame` objects now aligns, matching the behavior of binary operations and ufuncs on `Series` (:issue:`23743`). _To avoid alignment, convert one of the inputs to a NumPy array (e.g. `df.to_numpy()`)._
> The change was advertised but it is a breaking change, so the suggestion was to walk it back and start with a deprecation warning.
we do allow breaking changes under certain circumstances. If we ascertain that the change meets those requirements we could maybe move the release note in 1.2.0 from the other enhancements section with a small section in the breaking API section (with requested guidance). We would then have a brief note in 1.2.1 release notes (other section) that the 1.2.0 release notes have been updated.
I think the inconsistency between Series and DataFrame behaviors may be enough to justify the breaking change.
I will check how easy it is to put a deprecation in place.
|
2021-01-17T20:15:01Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -286,6 +286,8 @@ Other enhancements
- Added methods :meth:`IntegerArray.prod`, :meth:`IntegerArray.min`, and :meth:`IntegerArray.max` (:issue:`33790`)
- Calling a NumPy ufunc on a ``DataFrame`` with extension types now preserves the extension types when possible (:issue:`23743`)
- Calling a binary-input NumPy ufunc on multiple ``DataFrame`` objects now aligns, matching the behavior of binary operations and ufuncs on ``Series`` (:issue:`23743`).
+ This change has been reverted in pandas 1.2.1, and the behaviour to not align DataFrames
+ is deprecated instead, see the :ref:`the 1.2.1 release notes <whatsnew_121.ufunc_deprecation>`.
- Where possible :meth:`RangeIndex.difference` and :meth:`RangeIndex.symmetric_difference` will return :class:`RangeIndex` instead of :class:`Int64Index` (:issue:`36564`)
- :meth:`DataFrame.to_parquet` now supports :class:`MultiIndex` for columns in parquet format (:issue:`34777`)
- :func:`read_parquet` gained a ``use_nullable_dtypes=True`` option to use nullable dtypes that use ``pd.NA`` as missing value indicator where possible for the resulting DataFrame (default is ``False``, and only applicable for ``engine="pyarrow"``) (:issue:`31242`)
@@ -536,6 +538,14 @@ Deprecations
- The ``inplace`` parameter of :meth:`Categorical.remove_unused_categories` is deprecated and will be removed in a future version (:issue:`37643`)
- The ``null_counts`` parameter of :meth:`DataFrame.info` is deprecated and replaced by ``show_counts``. It will be removed in a future version (:issue:`37999`)
+**Calling NumPy ufuncs on non-aligned DataFrames**
+
+Calling NumPy ufuncs on non-aligned DataFrames changed behaviour in pandas
+1.2.0 (to align the inputs before calling the ufunc), but this change is
+reverted in pandas 1.2.1. The behaviour to not align is now deprecated instead,
+see the :ref:`the 1.2.1 release notes <whatsnew_121.ufunc_deprecation>` for
+more details.
+
.. ---------------------------------------------------------------------------
diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -1,6 +1,6 @@
.. _whatsnew_121:
-What's new in 1.2.1 (January 18, 2021)
+What's new in 1.2.1 (January 20, 2021)
--------------------------------------
These are the changes in pandas 1.2.1. See :ref:`release` for a full changelog
@@ -42,6 +42,79 @@ As a result, bugs reported as fixed in pandas 1.2.0 related to inconsistent tick
.. ---------------------------------------------------------------------------
+.. _whatsnew_121.ufunc_deprecation:
+
+Calling NumPy ufuncs on non-aligned DataFrames
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before pandas 1.2.0, calling a NumPy ufunc on non-aligned DataFrames (or
+DataFrame / Series combination) would ignore the indices, only match
+the inputs by shape, and use the index/columns of the first DataFrame for
+the result:
+
+.. code-block:: python
+
+ >>> df1 = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, index=[0, 1])
+ ... df2 = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, index=[1, 2])
+ >>> df1
+ a b
+ 0 1 3
+ 1 2 4
+ >>> df2
+ a b
+ 1 1 3
+ 2 2 4
+
+ >>> np.add(df1, df2)
+ a b
+ 0 2 6
+ 1 4 8
+
+This contrasts with how other pandas operations work, which first align
+the inputs:
+
+.. code-block:: python
+
+ >>> df1 + df2
+ a b
+ 0 NaN NaN
+ 1 3.0 7.0
+ 2 NaN NaN
+
+In pandas 1.2.0, we refactored how NumPy ufuncs are called on DataFrames, and
+this started to align the inputs first (:issue:`39184`), as happens in other
+pandas operations and as it happens for ufuncs called on Series objects.
+
+For pandas 1.2.1, we restored the previous behaviour to avoid a breaking
+change, but the above example of ``np.add(df1, df2)`` with non-aligned inputs
+will now to raise a warning, and a future pandas 2.0 release will start
+aligning the inputs first (:issue:`39184`). Calling a NumPy ufunc on Series
+objects (eg ``np.add(s1, s2)``) already aligns and continues to do so.
+
+To avoid the warning and keep the current behaviour of ignoring the indices,
+convert one of the arguments to a NumPy array:
+
+.. code-block:: python
+
+ >>> np.add(df1, np.asarray(df2))
+ a b
+ 0 2 6
+ 1 4 8
+
+To obtain the future behaviour and silence the warning, you can align manually
+before passing the arguments to the ufunc:
+
+.. code-block:: python
+
+ >>> df1, df2 = df1.align(df2)
+ >>> np.add(df1, df2)
+ a b
+ 0 NaN NaN
+ 1 3.0 7.0
+ 2 NaN NaN
+
+.. ---------------------------------------------------------------------------
+
.. _whatsnew_121.bug_fixes:
Bug fixes
diff --git a/pandas/core/arraylike.py b/pandas/core/arraylike.py
--- a/pandas/core/arraylike.py
+++ b/pandas/core/arraylike.py
@@ -149,6 +149,85 @@ def __rpow__(self, other):
return self._arith_method(other, roperator.rpow)
+# -----------------------------------------------------------------------------
+# Helpers to implement __array_ufunc__
+
+
+def _is_aligned(frame, other):
+ """
+ Helper to check if a DataFrame is aligned with another DataFrame or Series.
+ """
+ from pandas import DataFrame
+
+ if isinstance(other, DataFrame):
+ return frame._indexed_same(other)
+ else:
+ # Series -> match index
+ return frame.columns.equals(other.index)
+
+
+def _maybe_fallback(ufunc: Callable, method: str, *inputs: Any, **kwargs: Any):
+ """
+ In the future DataFrame, inputs to ufuncs will be aligned before applying
+ the ufunc, but for now we ignore the index but raise a warning if behaviour
+ would change in the future.
+ This helper detects the case where a warning is needed and then fallbacks
+ to applying the ufunc on arrays to avoid alignment.
+
+ See https://github.com/pandas-dev/pandas/pull/39239
+ """
+ from pandas import DataFrame
+ from pandas.core.generic import NDFrame
+
+ n_alignable = sum(isinstance(x, NDFrame) for x in inputs)
+ n_frames = sum(isinstance(x, DataFrame) for x in inputs)
+
+ if n_alignable >= 2 and n_frames >= 1:
+ # if there are 2 alignable inputs (Series or DataFrame), of which at least 1
+ # is a DataFrame -> we would have had no alignment before -> warn that this
+ # will align in the future
+
+ # the first frame is what determines the output index/columns in pandas < 1.2
+ first_frame = next(x for x in inputs if isinstance(x, DataFrame))
+
+ # check if the objects are aligned or not
+ non_aligned = sum(
+ not _is_aligned(first_frame, x) for x in inputs if isinstance(x, NDFrame)
+ )
+
+ # if at least one is not aligned -> warn and fallback to array behaviour
+ if non_aligned:
+ warnings.warn(
+ "Calling a ufunc on non-aligned DataFrames (or DataFrame/Series "
+ "combination). Currently, the indices are ignored and the result "
+ "takes the index/columns of the first DataFrame. In the future , "
+ "the DataFrames/Series will be aligned before applying the ufunc.\n"
+ "Convert one of the arguments to a NumPy array "
+ "(eg 'ufunc(df1, np.asarray(df2)') to keep the current behaviour, "
+ "or align manually (eg 'df1, df2 = df1.align(df2)') before passing to "
+ "the ufunc to obtain the future behaviour and silence this warning.",
+ FutureWarning,
+ stacklevel=4,
+ )
+
+ # keep the first dataframe of the inputs, other DataFrame/Series is
+ # converted to array for fallback behaviour
+ new_inputs = []
+ for x in inputs:
+ if x is first_frame:
+ new_inputs.append(x)
+ elif isinstance(x, NDFrame):
+ new_inputs.append(np.asarray(x))
+ else:
+ new_inputs.append(x)
+
+ # call the ufunc on those transformed inputs
+ return getattr(ufunc, method)(*new_inputs, **kwargs)
+
+ # signal that we didn't fallback / execute the ufunc yet
+ return NotImplemented
+
+
def array_ufunc(self, ufunc: Callable, method: str, *inputs: Any, **kwargs: Any):
"""
Compatibility with numpy ufuncs.
@@ -162,6 +241,11 @@ def array_ufunc(self, ufunc: Callable, method: str, *inputs: Any, **kwargs: Any)
cls = type(self)
+ # for backwards compatibility check and potentially fallback for non-aligned frames
+ result = _maybe_fallback(ufunc, method, *inputs, **kwargs)
+ if result is not NotImplemented:
+ return result
+
# for binary ops, use our custom dunder methods
result = maybe_dispatch_ufunc_to_dunder_op(self, ufunc, method, *inputs, **kwargs)
if result is not NotImplemented:
</patch>
|
[]
|
[]
| |||
pypa__pip-6429
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
⬆️ Bump vendored pyparsing copy to v2.4.0
Fixes #6362
remark: a strange warning in Python-3.8.0a3 (was already there in a2)
**Environment**
* pip version: 19.0.3
* Python version: Python-3.8.0a3
* OS: win 10
odd warning when pip installing things:
````
python-3.8.0a3\lib\site-packages\pip\_vendor\pyparsing.py:3068: SyntaxWarning: invalid escape sequence \w
````
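For context, a small editorial sketch (not part of the issue): `\w` is not a recognised escape in a normal Python string literal, so newer interpreters warn when such a literal is compiled, whereas a raw string compiles silently; bumping the vendored pyparsing is expected to pick up literals rewritten that way.

```python
# Hedged reproduction sketch: compile two tiny snippets, one with an invalid
# escape ("\w") and one with a raw string (r"\w"), and record any warnings.
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(r'bad = "\w"', "<demo>", "exec")    # warns about the invalid escape sequence
    compile(r'good = r"\w"', "<demo>", "exec")  # raw string, no warning

print([str(w.message) for w in caught])
```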
</issue>
<code>
[start of README.rst]
1 pip - The Python Package Installer
2 ==================================
3
4 .. image:: https://img.shields.io/pypi/v/pip.svg
5 :target: https://pypi.org/project/pip/
6
7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest
8 :target: https://pip.pypa.io/en/latest
9
10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes.
11
12 Please take a look at our documentation for how to install and use pip:
13
14 * `Installation`_
15 * `Usage`_
16 * `Release notes`_
17
18 If you find bugs, need help, or want to talk to the developers please use our mailing lists or chat rooms:
19
20 * `Issue tracking`_
21 * `Discourse channel`_
22 * `User IRC`_
23
24 If you want to get involved head over to GitHub to get the source code and feel free to jump on the developer mailing lists and chat rooms:
25
26 * `GitHub page`_
27 * `Dev mailing list`_
28 * `Dev IRC`_
29
30 Code of Conduct
31 ---------------
32
33 Everyone interacting in the pip project's codebases, issue trackers, chat
34 rooms, and mailing lists is expected to follow the `PyPA Code of Conduct`_.
35
36 .. _package installer: https://packaging.python.org/en/latest/current/
37 .. _Python Package Index: https://pypi.org
38 .. _Installation: https://pip.pypa.io/en/stable/installing.html
39 .. _Usage: https://pip.pypa.io/en/stable/
40 .. _Release notes: https://pip.pypa.io/en/stable/news.html
41 .. _GitHub page: https://github.com/pypa/pip
42 .. _Issue tracking: https://github.com/pypa/pip/issues
43 .. _Discourse channel: https://discuss.python.org/c/packaging
44 .. _Dev mailing list: https://groups.google.com/forum/#!forum/pypa-dev
45 .. _User IRC: https://webchat.freenode.net/?channels=%23pypa
46 .. _Dev IRC: https://webchat.freenode.net/?channels=%23pypa-dev
47 .. _PyPA Code of Conduct: https://www.pypa.io/en/latest/code-of-conduct/
48
[end of README.rst]
[start of src/pip/_internal/wheel.py]
1 """
2 Support for installing and building the "wheel" binary package format.
3 """
4 from __future__ import absolute_import
5
6 import collections
7 import compileall
8 import csv
9 import hashlib
10 import logging
11 import os.path
12 import re
13 import shutil
14 import stat
15 import sys
16 import warnings
17 from base64 import urlsafe_b64encode
18 from email.parser import Parser
19
20 from pip._vendor import pkg_resources
21 from pip._vendor.distlib.scripts import ScriptMaker
22 from pip._vendor.packaging.utils import canonicalize_name
23 from pip._vendor.six import StringIO
24
25 from pip._internal import pep425tags
26 from pip._internal.download import path_to_url, unpack_url
27 from pip._internal.exceptions import (
28 InstallationError, InvalidWheelFilename, UnsupportedWheel,
29 )
30 from pip._internal.locations import (
31 PIP_DELETE_MARKER_FILENAME, distutils_scheme,
32 )
33 from pip._internal.models.link import Link
34 from pip._internal.utils.logging import indent_log
35 from pip._internal.utils.misc import (
36 LOG_DIVIDER, call_subprocess, captured_stdout, ensure_dir,
37 format_command_args, read_chunks,
38 )
39 from pip._internal.utils.setuptools_build import SETUPTOOLS_SHIM
40 from pip._internal.utils.temp_dir import TempDirectory
41 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
42 from pip._internal.utils.ui import open_spinner
43
44 if MYPY_CHECK_RUNNING:
45 from typing import (
46 Dict, List, Optional, Sequence, Mapping, Tuple, IO, Text, Any, Iterable
47 )
48 from pip._vendor.packaging.requirements import Requirement
49 from pip._internal.req.req_install import InstallRequirement
50 from pip._internal.download import PipSession
51 from pip._internal.index import FormatControl, PackageFinder
52 from pip._internal.operations.prepare import (
53 RequirementPreparer
54 )
55 from pip._internal.cache import WheelCache
56 from pip._internal.pep425tags import Pep425Tag
57
58 InstalledCSVRow = Tuple[str, ...]
59
60
61 VERSION_COMPATIBLE = (1, 0)
62
63
64 logger = logging.getLogger(__name__)
65
66
67 def normpath(src, p):
68 return os.path.relpath(src, p).replace(os.path.sep, '/')
69
70
71 def rehash(path, blocksize=1 << 20):
72 # type: (str, int) -> Tuple[str, str]
73 """Return (hash, length) for path using hashlib.sha256()"""
74 h = hashlib.sha256()
75 length = 0
76 with open(path, 'rb') as f:
77 for block in read_chunks(f, size=blocksize):
78 length += len(block)
79 h.update(block)
80 digest = 'sha256=' + urlsafe_b64encode(
81 h.digest()
82 ).decode('latin1').rstrip('=')
83 # unicode/str python2 issues
84 return (digest, str(length)) # type: ignore
85
86
87 def open_for_csv(name, mode):
88 # type: (str, Text) -> IO
89 if sys.version_info[0] < 3:
90 nl = {} # type: Dict[str, Any]
91 bin = 'b'
92 else:
93 nl = {'newline': ''} # type: Dict[str, Any]
94 bin = ''
95 return open(name, mode + bin, **nl)
96
97
98 def replace_python_tag(wheelname, new_tag):
99 # type: (str, str) -> str
100 """Replace the Python tag in a wheel file name with a new value.
101 """
102 parts = wheelname.split('-')
103 parts[-3] = new_tag
104 return '-'.join(parts)
105
106
107 def fix_script(path):
108 # type: (str) -> Optional[bool]
109 """Replace #!python with #!/path/to/python
110 Return True if file was changed."""
111 # XXX RECORD hashes will need to be updated
112 if os.path.isfile(path):
113 with open(path, 'rb') as script:
114 firstline = script.readline()
115 if not firstline.startswith(b'#!python'):
116 return False
117 exename = sys.executable.encode(sys.getfilesystemencoding())
118 firstline = b'#!' + exename + os.linesep.encode("ascii")
119 rest = script.read()
120 with open(path, 'wb') as script:
121 script.write(firstline)
122 script.write(rest)
123 return True
124 return None
125
126
127 dist_info_re = re.compile(r"""^(?P<namever>(?P<name>.+?)(-(?P<ver>.+?))?)
128 \.dist-info$""", re.VERBOSE)
129
130
131 def root_is_purelib(name, wheeldir):
132 # type: (str, str) -> bool
133 """
134 Return True if the extracted wheel in wheeldir should go into purelib.
135 """
136 name_folded = name.replace("-", "_")
137 for item in os.listdir(wheeldir):
138 match = dist_info_re.match(item)
139 if match and match.group('name') == name_folded:
140 with open(os.path.join(wheeldir, item, 'WHEEL')) as wheel:
141 for line in wheel:
142 line = line.lower().rstrip()
143 if line == "root-is-purelib: true":
144 return True
145 return False
146
147
148 def get_entrypoints(filename):
149 # type: (str) -> Tuple[Dict[str, str], Dict[str, str]]
150 if not os.path.exists(filename):
151 return {}, {}
152
153 # This is done because you can pass a string to entry_points wrappers which
154 # means that they may or may not be valid INI files. The attempt here is to
155 # strip leading and trailing whitespace in order to make them valid INI
156 # files.
157 with open(filename) as fp:
158 data = StringIO()
159 for line in fp:
160 data.write(line.strip())
161 data.write("\n")
162 data.seek(0)
163
164 # get the entry points and then the script names
165 entry_points = pkg_resources.EntryPoint.parse_map(data)
166 console = entry_points.get('console_scripts', {})
167 gui = entry_points.get('gui_scripts', {})
168
169 def _split_ep(s):
170 """get the string representation of EntryPoint, remove space and split
171 on '='"""
172 return str(s).replace(" ", "").split("=")
173
174 # convert the EntryPoint objects into strings with module:function
175 console = dict(_split_ep(v) for v in console.values())
176 gui = dict(_split_ep(v) for v in gui.values())
177 return console, gui
178
179
180 def message_about_scripts_not_on_PATH(scripts):
181 # type: (Sequence[str]) -> Optional[str]
182 """Determine if any scripts are not on PATH and format a warning.
183
184 Returns a warning message if one or more scripts are not on PATH,
185 otherwise None.
186 """
187 if not scripts:
188 return None
189
190 # Group scripts by the path they were installed in
191 grouped_by_dir = collections.defaultdict(set) # type: Dict[str, set]
192 for destfile in scripts:
193 parent_dir = os.path.dirname(destfile)
194 script_name = os.path.basename(destfile)
195 grouped_by_dir[parent_dir].add(script_name)
196
197 # We don't want to warn for directories that are on PATH.
198 not_warn_dirs = [
199 os.path.normcase(i).rstrip(os.sep) for i in
200 os.environ.get("PATH", "").split(os.pathsep)
201 ]
202 # If an executable sits with sys.executable, we don't warn for it.
203 # This covers the case of venv invocations without activating the venv.
204 not_warn_dirs.append(os.path.normcase(os.path.dirname(sys.executable)))
205 warn_for = {
206 parent_dir: scripts for parent_dir, scripts in grouped_by_dir.items()
207 if os.path.normcase(parent_dir) not in not_warn_dirs
208 }
209 if not warn_for:
210 return None
211
212 # Format a message
213 msg_lines = []
214 for parent_dir, scripts in warn_for.items():
215 sorted_scripts = sorted(scripts) # type: List[str]
216 if len(sorted_scripts) == 1:
217 start_text = "script {} is".format(sorted_scripts[0])
218 else:
219 start_text = "scripts {} are".format(
220 ", ".join(sorted_scripts[:-1]) + " and " + sorted_scripts[-1]
221 )
222
223 msg_lines.append(
224 "The {} installed in '{}' which is not on PATH."
225 .format(start_text, parent_dir)
226 )
227
228 last_line_fmt = (
229 "Consider adding {} to PATH or, if you prefer "
230 "to suppress this warning, use --no-warn-script-location."
231 )
232 if len(msg_lines) == 1:
233 msg_lines.append(last_line_fmt.format("this directory"))
234 else:
235 msg_lines.append(last_line_fmt.format("these directories"))
236
237 # Returns the formatted multiline message
238 return "\n".join(msg_lines)
239
240
241 def sorted_outrows(outrows):
242 # type: (Iterable[InstalledCSVRow]) -> List[InstalledCSVRow]
243 """
244 Return the given rows of a RECORD file in sorted order.
245
246 Each row is a 3-tuple (path, hash, size) and corresponds to a record of
247 a RECORD file (see PEP 376 and PEP 427 for details). For the rows
248 passed to this function, the size can be an integer as an int or string,
249 or the empty string.
250 """
251 # Normally, there should only be one row per path, in which case the
252 # second and third elements don't come into play when sorting.
253 # However, in cases in the wild where a path might happen to occur twice,
254 # we don't want the sort operation to trigger an error (but still want
255 # determinism). Since the third element can be an int or string, we
256 # coerce each element to a string to avoid a TypeError in this case.
257 # For additional background, see--
258 # https://github.com/pypa/pip/issues/5868
259 return sorted(outrows, key=lambda row: tuple(str(x) for x in row))
260
261
262 def get_csv_rows_for_installed(
263 old_csv_rows, # type: Iterable[List[str]]
264 installed, # type: Dict[str, str]
265 changed, # type: set
266 generated, # type: List[str]
267 lib_dir, # type: str
268 ):
269 # type: (...) -> List[InstalledCSVRow]
270 """
271 :param installed: A map from archive RECORD path to installation RECORD
272 path.
273 """
274 installed_rows = [] # type: List[InstalledCSVRow]
275 for row in old_csv_rows:
276 if len(row) > 3:
277 logger.warning(
278 'RECORD line has more than three elements: {}'.format(row)
279 )
280 # Make a copy because we are mutating the row.
281 row = list(row)
282 old_path = row[0]
283 new_path = installed.pop(old_path, old_path)
284 row[0] = new_path
285 if new_path in changed:
286 digest, length = rehash(new_path)
287 row[1] = digest
288 row[2] = length
289 installed_rows.append(tuple(row))
290 for f in generated:
291 digest, length = rehash(f)
292 installed_rows.append((normpath(f, lib_dir), digest, str(length)))
293 for f in installed:
294 installed_rows.append((installed[f], '', ''))
295 return installed_rows
296
297
298 def move_wheel_files(
299 name, # type: str
300 req, # type: Requirement
301 wheeldir, # type: str
302 user=False, # type: bool
303 home=None, # type: Optional[str]
304 root=None, # type: Optional[str]
305 pycompile=True, # type: bool
306 scheme=None, # type: Optional[Mapping[str, str]]
307 isolated=False, # type: bool
308 prefix=None, # type: Optional[str]
309 warn_script_location=True # type: bool
310 ):
311 # type: (...) -> None
312 """Install a wheel"""
313 # TODO: Investigate and break this up.
314 # TODO: Look into moving this into a dedicated class for representing an
315 # installation.
316
317 if not scheme:
318 scheme = distutils_scheme(
319 name, user=user, home=home, root=root, isolated=isolated,
320 prefix=prefix,
321 )
322
323 if root_is_purelib(name, wheeldir):
324 lib_dir = scheme['purelib']
325 else:
326 lib_dir = scheme['platlib']
327
328 info_dir = [] # type: List[str]
329 data_dirs = []
330 source = wheeldir.rstrip(os.path.sep) + os.path.sep
331
332 # Record details of the files moved
333 # installed = files copied from the wheel to the destination
334 # changed = files changed while installing (scripts #! line typically)
335 # generated = files newly generated during the install (script wrappers)
336 installed = {} # type: Dict[str, str]
337 changed = set()
338 generated = [] # type: List[str]
339
340 # Compile all of the pyc files that we're going to be installing
341 if pycompile:
342 with captured_stdout() as stdout:
343 with warnings.catch_warnings():
344 warnings.filterwarnings('ignore')
345 compileall.compile_dir(source, force=True, quiet=True)
346 logger.debug(stdout.getvalue())
347
348 def record_installed(srcfile, destfile, modified=False):
349 """Map archive RECORD paths to installation RECORD paths."""
350 oldpath = normpath(srcfile, wheeldir)
351 newpath = normpath(destfile, lib_dir)
352 installed[oldpath] = newpath
353 if modified:
354 changed.add(destfile)
355
356 def clobber(source, dest, is_base, fixer=None, filter=None):
357 ensure_dir(dest) # common for the 'include' path
358
359 for dir, subdirs, files in os.walk(source):
360 basedir = dir[len(source):].lstrip(os.path.sep)
361 destdir = os.path.join(dest, basedir)
362 if is_base and basedir.split(os.path.sep, 1)[0].endswith('.data'):
363 continue
364 for s in subdirs:
365 destsubdir = os.path.join(dest, basedir, s)
366 if is_base and basedir == '' and destsubdir.endswith('.data'):
367 data_dirs.append(s)
368 continue
369 elif (is_base and
370 s.endswith('.dist-info') and
371 canonicalize_name(s).startswith(
372 canonicalize_name(req.name))):
373 assert not info_dir, ('Multiple .dist-info directories: ' +
374 destsubdir + ', ' +
375 ', '.join(info_dir))
376 info_dir.append(destsubdir)
377 for f in files:
378 # Skip unwanted files
379 if filter and filter(f):
380 continue
381 srcfile = os.path.join(dir, f)
382 destfile = os.path.join(dest, basedir, f)
383 # directory creation is lazy and after the file filtering above
384 # to ensure we don't install empty dirs; empty dirs can't be
385 # uninstalled.
386 ensure_dir(destdir)
387
388 # copyfile (called below) truncates the destination if it
389 # exists and then writes the new contents. This is fine in most
390 # cases, but can cause a segfault if pip has loaded a shared
391 # object (e.g. from pyopenssl through its vendored urllib3)
392 # Since the shared object is mmap'd an attempt to call a
393 # symbol in it will then cause a segfault. Unlinking the file
394 # allows writing of new contents while allowing the process to
395 # continue to use the old copy.
396 if os.path.exists(destfile):
397 os.unlink(destfile)
398
399 # We use copyfile (not move, copy, or copy2) to be extra sure
400 # that we are not moving directories over (copyfile fails for
401 # directories) as well as to ensure that we are not copying
402 # over any metadata because we want more control over what
403 # metadata we actually copy over.
404 shutil.copyfile(srcfile, destfile)
405
406 # Copy over the metadata for the file, currently this only
407 # includes the atime and mtime.
408 st = os.stat(srcfile)
409 if hasattr(os, "utime"):
410 os.utime(destfile, (st.st_atime, st.st_mtime))
411
412 # If our file is executable, then make our destination file
413 # executable.
414 if os.access(srcfile, os.X_OK):
415 st = os.stat(srcfile)
416 permissions = (
417 st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
418 )
419 os.chmod(destfile, permissions)
420
421 changed = False
422 if fixer:
423 changed = fixer(destfile)
424 record_installed(srcfile, destfile, changed)
425
426 clobber(source, lib_dir, True)
427
428 assert info_dir, "%s .dist-info directory not found" % req
429
430 # Get the defined entry points
431 ep_file = os.path.join(info_dir[0], 'entry_points.txt')
432 console, gui = get_entrypoints(ep_file)
433
434 def is_entrypoint_wrapper(name):
435 # EP, EP.exe and EP-script.py are scripts generated for
436 # entry point EP by setuptools
437 if name.lower().endswith('.exe'):
438 matchname = name[:-4]
439 elif name.lower().endswith('-script.py'):
440 matchname = name[:-10]
441 elif name.lower().endswith(".pya"):
442 matchname = name[:-4]
443 else:
444 matchname = name
445 # Ignore setuptools-generated scripts
446 return (matchname in console or matchname in gui)
447
448 for datadir in data_dirs:
449 fixer = None
450 filter = None
451 for subdir in os.listdir(os.path.join(wheeldir, datadir)):
452 fixer = None
453 if subdir == 'scripts':
454 fixer = fix_script
455 filter = is_entrypoint_wrapper
456 source = os.path.join(wheeldir, datadir, subdir)
457 dest = scheme[subdir]
458 clobber(source, dest, False, fixer=fixer, filter=filter)
459
460 maker = ScriptMaker(None, scheme['scripts'])
461
462 # Ensure old scripts are overwritten.
463 # See https://github.com/pypa/pip/issues/1800
464 maker.clobber = True
465
466 # Ensure we don't generate any variants for scripts because this is almost
467 # never what somebody wants.
468 # See https://bitbucket.org/pypa/distlib/issue/35/
469 maker.variants = {''}
470
471 # This is required because otherwise distlib creates scripts that are not
472 # executable.
473 # See https://bitbucket.org/pypa/distlib/issue/32/
474 maker.set_mode = True
475
476 # Simplify the script and fix the fact that the default script swallows
477 # every single stack trace.
478 # See https://bitbucket.org/pypa/distlib/issue/34/
479 # See https://bitbucket.org/pypa/distlib/issue/33/
480 def _get_script_text(entry):
481 if entry.suffix is None:
482 raise InstallationError(
483 "Invalid script entry point: %s for req: %s - A callable "
484 "suffix is required. Cf https://packaging.python.org/en/"
485 "latest/distributing.html#console-scripts for more "
486 "information." % (entry, req)
487 )
488 return maker.script_template % {
489 "module": entry.prefix,
490 "import_name": entry.suffix.split(".")[0],
491 "func": entry.suffix,
492 }
493 # ignore type, because mypy disallows assigning to a method,
494 # see https://github.com/python/mypy/issues/2427
495 maker._get_script_text = _get_script_text # type: ignore
496 maker.script_template = r"""# -*- coding: utf-8 -*-
497 import re
498 import sys
499
500 from %(module)s import %(import_name)s
501
502 if __name__ == '__main__':
503 sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
504 sys.exit(%(func)s())
505 """
506
507 # Special case pip and setuptools to generate versioned wrappers
508 #
509 # The issue is that some projects (specifically, pip and setuptools) use
510 # code in setup.py to create "versioned" entry points - pip2.7 on Python
511 # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into
512 # the wheel metadata at build time, and so if the wheel is installed with
513 # a *different* version of Python the entry points will be wrong. The
514 # correct fix for this is to enhance the metadata to be able to describe
515 # such versioned entry points, but that won't happen till Metadata 2.0 is
516 # available.
517 # In the meantime, projects using versioned entry points will either have
518 # incorrect versioned entry points, or they will not be able to distribute
519 # "universal" wheels (i.e., they will need a wheel per Python version).
520 #
521 # Because setuptools and pip are bundled with _ensurepip and virtualenv,
522 # we need to use universal wheels. So, as a stopgap until Metadata 2.0, we
523 # override the versioned entry points in the wheel and generate the
524 # correct ones. This code is purely a short-term measure until Metadata 2.0
525 # is available.
526 #
527 # To add the level of hack in this section of code, in order to support
528 # ensurepip this code will look for an ``ENSUREPIP_OPTIONS`` environment
529 # variable which will control which version scripts get installed.
530 #
531 # ENSUREPIP_OPTIONS=altinstall
532 # - Only pipX.Y and easy_install-X.Y will be generated and installed
533 # ENSUREPIP_OPTIONS=install
534 # - pipX.Y, pipX, easy_install-X.Y will be generated and installed. Note
535 # that this option is technically if ENSUREPIP_OPTIONS is set and is
536 # not altinstall
537 # DEFAULT
538 # - The default behavior is to install pip, pipX, pipX.Y, easy_install
539 # and easy_install-X.Y.
540 pip_script = console.pop('pip', None)
541 if pip_script:
542 if "ENSUREPIP_OPTIONS" not in os.environ:
543 spec = 'pip = ' + pip_script
544 generated.extend(maker.make(spec))
545
546 if os.environ.get("ENSUREPIP_OPTIONS", "") != "altinstall":
547 spec = 'pip%s = %s' % (sys.version[:1], pip_script)
548 generated.extend(maker.make(spec))
549
550 spec = 'pip%s = %s' % (sys.version[:3], pip_script)
551 generated.extend(maker.make(spec))
552 # Delete any other versioned pip entry points
553 pip_ep = [k for k in console if re.match(r'pip(\d(\.\d)?)?$', k)]
554 for k in pip_ep:
555 del console[k]
556 easy_install_script = console.pop('easy_install', None)
557 if easy_install_script:
558 if "ENSUREPIP_OPTIONS" not in os.environ:
559 spec = 'easy_install = ' + easy_install_script
560 generated.extend(maker.make(spec))
561
562 spec = 'easy_install-%s = %s' % (sys.version[:3], easy_install_script)
563 generated.extend(maker.make(spec))
564 # Delete any other versioned easy_install entry points
565 easy_install_ep = [
566 k for k in console if re.match(r'easy_install(-\d\.\d)?$', k)
567 ]
568 for k in easy_install_ep:
569 del console[k]
570
571 # Generate the console and GUI entry points specified in the wheel
572 if len(console) > 0:
573 generated_console_scripts = maker.make_multiple(
574 ['%s = %s' % kv for kv in console.items()]
575 )
576 generated.extend(generated_console_scripts)
577
578 if warn_script_location:
579 msg = message_about_scripts_not_on_PATH(generated_console_scripts)
580 if msg is not None:
581 logger.warning(msg)
582
583 if len(gui) > 0:
584 generated.extend(
585 maker.make_multiple(
586 ['%s = %s' % kv for kv in gui.items()],
587 {'gui': True}
588 )
589 )
590
591 # Record pip as the installer
592 installer = os.path.join(info_dir[0], 'INSTALLER')
593 temp_installer = os.path.join(info_dir[0], 'INSTALLER.pip')
594 with open(temp_installer, 'wb') as installer_file:
595 installer_file.write(b'pip\n')
596 shutil.move(temp_installer, installer)
597 generated.append(installer)
598
599 # Record details of all files installed
600 record = os.path.join(info_dir[0], 'RECORD')
601 temp_record = os.path.join(info_dir[0], 'RECORD.pip')
602 with open_for_csv(record, 'r') as record_in:
603 with open_for_csv(temp_record, 'w+') as record_out:
604 reader = csv.reader(record_in)
605 outrows = get_csv_rows_for_installed(
606 reader, installed=installed, changed=changed,
607 generated=generated, lib_dir=lib_dir,
608 )
609 writer = csv.writer(record_out)
610 # Sort to simplify testing.
611 for row in sorted_outrows(outrows):
612 writer.writerow(row)
613 shutil.move(temp_record, record)
614
615
616 def wheel_version(source_dir):
617 # type: (Optional[str]) -> Optional[Tuple[int, ...]]
618 """
619 Return the Wheel-Version of an extracted wheel, if possible.
620
621 Otherwise, return None if we couldn't parse / extract it.
622 """
623 try:
624 dist = [d for d in pkg_resources.find_on_path(None, source_dir)][0]
625
626 wheel_data = dist.get_metadata('WHEEL')
627 wheel_data = Parser().parsestr(wheel_data)
628
629 version = wheel_data['Wheel-Version'].strip()
630 version = tuple(map(int, version.split('.')))
631 return version
632 except Exception:
633 return None
634
635
636 def check_compatibility(version, name):
637 # type: (Optional[Tuple[int, ...]], str) -> None
638 """
639 Raises errors or warns if called with an incompatible Wheel-Version.
640
641 Pip should refuse to install a Wheel-Version that's a major series
642 ahead of what it's compatible with (e.g. 2.0 > 1.1); and warn when
643 installing a version that is only a minor version ahead (e.g. 1.2 > 1.1).
644
645 version: a 2-tuple representing a Wheel-Version (Major, Minor)
646 name: name of wheel or package to raise exception about
647
648 :raises UnsupportedWheel: when an incompatible Wheel-Version is given
649 """
650 if not version:
651 raise UnsupportedWheel(
652 "%s is in an unsupported or invalid wheel" % name
653 )
654 if version[0] > VERSION_COMPATIBLE[0]:
655 raise UnsupportedWheel(
656 "%s's Wheel-Version (%s) is not compatible with this version "
657 "of pip" % (name, '.'.join(map(str, version)))
658 )
659 elif version > VERSION_COMPATIBLE:
660 logger.warning(
661 'Installing from a newer Wheel-Version (%s)',
662 '.'.join(map(str, version)),
663 )
664
665
666 class Wheel(object):
667 """A wheel file"""
668
669 # TODO: Maybe move the class into the models sub-package
670 # TODO: Maybe move the install code into this class
671
672 wheel_file_re = re.compile(
673 r"""^(?P<namever>(?P<name>.+?)-(?P<ver>.*?))
674 ((-(?P<build>\d[^-]*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)
675 \.whl|\.dist-info)$""",
676 re.VERBOSE
677 )
678
679 def __init__(self, filename):
680 # type: (str) -> None
681 """
682 :raises InvalidWheelFilename: when the filename is invalid for a wheel
683 """
684 wheel_info = self.wheel_file_re.match(filename)
685 if not wheel_info:
686 raise InvalidWheelFilename(
687 "%s is not a valid wheel filename." % filename
688 )
689 self.filename = filename
690 self.name = wheel_info.group('name').replace('_', '-')
691 # we'll assume "_" means "-" due to wheel naming scheme
692 # (https://github.com/pypa/pip/issues/1150)
693 self.version = wheel_info.group('ver').replace('_', '-')
694 self.build_tag = wheel_info.group('build')
695 self.pyversions = wheel_info.group('pyver').split('.')
696 self.abis = wheel_info.group('abi').split('.')
697 self.plats = wheel_info.group('plat').split('.')
698
699 # All the tag combinations from this file
700 self.file_tags = {
701 (x, y, z) for x in self.pyversions
702 for y in self.abis for z in self.plats
703 }
704
705 def support_index_min(self, tags=None):
706 # type: (Optional[List[Pep425Tag]]) -> Optional[int]
707 """
708 Return the lowest index that one of the wheel's file_tag combinations
709 achieves in the supported_tags list e.g. if there are 8 supported tags,
710 and one of the file tags is first in the list, then return 0. Returns
711 None if the wheel is not supported.
712 """
713 if tags is None: # for mock
714 tags = pep425tags.get_supported()
715 indexes = [tags.index(c) for c in self.file_tags if c in tags]
716 return min(indexes) if indexes else None
717
718 def supported(self, tags=None):
719 # type: (Optional[List[Pep425Tag]]) -> bool
720 """Is this wheel supported on this system?"""
721 if tags is None: # for mock
722 tags = pep425tags.get_supported()
723 return bool(set(tags).intersection(self.file_tags))
724
725
726 def _contains_egg_info(
727 s, _egg_info_re=re.compile(r'([a-z0-9_.]+)-([a-z0-9_.!+-]+)', re.I)):
728 """Determine whether the string looks like an egg_info.
729
730 :param s: The string to parse. E.g. foo-2.1
731 """
732 return bool(_egg_info_re.search(s))
733
734
735 def should_use_ephemeral_cache(
736 req, # type: InstallRequirement
737 format_control, # type: FormatControl
738 autobuilding, # type: bool
739 cache_available # type: bool
740 ):
741 # type: (...) -> Optional[bool]
742 """
743 Return whether to build an InstallRequirement object using the
744 ephemeral cache.
745
746 :param cache_available: whether a cache directory is available for the
747 autobuilding=True case.
748
749 :return: True or False to build the requirement with ephem_cache=True
750 or False, respectively; or None not to build the requirement.
751 """
752 if req.constraint:
753 return None
754 if req.is_wheel:
755 if not autobuilding:
756 logger.info(
757 'Skipping %s, due to already being wheel.', req.name,
758 )
759 return None
760 if not autobuilding:
761 return False
762
763 if req.editable or not req.source_dir:
764 return None
765
766 if req.link and not req.link.is_artifact:
767 # VCS checkout. Build wheel just for this run.
768 return True
769
770 if "binary" not in format_control.get_allowed_formats(
771 canonicalize_name(req.name)):
772 logger.info(
773 "Skipping bdist_wheel for %s, due to binaries "
774 "being disabled for it.", req.name,
775 )
776 return None
777
778 link = req.link
779 base, ext = link.splitext()
780 if cache_available and _contains_egg_info(base):
781 return False
782
783 # Otherwise, build the wheel just for this run using the ephemeral
784 # cache since we are either in the case of e.g. a local directory, or
785 # no cache directory is available to use.
786 return True
787
788
789 def format_command_result(
790 command_args, # type: List[str]
791 command_output, # type: str
792 ):
793 # type: (...) -> str
794 """
795 Format command information for logging.
796 """
797 command_desc = format_command_args(command_args)
798 text = 'Command arguments: {}\n'.format(command_desc)
799
800 if not command_output:
801 text += 'Command output: None'
802 elif logger.getEffectiveLevel() > logging.DEBUG:
803 text += 'Command output: [use --verbose to show]'
804 else:
805 if not command_output.endswith('\n'):
806 command_output += '\n'
807 text += 'Command output:\n{}{}'.format(command_output, LOG_DIVIDER)
808
809 return text
810
811
812 def get_legacy_build_wheel_path(
813 names, # type: List[str]
814 temp_dir, # type: str
815 req, # type: InstallRequirement
816 command_args, # type: List[str]
817 command_output, # type: str
818 ):
819 # type: (...) -> Optional[str]
820 """
821 Return the path to the wheel in the temporary build directory.
822 """
823 # Sort for determinism.
824 names = sorted(names)
825 if not names:
826 msg = (
827 'Legacy build of wheel for {!r} created no files.\n'
828 ).format(req.name)
829 msg += format_command_result(command_args, command_output)
830 logger.warning(msg)
831 return None
832
833 if len(names) > 1:
834 msg = (
835 'Legacy build of wheel for {!r} created more than one file.\n'
836 'Filenames (choosing first): {}\n'
837 ).format(req.name, names)
838 msg += format_command_result(command_args, command_output)
839 logger.warning(msg)
840
841 return os.path.join(temp_dir, names[0])
842
843
844 class WheelBuilder(object):
845 """Build wheels from a RequirementSet."""
846
847 def __init__(
848 self,
849 finder, # type: PackageFinder
850 preparer, # type: RequirementPreparer
851 wheel_cache, # type: WheelCache
852 build_options=None, # type: Optional[List[str]]
853 global_options=None, # type: Optional[List[str]]
854 no_clean=False # type: bool
855 ):
856 # type: (...) -> None
857 self.finder = finder
858 self.preparer = preparer
859 self.wheel_cache = wheel_cache
860
861 self._wheel_dir = preparer.wheel_download_dir
862
863 self.build_options = build_options or []
864 self.global_options = global_options or []
865 self.no_clean = no_clean
866
867 def _build_one(self, req, output_dir, python_tag=None):
868 """Build one wheel.
869
870 :return: The filename of the built wheel, or None if the build failed.
871 """
872 # Install build deps into temporary directory (PEP 518)
873 with req.build_env:
874 return self._build_one_inside_env(req, output_dir,
875 python_tag=python_tag)
876
877 def _build_one_inside_env(self, req, output_dir, python_tag=None):
878 with TempDirectory(kind="wheel") as temp_dir:
879 if req.use_pep517:
880 builder = self._build_one_pep517
881 else:
882 builder = self._build_one_legacy
883 wheel_path = builder(req, temp_dir.path, python_tag=python_tag)
884 if wheel_path is not None:
885 wheel_name = os.path.basename(wheel_path)
886 dest_path = os.path.join(output_dir, wheel_name)
887 try:
888 shutil.move(wheel_path, dest_path)
889 logger.info('Stored in directory: %s', output_dir)
890 return dest_path
891 except Exception:
892 pass
893 # Ignore return, we can't do anything else useful.
894 self._clean_one(req)
895 return None
896
897 def _base_setup_args(self, req):
898 # NOTE: Eventually, we'd want to also -S to the flags here, when we're
899 # isolating. Currently, it breaks Python in virtualenvs, because it
900 # relies on site.py to find parts of the standard library outside the
901 # virtualenv.
902 return [
903 sys.executable, '-u', '-c',
904 SETUPTOOLS_SHIM % req.setup_py
905 ] + list(self.global_options)
906
907 def _build_one_pep517(self, req, tempd, python_tag=None):
908 """Build one InstallRequirement using the PEP 517 build process.
909
910 Returns path to wheel if successfully built. Otherwise, returns None.
911 """
912 assert req.metadata_directory is not None
913 if self.build_options:
914 # PEP 517 does not support --build-options
915 logger.error('Cannot build wheel for %s using PEP 517 when '
916 '--build-options is present' % (req.name,))
917 return None
918 try:
919 req.spin_message = 'Building wheel for %s (PEP 517)' % (req.name,)
920 logger.debug('Destination directory: %s', tempd)
921 wheel_name = req.pep517_backend.build_wheel(
922 tempd,
923 metadata_directory=req.metadata_directory
924 )
925 if python_tag:
926 # General PEP 517 backends don't necessarily support
927 # a "--python-tag" option, so we rename the wheel
928 # file directly.
929 new_name = replace_python_tag(wheel_name, python_tag)
930 os.rename(
931 os.path.join(tempd, wheel_name),
932 os.path.join(tempd, new_name)
933 )
934 # Reassign to simplify the return at the end of function
935 wheel_name = new_name
936 except Exception:
937 logger.error('Failed building wheel for %s', req.name)
938 return None
939 return os.path.join(tempd, wheel_name)
940
941 def _build_one_legacy(self, req, tempd, python_tag=None):
942 """Build one InstallRequirement using the "legacy" build process.
943
944 Returns path to wheel if successfully built. Otherwise, returns None.
945 """
946 base_args = self._base_setup_args(req)
947
948 spin_message = 'Building wheel for %s (setup.py)' % (req.name,)
949 with open_spinner(spin_message) as spinner:
950 logger.debug('Destination directory: %s', tempd)
951 wheel_args = base_args + ['bdist_wheel', '-d', tempd] \
952 + self.build_options
953
954 if python_tag is not None:
955 wheel_args += ["--python-tag", python_tag]
956
957 try:
958 output = call_subprocess(wheel_args, cwd=req.setup_py_dir,
959 spinner=spinner)
960 except Exception:
961 spinner.finish("error")
962 logger.error('Failed building wheel for %s', req.name)
963 return None
964 names = os.listdir(tempd)
965 wheel_path = get_legacy_build_wheel_path(
966 names=names,
967 temp_dir=tempd,
968 req=req,
969 command_args=wheel_args,
970 command_output=output,
971 )
972 return wheel_path
973
974 def _clean_one(self, req):
975 base_args = self._base_setup_args(req)
976
977 logger.info('Running setup.py clean for %s', req.name)
978 clean_args = base_args + ['clean', '--all']
979 try:
980 call_subprocess(clean_args, cwd=req.source_dir)
981 return True
982 except Exception:
983 logger.error('Failed cleaning build dir for %s', req.name)
984 return False
985
986 def build(
987 self,
988 requirements, # type: Iterable[InstallRequirement]
989 session, # type: PipSession
990 autobuilding=False # type: bool
991 ):
992 # type: (...) -> List[InstallRequirement]
993 """Build wheels.
994
995 :param unpack: If True, replace the sdist we built from with the
996 newly built wheel, in preparation for installation.
997 :return: True if all the wheels built correctly.
998 """
999 buildset = []
1000 format_control = self.finder.format_control
1001 # Whether a cache directory is available for autobuilding=True.
1002 cache_available = bool(self._wheel_dir or self.wheel_cache.cache_dir)
1003
1004 for req in requirements:
1005 ephem_cache = should_use_ephemeral_cache(
1006 req, format_control=format_control, autobuilding=autobuilding,
1007 cache_available=cache_available,
1008 )
1009 if ephem_cache is None:
1010 continue
1011
1012 buildset.append((req, ephem_cache))
1013
1014 if not buildset:
1015 return []
1016
1017 # Is any wheel build not using the ephemeral cache?
1018 if any(not ephem_cache for _, ephem_cache in buildset):
1019 have_directory_for_build = self._wheel_dir or (
1020 autobuilding and self.wheel_cache.cache_dir
1021 )
1022 assert have_directory_for_build
1023
1024 # TODO by @pradyunsg
1025 # Should break up this method into 2 separate methods.
1026
1027 # Build the wheels.
1028 logger.info(
1029 'Building wheels for collected packages: %s',
1030 ', '.join([req.name for (req, _) in buildset]),
1031 )
1032 _cache = self.wheel_cache # shorter name
1033 with indent_log():
1034 build_success, build_failure = [], []
1035 for req, ephem in buildset:
1036 python_tag = None
1037 if autobuilding:
1038 python_tag = pep425tags.implementation_tag
1039 if ephem:
1040 output_dir = _cache.get_ephem_path_for_link(req.link)
1041 else:
1042 output_dir = _cache.get_path_for_link(req.link)
1043 try:
1044 ensure_dir(output_dir)
1045 except OSError as e:
1046 logger.warning("Building wheel for %s failed: %s",
1047 req.name, e)
1048 build_failure.append(req)
1049 continue
1050 else:
1051 output_dir = self._wheel_dir
1052 wheel_file = self._build_one(
1053 req, output_dir,
1054 python_tag=python_tag,
1055 )
1056 if wheel_file:
1057 build_success.append(req)
1058 if autobuilding:
1059 # XXX: This is mildly duplicative with prepare_files,
1060 # but not close enough to pull out to a single common
1061 # method.
1062 # The code below assumes temporary source dirs -
1063 # prevent it doing bad things.
1064 if req.source_dir and not os.path.exists(os.path.join(
1065 req.source_dir, PIP_DELETE_MARKER_FILENAME)):
1066 raise AssertionError(
1067 "bad source dir - missing marker")
1068 # Delete the source we built the wheel from
1069 req.remove_temporary_source()
1070 # set the build directory again - name is known from
1071 # the work prepare_files did.
1072 req.source_dir = req.build_location(
1073 self.preparer.build_dir
1074 )
1075 # Update the link for this.
1076 req.link = Link(path_to_url(wheel_file))
1077 assert req.link.is_wheel
1078 # extract the wheel into the dir
1079 unpack_url(
1080 req.link, req.source_dir, None, False,
1081 session=session,
1082 )
1083 else:
1084 build_failure.append(req)
1085
1086 # notify success/failure
1087 if build_success:
1088 logger.info(
1089 'Successfully built %s',
1090 ' '.join([req.name for req in build_success]),
1091 )
1092 if build_failure:
1093 logger.info(
1094 'Failed to build %s',
1095 ' '.join([req.name for req in build_failure]),
1096 )
1097 # Return a list of requirements that failed to build
1098 return build_failure
1099
[end of src/pip/_internal/wheel.py]
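As a quick orientation to the `Wheel` filename parsing above, here is a small sketch of what `wheel_file_re` extracts from a typical filename; the values shown are inferred from the regex and the constructor, and the import path simply mirrors this source tree:

```python
# Sketch only: exercises the Wheel class defined in the file above.
from pip._internal.wheel import Wheel

w = Wheel("pip-19.0.3-py2.py3-none-any.whl")
print(w.name)        # 'pip'
print(w.version)     # '19.0.3'
print(w.pyversions)  # ['py2', 'py3']
print(w.abis)        # ['none']
print(w.plats)       # ['any']
print(sorted(w.file_tags))  # [('py2', 'none', 'any'), ('py3', 'none', 'any')]
```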
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pypa/pip
|
627eeecd15b68b193fcaddc686a9c5701506a412
|
⬆️ Bump vendored pyparsing copy to v2.4.0
Fixes #6362
remark: a strange warning in Python-3.8.0a3 (was already there in a2)
**Environment**
* pip version: 19.0.3
* Python version: Python-3.8.0a3
* OS: win 10
odd warning when installing things with pip:
````
python-3.8.0a3\lib\site-packages\pip\_vendor\pyparsing.py:3068: SyntaxWarning: invalid escape sequence \w
````
|
Did you manually copy the `.py` here?
@pradyunsg yes. Should I run a script?
Yep.
`invoke vendoring.update`
It reads from `vendor.txt`, removes all files and re-vendors everything.
@pradyunsg Ah, I see now. But it looks like this one wasn't patched anyway.
It looks like this is due to this line in the vendored pyparsing library (the line starting with `make_html =`): https://github.com/pypa/pip/blob/de242d0ea9aa0e12fb218b37c1bbe2f08cfaedb1/src/pip/_vendor/pyparsing.py#L3067-L3077
I opened an issue in `pyparsing`'s tracker for this: https://github.com/pyparsing/pyparsing/issues/80
FTR that issue is fixed now in upstream.
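For anyone landing here later, a minimal sketch of the mechanism (the variable name below is made up; only the escape-sequence behaviour matters): Python 3.8 reports unrecognized escapes such as `\w` inside plain string literals as a `SyntaxWarning` at compile time, and raw strings avoid it, which is the same kind of fix applied to the vendored pyparsing code.

```python
# Hypothetical snippet, not taken from pyparsing itself.
# When this module is compiled on Python 3.8, the next line emits:
#   SyntaxWarning: invalid escape sequence \w
pattern = "\w+"    # backslash-w is not a recognised escape in a plain literal

# A raw string keeps the backslash verbatim and produces no warning.
pattern = r"\w+"
```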
|
2019-04-23T20:58:35Z
|
<patch>
diff --git a/src/pip/_vendor/certifi/__init__.py b/src/pip/_vendor/certifi/__init__.py
--- a/src/pip/_vendor/certifi/__init__.py
+++ b/src/pip/_vendor/certifi/__init__.py
@@ -1,3 +1,3 @@
from .core import where
-__version__ = "2018.11.29"
+__version__ = "2019.03.09"
diff --git a/src/pip/_vendor/certifi/core.py b/src/pip/_vendor/certifi/core.py
--- a/src/pip/_vendor/certifi/core.py
+++ b/src/pip/_vendor/certifi/core.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@@ -14,7 +13,3 @@ def where():
f = os.path.dirname(__file__)
return os.path.join(f, 'cacert.pem')
-
-
-if __name__ == '__main__':
- print(where())
diff --git a/src/pip/_vendor/distro.py b/src/pip/_vendor/distro.py
--- a/src/pip/_vendor/distro.py
+++ b/src/pip/_vendor/distro.py
@@ -17,12 +17,12 @@
information about the Linux distribution it runs on, such as a reliable
machine-readable distro ID, or version information.
-It is a renewed alternative implementation for Python's original
+It is the recommended replacement for Python's original
:py:func:`platform.linux_distribution` function, but it provides much more
functionality. An alternative implementation became necessary because Python
-3.5 deprecated this function, and Python 3.7 is expected to remove it
-altogether. Its predecessor function :py:func:`platform.dist` was already
-deprecated since Python 2.6 and is also expected to be removed in Python 3.7.
+3.5 deprecated this function, and Python 3.8 will remove it altogether.
+Its predecessor function :py:func:`platform.dist` was already
+deprecated since Python 2.6 and will also be removed in Python 3.8.
Still, there are many cases in which access to OS distribution information
is needed. See `Python issue 1322 <https://bugs.python.org/issue1322>`_ for
more information.
@@ -48,7 +48,9 @@
#: with blanks translated to underscores.
#:
#: * Value: Normalized value.
-NORMALIZED_OS_ID = {}
+NORMALIZED_OS_ID = {
+ 'ol': 'oracle', # Oracle Enterprise Linux
+}
#: Translation table for normalizing the "Distributor ID" attribute returned by
#: the lsb_release command, for use by the :func:`distro.id` method.
@@ -812,10 +814,14 @@ def codename(self):
For details, see :func:`distro.codename`.
"""
- return self.os_release_attr('codename') \
- or self.lsb_release_attr('codename') \
- or self.distro_release_attr('codename') \
- or ''
+ try:
+ # Handle os_release specially since distros might purposefully set
+ # this to empty string to have no codename
+ return self._os_release_info['codename']
+ except KeyError:
+ return self.lsb_release_attr('codename') \
+ or self.distro_release_attr('codename') \
+ or ''
def info(self, pretty=False, best=False):
"""
@@ -872,6 +878,7 @@ def uname_info(self):
For details, see :func:`distro.uname_info`.
"""
+ return self._uname_info
def os_release_attr(self, attribute):
"""
@@ -963,23 +970,30 @@ def _parse_os_release_content(lines):
if isinstance(v, bytes):
v = v.decode('utf-8')
props[k.lower()] = v
- if k == 'VERSION':
- # this handles cases in which the codename is in
- # the `(CODENAME)` (rhel, centos, fedora) format
- # or in the `, CODENAME` format (Ubuntu).
- codename = re.search(r'(\(\D+\))|,(\s+)?\D+', v)
- if codename:
- codename = codename.group()
- codename = codename.strip('()')
- codename = codename.strip(',')
- codename = codename.strip()
- # codename appears within paranthese.
- props['codename'] = codename
- else:
- props['codename'] = ''
else:
# Ignore any tokens that are not variable assignments
pass
+
+ if 'version_codename' in props:
+ # os-release added a version_codename field. Use that in
+ # preference to anything else Note that some distros purposefully
+ # do not have code names. They should be setting
+ # version_codename=""
+ props['codename'] = props['version_codename']
+ elif 'ubuntu_codename' in props:
+ # Same as above but a non-standard field name used on older Ubuntus
+ props['codename'] = props['ubuntu_codename']
+ elif 'version' in props:
+ # If there is no version_codename, parse it from the version
+ codename = re.search(r'(\(\D+\))|,(\s+)?\D+', props['version'])
+ if codename:
+ codename = codename.group()
+ codename = codename.strip('()')
+ codename = codename.strip(',')
+ codename = codename.strip()
+ # codename appears within paranthese.
+ props['codename'] = codename
+
return props
@cached_property
@@ -1072,7 +1086,10 @@ def _distro_release_info(self):
# file), because we want to use what was specified as best as
# possible.
match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
- if match:
+ if 'name' in distro_info \
+ and 'cloudlinux' in distro_info['name'].lower():
+ distro_info['id'] = 'cloudlinux'
+ elif match:
distro_info['id'] = match.group(1)
return distro_info
else:
@@ -1113,6 +1130,8 @@ def _distro_release_info(self):
# The name is always present if the pattern matches
self.distro_release_file = filepath
distro_info['id'] = match.group(1)
+ if 'cloudlinux' in distro_info['name'].lower():
+ distro_info['id'] = 'cloudlinux'
return distro_info
return {}
diff --git a/src/pip/_vendor/idna/__init__.py b/src/pip/_vendor/idna/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/idna/codec.py b/src/pip/_vendor/idna/codec.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/idna/compat.py b/src/pip/_vendor/idna/compat.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/idna/core.py b/src/pip/_vendor/idna/core.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/idna/idnadata.py b/src/pip/_vendor/idna/idnadata.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/idna/intranges.py b/src/pip/_vendor/idna/intranges.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/idna/package_data.py b/src/pip/_vendor/idna/package_data.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/idna/uts46data.py b/src/pip/_vendor/idna/uts46data.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/pkg_resources/__init__.py b/src/pip/_vendor/pkg_resources/__init__.py
--- a/src/pip/_vendor/pkg_resources/__init__.py
+++ b/src/pip/_vendor/pkg_resources/__init__.py
@@ -39,6 +39,8 @@
import textwrap
import itertools
import inspect
+import ntpath
+import posixpath
from pkgutil import get_importer
try:
@@ -1401,8 +1403,15 @@ def get_resource_string(self, manager, resource_name):
def has_resource(self, resource_name):
return self._has(self._fn(self.module_path, resource_name))
+ def _get_metadata_path(self, name):
+ return self._fn(self.egg_info, name)
+
def has_metadata(self, name):
- return self.egg_info and self._has(self._fn(self.egg_info, name))
+ if not self.egg_info:
+ return self.egg_info
+
+ path = self._get_metadata_path(name)
+ return self._has(path)
def get_metadata(self, name):
if not self.egg_info:
@@ -1466,10 +1475,86 @@ def _listdir(self, path):
)
def _fn(self, base, resource_name):
+ self._validate_resource_path(resource_name)
if resource_name:
return os.path.join(base, *resource_name.split('/'))
return base
+ @staticmethod
+ def _validate_resource_path(path):
+ """
+ Validate the resource paths according to the docs.
+ https://setuptools.readthedocs.io/en/latest/pkg_resources.html#basic-resource-access
+
+ >>> warned = getfixture('recwarn')
+ >>> warnings.simplefilter('always')
+ >>> vrp = NullProvider._validate_resource_path
+ >>> vrp('foo/bar.txt')
+ >>> bool(warned)
+ False
+ >>> vrp('../foo/bar.txt')
+ >>> bool(warned)
+ True
+ >>> warned.clear()
+ >>> vrp('/foo/bar.txt')
+ >>> bool(warned)
+ True
+ >>> vrp('foo/../../bar.txt')
+ >>> bool(warned)
+ True
+ >>> warned.clear()
+ >>> vrp('foo/f../bar.txt')
+ >>> bool(warned)
+ False
+
+ Windows path separators are straight-up disallowed.
+ >>> vrp(r'\\foo/bar.txt')
+ Traceback (most recent call last):
+ ...
+ ValueError: Use of .. or absolute path in a resource path \
+is not allowed.
+
+ >>> vrp(r'C:\\foo/bar.txt')
+ Traceback (most recent call last):
+ ...
+ ValueError: Use of .. or absolute path in a resource path \
+is not allowed.
+
+ Blank values are allowed
+
+ >>> vrp('')
+ >>> bool(warned)
+ False
+
+ Non-string values are not.
+
+ >>> vrp(None)
+ Traceback (most recent call last):
+ ...
+ AttributeError: ...
+ """
+ invalid = (
+ os.path.pardir in path.split(posixpath.sep) or
+ posixpath.isabs(path) or
+ ntpath.isabs(path)
+ )
+ if not invalid:
+ return
+
+ msg = "Use of .. or absolute path in a resource path is not allowed."
+
+ # Aggressively disallow Windows absolute paths
+ if ntpath.isabs(path) and not posixpath.isabs(path):
+ raise ValueError(msg)
+
+ # for compatibility, warn; in future
+ # raise ValueError(msg)
+ warnings.warn(
+ msg[:-1] + " and will raise exceptions in a future release.",
+ DeprecationWarning,
+ stacklevel=4,
+ )
+
def _get(self, path):
if hasattr(self.loader, 'get_data'):
return self.loader.get_data(path)
@@ -1790,6 +1875,9 @@ class FileMetadata(EmptyProvider):
def __init__(self, path):
self.path = path
+ def _get_metadata_path(self, name):
+ return self.path
+
def has_metadata(self, name):
return name == 'PKG-INFO' and os.path.isfile(self.path)
@@ -1888,7 +1976,7 @@ def find_eggs_in_zip(importer, path_item, only=False):
if only:
# don't yield nested distros
return
- for subitem in metadata.resource_listdir('/'):
+ for subitem in metadata.resource_listdir(''):
if _is_egg_path(subitem):
subpath = os.path.join(path_item, subitem)
dists = find_eggs_in_zip(zipimport.zipimporter(subpath), subpath)
@@ -2583,10 +2671,14 @@ def version(self):
try:
return self._version
except AttributeError:
- version = _version_from_file(self._get_metadata(self.PKG_INFO))
+ version = self._get_version()
if version is None:
- tmpl = "Missing 'Version:' header and/or %s file"
- raise ValueError(tmpl % self.PKG_INFO, self)
+ path = self._get_metadata_path_for_display(self.PKG_INFO)
+ msg = (
+ "Missing 'Version:' header and/or {} file at path: {}"
+ ).format(self.PKG_INFO, path)
+ raise ValueError(msg, self)
+
return version
@property
@@ -2644,11 +2736,34 @@ def requires(self, extras=()):
)
return deps
+ def _get_metadata_path_for_display(self, name):
+ """
+ Return the path to the given metadata file, if available.
+ """
+ try:
+ # We need to access _get_metadata_path() on the provider object
+ # directly rather than through this class's __getattr__()
+ # since _get_metadata_path() is marked private.
+ path = self._provider._get_metadata_path(name)
+
+ # Handle exceptions e.g. in case the distribution's metadata
+ # provider doesn't support _get_metadata_path().
+ except Exception:
+ return '[could not detect]'
+
+ return path
+
def _get_metadata(self, name):
if self.has_metadata(name):
for line in self.get_metadata_lines(name):
yield line
+ def _get_version(self):
+ lines = self._get_metadata(self.PKG_INFO)
+ version = _version_from_file(lines)
+
+ return version
+
def activate(self, path=None, replace=False):
"""Ensure distribution is importable on `path` (default=sys.path)"""
if path is None:
@@ -2867,7 +2982,7 @@ def _reload_version(self):
take an extra step and try to get the version number from
the metadata file itself instead of the filename.
"""
- md_version = _version_from_file(self._get_metadata(self.PKG_INFO))
+ md_version = self._get_version()
if md_version:
self._version = md_version
return self
diff --git a/src/pip/_vendor/pyparsing.py b/src/pip/_vendor/pyparsing.py
--- a/src/pip/_vendor/pyparsing.py
+++ b/src/pip/_vendor/pyparsing.py
@@ -93,8 +93,8 @@
namespace class
"""
-__version__ = "2.3.1"
-__versionTime__ = "09 Jan 2019 23:26 UTC"
+__version__ = "2.4.0"
+__versionTime__ = "07 Apr 2019 18:28 UTC"
__author__ = "Paul McGuire <[email protected]>"
import string
@@ -143,10 +143,24 @@
except ImportError:
class SimpleNamespace: pass
+# version compatibility configuration
+__compat__ = SimpleNamespace()
+__compat__.__doc__ = """
+ A cross-version compatibility configuration for pyparsing features that will be
+ released in a future version. By setting values in this configuration to True,
+ those features can be enabled in prior versions for compatibility development
+ and testing.
+
+ - collect_all_And_tokens - flag to enable fix for Issue #63 that fixes erroneous grouping
+ of results names when an And expression is nested within an Or or MatchFirst; set to
+ True to enable bugfix to be released in pyparsing 2.4
+"""
+__compat__.collect_all_And_tokens = True
+
#~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) )
-__all__ = [
+__all__ = [ '__version__', '__versionTime__', '__author__', '__compat__',
'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty',
'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal',
'PrecededBy', 'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or',
@@ -350,7 +364,7 @@ def explain(exc, depth=16):
callers = inspect.getinnerframes(exc.__traceback__, context=depth)
seen = set()
for i, ff in enumerate(callers[-depth:]):
- frm = ff.frame
+ frm = ff[0]
f_self = frm.f_locals.get('self', None)
if isinstance(f_self, ParserElement):
@@ -748,7 +762,7 @@ def make_palindrome(tokens):
print(patt.addParseAction(make_palindrome).parseString("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl'
"""
if isinstance(itemseq, ParseResults):
- self += itemseq
+ self.__iadd__(itemseq)
else:
self.__toklist.extend(itemseq)
@@ -2517,7 +2531,9 @@ def runTests(self, tests, parseAll=True, comment='#',
comments = []
try:
# convert newline marks to actual newlines, and strip leading BOM if present
- t = t.replace(r'\n','\n').lstrip('\ufeff')
+ NL = Literal(r'\n').addParseAction(replaceWith('\n')).ignore(quotedString)
+ BOM = '\ufeff'
+ t = NL.transformString(t.lstrip(BOM))
result = self.parseString(t, parseAll=parseAll)
out.append(result.dump(full=fullDump))
success = success and not failureTests
@@ -2860,6 +2876,7 @@ class Word(Token):
def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False, excludeChars=None ):
super(Word,self).__init__()
if excludeChars:
+ excludeChars = set(excludeChars)
initChars = ''.join(c for c in initChars if c not in excludeChars)
if bodyChars:
bodyChars = ''.join(c for c in bodyChars if c not in excludeChars)
@@ -2920,7 +2937,7 @@ def parseImpl( self, instring, loc, doActions=True ):
loc = result.end()
return loc, result.group()
- if not(instring[ loc ] in self.initChars):
+ if instring[loc] not in self.initChars:
raise ParseException(instring, loc, self.errmsg, self)
start = loc
@@ -2935,9 +2952,9 @@ def parseImpl( self, instring, loc, doActions=True ):
throwException = False
if loc - start < self.minLen:
throwException = True
- if self.maxSpecified and loc < instrlen and instring[loc] in bodychars:
+ elif self.maxSpecified and loc < instrlen and instring[loc] in bodychars:
throwException = True
- if self.asKeyword:
+ elif self.asKeyword:
if (start>0 and instring[start-1] in bodychars) or (loc<instrlen and instring[loc] in bodychars):
throwException = True
@@ -2974,8 +2991,8 @@ class Char(Word):
when defining a match of any single character in a string of
characters.
"""
- def __init__(self, charset):
- super(Char, self).__init__(charset, exact=1)
+ def __init__(self, charset, asKeyword=False, excludeChars=None):
+ super(Char, self).__init__(charset, exact=1, asKeyword=asKeyword, excludeChars=excludeChars)
self.reString = "[%s]" % _escapeRegexRangeChars(self.initCharsOrig)
self.re = re.compile( self.reString )
@@ -3034,24 +3051,41 @@ def __init__( self, pattern, flags=0, asGroupList=False, asMatch=False):
self.mayReturnEmpty = True
self.asGroupList = asGroupList
self.asMatch = asMatch
+ if self.asGroupList:
+ self.parseImpl = self.parseImplAsGroupList
+ if self.asMatch:
+ self.parseImpl = self.parseImplAsMatch
- def parseImpl( self, instring, loc, doActions=True ):
+ def parseImpl(self, instring, loc, doActions=True):
result = self.re.match(instring,loc)
if not result:
raise ParseException(instring, loc, self.errmsg, self)
loc = result.end()
- if self.asMatch:
- ret = result
- elif self.asGroupList:
- ret = result.groups()
- else:
- ret = ParseResults(result.group())
- d = result.groupdict()
- if d:
- for k, v in d.items():
- ret[k] = v
- return loc,ret
+ ret = ParseResults(result.group())
+ d = result.groupdict()
+ if d:
+ for k, v in d.items():
+ ret[k] = v
+ return loc, ret
+
+ def parseImplAsGroupList(self, instring, loc, doActions=True):
+ result = self.re.match(instring,loc)
+ if not result:
+ raise ParseException(instring, loc, self.errmsg, self)
+
+ loc = result.end()
+ ret = result.groups()
+ return loc, ret
+
+ def parseImplAsMatch(self, instring, loc, doActions=True):
+ result = self.re.match(instring,loc)
+ if not result:
+ raise ParseException(instring, loc, self.errmsg, self)
+
+ loc = result.end()
+ ret = result
+ return loc, ret
def __str__( self ):
try:
@@ -3065,7 +3099,7 @@ def __str__( self ):
return self.strRepr
def sub(self, repl):
- """
+ r"""
Return Regex with an attached parse action to transform the parsed
result as if called using `re.sub(expr, repl, string) <https://docs.python.org/3/library/re.html#re.sub>`_.
@@ -3376,7 +3410,7 @@ def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0):
self.minLen = exact
def parseImpl( self, instring, loc, doActions=True ):
- if not(instring[ loc ] in self.matchWhite):
+ if instring[loc] not in self.matchWhite:
raise ParseException(instring, loc, self.errmsg, self)
start = loc
loc += 1
@@ -3425,7 +3459,7 @@ def parseImpl( self, instring, loc, doActions=True ):
class LineStart(_PositionToken):
- """Matches if current position is at the beginning of a line within
+ r"""Matches if current position is at the beginning of a line within
the parse string
Example::
@@ -3648,10 +3682,6 @@ def streamline( self ):
return self
- def setResultsName( self, name, listAllMatches=False ):
- ret = super(ParseExpression,self).setResultsName(name,listAllMatches)
- return ret
-
def validate( self, validateTrace=[] ):
tmp = validateTrace[:]+[self]
for e in self.exprs:
@@ -3772,7 +3802,8 @@ def __init__( self, exprs, savelist = False ):
def streamline(self):
super(Or, self).streamline()
- self.saveAsList = any(e.saveAsList for e in self.exprs)
+ if __compat__.collect_all_And_tokens:
+ self.saveAsList = any(e.saveAsList for e in self.exprs)
return self
def parseImpl( self, instring, loc, doActions=True ):
@@ -3854,13 +3885,13 @@ def __init__( self, exprs, savelist = False ):
super(MatchFirst,self).__init__(exprs, savelist)
if self.exprs:
self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
- # self.saveAsList = any(e.saveAsList for e in self.exprs)
else:
self.mayReturnEmpty = True
def streamline(self):
super(MatchFirst, self).streamline()
- self.saveAsList = any(e.saveAsList for e in self.exprs)
+ if __compat__.collect_all_And_tokens:
+ self.saveAsList = any(e.saveAsList for e in self.exprs)
return self
def parseImpl( self, instring, loc, doActions=True ):
@@ -4630,18 +4661,18 @@ def validate( self, validateTrace=[] ):
def __str__( self ):
if hasattr(self,"name"):
return self.name
- return self.__class__.__name__ + ": ..."
- # stubbed out for now - creates awful memory and perf issues
- self._revertClass = self.__class__
- self.__class__ = _ForwardNoRecurse
+ # Avoid infinite recursion by setting a temporary name
+ self.name = self.__class__.__name__ + ": ..."
+
+ # Use the string representation of main expression.
try:
if self.expr is not None:
retString = _ustr(self.expr)
else:
retString = "None"
finally:
- self.__class__ = self._revertClass
+ del self.name
return self.__class__.__name__ + ": " + retString
def copy(self):
@@ -4652,10 +4683,6 @@ def copy(self):
ret <<= self
return ret
-class _ForwardNoRecurse(Forward):
- def __str__( self ):
- return "..."
-
class TokenConverter(ParseElementEnhance):
"""
Abstract subclass of :class:`ParseExpression`, for converting parsed results.
@@ -4726,7 +4753,7 @@ class Group(TokenConverter):
"""
def __init__( self, expr ):
super(Group,self).__init__( expr )
- self.saveAsList = expr.saveAsList
+ self.saveAsList = True
def postParse( self, instring, loc, tokenlist ):
return [ tokenlist ]
@@ -5189,7 +5216,7 @@ def ungroup(expr):
"""Helper to undo pyparsing's default grouping of And expressions,
even if all but one are non-empty.
"""
- return TokenConverter(expr).setParseAction(lambda t:t[0])
+ return TokenConverter(expr).addParseAction(lambda t:t[0])
def locatedExpr(expr):
"""Helper to decorate a returned token with its starting and ending
@@ -5361,7 +5388,9 @@ def pa(s,l,t):
"""(Deprecated) Helper parse action to convert tokens to lower case.
Deprecated in favor of :class:`pyparsing_common.downcaseTokens`"""
-def _makeTags(tagStr, xml):
+def _makeTags(tagStr, xml,
+ suppress_LT=Suppress("<"),
+ suppress_GT=Suppress(">")):
"""Internal helper to construct opening and closing tag expressions, given a tag name"""
if isinstance(tagStr,basestring):
resname = tagStr
@@ -5372,22 +5401,28 @@ def _makeTags(tagStr, xml):
tagAttrName = Word(alphas,alphanums+"_-:")
if (xml):
tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes )
- openTag = Suppress("<") + tagStr("tag") + \
- Dict(ZeroOrMore(Group( tagAttrName + Suppress("=") + tagAttrValue ))) + \
- Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")
+ openTag = (suppress_LT
+ + tagStr("tag")
+ + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue )))
+ + Optional("/", default=[False])("empty").setParseAction(lambda s,l,t:t[0]=='/')
+ + suppress_GT)
else:
- printablesLessRAbrack = "".join(c for c in printables if c not in ">")
- tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printablesLessRAbrack)
- openTag = Suppress("<") + tagStr("tag") + \
- Dict(ZeroOrMore(Group( tagAttrName.setParseAction(downcaseTokens) + \
- Optional( Suppress("=") + tagAttrValue ) ))) + \
- Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")
- closeTag = Combine(_L("</") + tagStr + ">")
-
- openTag = openTag.setResultsName("start"+"".join(resname.replace(":"," ").title().split())).setName("<%s>" % resname)
- closeTag = closeTag.setResultsName("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % resname)
+ tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printables, excludeChars=">")
+ openTag = (suppress_LT
+ + tagStr("tag")
+ + Dict(ZeroOrMore(Group(tagAttrName.setParseAction(downcaseTokens)
+ + Optional(Suppress("=") + tagAttrValue))))
+ + Optional("/",default=[False])("empty").setParseAction(lambda s,l,t:t[0]=='/')
+ + suppress_GT)
+ closeTag = Combine(_L("</") + tagStr + ">", adjacent=False)
+
+ openTag.setName("<%s>" % resname)
+ # add start<tagname> results name in parse action now that ungrouped names are not reported at two levels
+ openTag.addParseAction(lambda t: t.__setitem__("start"+"".join(resname.replace(":"," ").title().split()), t.copy()))
+ closeTag = closeTag("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % resname)
openTag.tag = resname
closeTag.tag = resname
+ openTag.tag_body = SkipTo(closeTag())
return openTag, closeTag
def makeHTMLTags(tagStr):
@@ -5852,12 +5887,17 @@ def eggs(z):
':',
[[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]]
"""
+ backup_stack = indentStack[:]
+
+ def reset_stack():
+ indentStack[:] = backup_stack
+
def checkPeerIndent(s,l,t):
if l >= len(s): return
curCol = col(l,s)
if curCol != indentStack[-1]:
if curCol > indentStack[-1]:
- raise ParseFatalException(s,l,"illegal nesting")
+ raise ParseException(s,l,"illegal nesting")
raise ParseException(s,l,"not a peer entry")
def checkSubIndent(s,l,t):
@@ -5885,6 +5925,7 @@ def checkUnindent(s,l,t):
else:
smExpr = Group( Optional(NL) +
(OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) )
+ smExpr.setFailAction(lambda a, b, c, d: reset_stack())
blockStatementExpr.ignore(_bslash + LineEnd())
return smExpr.setName('indented block')
diff --git a/src/pip/_vendor/urllib3/__init__.py b/src/pip/_vendor/urllib3/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/_collections.py b/src/pip/_vendor/urllib3/_collections.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/connection.py b/src/pip/_vendor/urllib3/connection.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/connectionpool.py b/src/pip/_vendor/urllib3/connectionpool.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/__init__.py b/src/pip/_vendor/urllib3/contrib/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/_appengine_environ.py b/src/pip/_vendor/urllib3/contrib/_appengine_environ.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/_securetransport/__init__.py b/src/pip/_vendor/urllib3/contrib/_securetransport/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/_securetransport/bindings.py b/src/pip/_vendor/urllib3/contrib/_securetransport/bindings.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/_securetransport/low_level.py b/src/pip/_vendor/urllib3/contrib/_securetransport/low_level.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/appengine.py b/src/pip/_vendor/urllib3/contrib/appengine.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/ntlmpool.py b/src/pip/_vendor/urllib3/contrib/ntlmpool.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/pyopenssl.py b/src/pip/_vendor/urllib3/contrib/pyopenssl.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/securetransport.py b/src/pip/_vendor/urllib3/contrib/securetransport.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/contrib/socks.py b/src/pip/_vendor/urllib3/contrib/socks.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/exceptions.py b/src/pip/_vendor/urllib3/exceptions.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/fields.py b/src/pip/_vendor/urllib3/fields.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/filepost.py b/src/pip/_vendor/urllib3/filepost.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/packages/__init__.py b/src/pip/_vendor/urllib3/packages/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/packages/backports/__init__.py b/src/pip/_vendor/urllib3/packages/backports/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/packages/backports/makefile.py b/src/pip/_vendor/urllib3/packages/backports/makefile.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/packages/six.py b/src/pip/_vendor/urllib3/packages/six.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/packages/ssl_match_hostname/__init__.py b/src/pip/_vendor/urllib3/packages/ssl_match_hostname/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/packages/ssl_match_hostname/_implementation.py b/src/pip/_vendor/urllib3/packages/ssl_match_hostname/_implementation.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/poolmanager.py b/src/pip/_vendor/urllib3/poolmanager.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/request.py b/src/pip/_vendor/urllib3/request.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/response.py b/src/pip/_vendor/urllib3/response.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/__init__.py b/src/pip/_vendor/urllib3/util/__init__.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/connection.py b/src/pip/_vendor/urllib3/util/connection.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/queue.py b/src/pip/_vendor/urllib3/util/queue.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/request.py b/src/pip/_vendor/urllib3/util/request.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/response.py b/src/pip/_vendor/urllib3/util/response.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/retry.py b/src/pip/_vendor/urllib3/util/retry.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/ssl_.py b/src/pip/_vendor/urllib3/util/ssl_.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/timeout.py b/src/pip/_vendor/urllib3/util/timeout.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/url.py b/src/pip/_vendor/urllib3/util/url.py
old mode 100755
new mode 100644
diff --git a/src/pip/_vendor/urllib3/util/wait.py b/src/pip/_vendor/urllib3/util/wait.py
old mode 100755
new mode 100644
</patch>
|
[]
|
[]
| |||
Qiskit__qiskit-6609
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pylint not applied to tools directory
Pylint is not applied to the tools directory. Some of the tools are not compliant with the project's style.
</issue>
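One way the gap could be closed, sketched purely as an illustration (the project's real lint entry point and rcfile name may differ), is to make the lint run cover `tools` alongside the `qiskit` package:

```python
# Illustrative sketch: invoke pylint programmatically over both directories.
# The rcfile name is an assumption; Run() exits the interpreter when done,
# mirroring the command-line behaviour.
from pylint.lint import Run

Run(["qiskit", "tools", "--rcfile=.pylintrc"])
```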
<code>
[start of README.md]
1 # Qiskit Terra
2 [](https://opensource.org/licenses/Apache-2.0)<!--- long-description-skip-begin -->[](https://travis-ci.com/Qiskit/qiskit-terra)[](https://github.com/Qiskit/qiskit-terra/releases)[](https://pypi.org/project/qiskit-terra/)[](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)<!--- long-description-skip-end -->
3
4 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
5
6 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built.
7
8 ## Installation
9
10 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra.
11
12 ```bash
13 pip install qiskit
14 ```
15
16 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version.
17
18 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source).
19
20 ## Creating Your First Quantum Program in Qiskit Terra
21
22 Now that Qiskit is installed, it's time to begin working with Terra.
23
24 We are ready to try out a quantum circuit example, which is simulated locally using
25 the Qiskit BasicAer element. This is a simple example that makes an entangled state.
26
27 ```
28 $ python
29 ```
30
31 ```python
32 >>> from qiskit import QuantumCircuit, transpile
33 >>> from qiskit.providers.basicaer import QasmSimulatorPy
34 >>> qc = QuantumCircuit(2, 2)
35 >>> qc.h(0)
36 >>> qc.cx(0, 1)
37 >>> qc.measure([0,1], [0,1])
38 >>> backend_sim = QasmSimulatorPy()
39 >>> transpiled_qc = transpile(qc, backend_sim)
40 >>> result = backend_sim.run(transpiled_qc).result()
41 >>> print(result.get_counts(qc))
42 ```
43
44 In this case, the output will be:
45
46 ```python
47 {'00': 513, '11': 511}
48 ```
49
50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to
51 run the same program on a real quantum computer via IBMQ.
52
53 ### Executing your code on a real quantum chip
54
55 You can also use Qiskit to execute your code on a
56 **real quantum chip**.
57 In order to do so, you need to configure Qiskit for using the credentials in
58 your IBM Q account:
59
60 #### Configure your IBMQ credentials
61
62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so.
63
64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account.
65
66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run:
67
68 ```python
69 >>> from qiskit import IBMQ
70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL')
71 ```
72
73 After calling `IBMQ.save_account()`, your credentials will be stored on disk.
74 Once they are stored, at any point in the future you can load and use them
75 in your program simply via:
76
77 ```python
78 >>> from qiskit import IBMQ
79 >>> IBMQ.load_account()
80 ```
81
82 Those who do not want to save their credentials to disk should use instead:
83
84 ```python
85 >>> from qiskit import IBMQ
86 >>> IBMQ.enable_account('MY_API_TOKEN')
87 ```
88
89 and the token will only be active for the session. For examples using Terra with real
90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in
91 the levels.
92
93 ## Contribution Guidelines
94
95 If you'd like to contribute to Qiskit Terra, please take a look at our
96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
97
98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please
99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack)
100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions.
101 For questions that are more suited for a forum, we use the Qiskit tag on [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit).
102
103 ## Next Steps
104
105 Now you're set up and ready to check out some of the other examples from our
106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository.
107
108 ## Authors and Citation
109
110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute
111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib).
112
113 ## Changelog and Release Notes
114
115 The changelog for a particular release is dynamically generated and gets
116 written to that release's page on GitHub. For example, you can
117 find the page for the `0.9.0` release here:
118
119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0
120
121 The changelog for the current release can be found in the releases tab:
122 [](https://github.com/Qiskit/qiskit-terra/releases)
123 The changelog provides a quick overview of notable changes for a given
124 release.
125
126 Additionally, detailed release notes are written for each release to
127 document what has changed. This includes any
128 documentation on potential breaking changes on upgrade and on new features.
129 For example, you can find the release notes for the `0.9.0` release in the
130 Qiskit documentation here:
131
132 https://qiskit.org/documentation/release_notes.html#terra-0-9
133
134 ## License
135
136 [Apache License 2.0](LICENSE.txt)
137
[end of README.md]
[start of qiskit/tools/jupyter/backend_overview.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """A module for monitoring backends."""
14
15 import time
16 import threading
17 import types
18 from IPython.display import display
19 from IPython.core.magic import line_magic, Magics, magics_class
20 from IPython.core import magic_arguments
21 import matplotlib.pyplot as plt
22 import ipywidgets as widgets
23 from qiskit.tools.monitor.overview import get_unique_backends
24 from qiskit.visualization.gate_map import plot_gate_map
25
26
27 @magics_class
28 class BackendOverview(Magics):
29 """A class of status magic functions."""
30
31 @line_magic
32 @magic_arguments.magic_arguments()
33 @magic_arguments.argument(
34 "-i", "--interval", type=float, default=60, help="Interval for status check."
35 )
36 def qiskit_backend_overview(self, line=""):
37 """A Jupyter magic function to monitor backends."""
38 args = magic_arguments.parse_argstring(self.qiskit_backend_overview, line)
39
40 unique_hardware_backends = get_unique_backends()
41 _value = "<h2 style ='color:#ffffff; background-color:#000000;"
42 _value += "padding-top: 1%; padding-bottom: 1%;padding-left: 1%;"
43 _value += "margin-top: 0px'>Backend Overview</h2>"
44 backend_title = widgets.HTML(value=_value, layout=widgets.Layout(margin="0px 0px 0px 0px"))
45
46 build_back_widgets = [backend_widget(b) for b in unique_hardware_backends]
47
48 _backends = []
49 # Sort backends by operational or not
50 oper_ord_backends = []
51 for n, back in enumerate(unique_hardware_backends):
52 if back.status().operational:
53 oper_ord_backends = [build_back_widgets[n]] + oper_ord_backends
54 _backends = [back] + _backends
55 else:
56 oper_ord_backends = oper_ord_backends + [build_back_widgets[n]]
57 _backends = _backends + [back]
58
59 qubit_label = widgets.Label(value="Num. Qubits")
60 qv_label = widgets.Label(value="Quantum Vol.")
61 pend_label = widgets.Label(
62 value="Pending Jobs", layout=widgets.Layout(margin="5px 0px 0px 0px")
63 )
64 least_label = widgets.Label(
65 value="Least Busy", layout=widgets.Layout(margin="10px 0px 0px 0px")
66 )
67 oper_label = widgets.Label(
68 value="Operational", layout=widgets.Layout(margin="5px 0px 0px 0px")
69 )
70 t12_label = widgets.Label(
71 value="Avg. T1 / T2", layout=widgets.Layout(margin="10px 0px 0px 0px")
72 )
73 cx_label = widgets.Label(
74 value="Avg. CX Err.", layout=widgets.Layout(margin="8px 0px 0px 0px")
75 )
76 meas_label = widgets.Label(
77 value="Avg. Meas. Err.", layout=widgets.Layout(margin="8px 0px 0px 0px")
78 )
79
80 labels_widget = widgets.VBox(
81 [
82 qubit_label,
83 qv_label,
84 pend_label,
85 oper_label,
86 least_label,
87 t12_label,
88 cx_label,
89 meas_label,
90 ],
91 layout=widgets.Layout(margin="295px 0px 0px 0px", min_width="100px"),
92 )
93
94 backend_grid = GridBox_with_thread(
95 children=oper_ord_backends,
96 layout=widgets.Layout(
97 grid_template_columns="250px " * len(unique_hardware_backends),
98 grid_template_rows="auto",
99 grid_gap="0px 25px",
100 ),
101 )
102
103 backend_grid._backends = _backends # pylint: disable=attribute-defined-outside-init
104 backend_grid._update = types.MethodType( # pylint: disable=attribute-defined-outside-init
105 update_backend_info, backend_grid
106 )
107
108 backend_grid._thread = threading.Thread( # pylint: disable=attribute-defined-outside-init
109 target=backend_grid._update, args=(args.interval,)
110 )
111 backend_grid._thread.start()
112
113 back_box = widgets.HBox([labels_widget, backend_grid])
114
115 back_monitor = widgets.VBox([backend_title, back_box])
116 display(back_monitor)
117
118
119 class GridBox_with_thread(widgets.GridBox): # pylint: disable=invalid-name
120 """A GridBox that will close an attached thread"""
121
122 def __del__(self):
123 """Object disposal"""
124 if hasattr(self, "_thread"):
125 try:
126 self._thread.do_run = False
127 self._thread.join()
128 except Exception: # pylint: disable=broad-except
129 pass
130 self.close()
131
132
133 def backend_widget(backend):
134 """Creates a backend widget."""
135 config = backend.configuration().to_dict()
136 props = backend.properties().to_dict()
137
138 name = widgets.HTML(value=f"<h4>{backend.name()}</h4>", layout=widgets.Layout())
139
140 num_qubits = config["n_qubits"]
141
142 qv_val = "-"
143 if "quantum_volume" in config.keys():
144 if config["quantum_volume"]:
145 qv_val = config["quantum_volume"]
146
147 qubit_count = widgets.HTML(
148 value=f"<h5><b>{num_qubits}</b></h5>",
149 layout=widgets.Layout(justify_content="center"),
150 )
151
152 qv_value = widgets.HTML(
153 value=f"<h5>{qv_val}</h5>",
154 layout=widgets.Layout(justify_content="center"),
155 )
156
157 cmap = widgets.Output(
158 layout=widgets.Layout(
159 min_width="250px",
160 max_width="250px",
161 max_height="250px",
162 min_height="250px",
163 justify_content="center",
164 align_items="center",
165 margin="0px 0px 0px 0px",
166 )
167 )
168
169 with cmap:
170 _cmap_fig = plot_gate_map(backend, plot_directed=False, label_qubits=False)
171 if _cmap_fig is not None:
172 display(_cmap_fig)
173 # Prevents plot from showing up twice.
174 plt.close(_cmap_fig)
175
176 pending = generate_jobs_pending_widget()
177
178 is_oper = widgets.HTML(value="<h5></h5>", layout=widgets.Layout(justify_content="center"))
179
180 least_busy = widgets.HTML(value="<h5></h5>", layout=widgets.Layout(justify_content="center"))
181
182 t1_units = props["qubits"][0][0]["unit"]
183 avg_t1 = round(sum(q[0]["value"] for q in props["qubits"]) / num_qubits, 1)
184 avg_t2 = round(sum(q[1]["value"] for q in props["qubits"]) / num_qubits, 1)
185 t12_widget = widgets.HTML(
186 value=f"<h5>{avg_t1} / {avg_t2} {t1_units}</h5>",
187 layout=widgets.Layout(),
188 )
189
190 avg_cx_err = "NA"
191 if config["coupling_map"]:
192 sum_cx_err = 0
193 num_cx = 0
194 for gate in props["gates"]:
195 if gate["gate"] == "cx":
196 for param in gate["parameters"]:
197 if param["name"] == "gate_error":
198 # Value == 1.0 means gate effectively off
199 if param["value"] != 1.0:
200 sum_cx_err += param["value"]
201 num_cx += 1
202 avg_cx_err = round(sum_cx_err / (num_cx), 4)
203
204 cx_widget = widgets.HTML(value=f"<h5>{avg_cx_err}</h5>", layout=widgets.Layout())
205
206 avg_meas_err = 0
207 for qub in props["qubits"]:
208 for item in qub:
209 if item["name"] == "readout_error":
210 avg_meas_err += item["value"]
211 avg_meas_err = round(avg_meas_err / num_qubits, 4)
212 meas_widget = widgets.HTML(value=f"<h5>{avg_meas_err}</h5>", layout=widgets.Layout())
213
214 out = widgets.VBox(
215 [
216 name,
217 cmap,
218 qubit_count,
219 qv_value,
220 pending,
221 is_oper,
222 least_busy,
223 t12_widget,
224 cx_widget,
225 meas_widget,
226 ],
227 layout=widgets.Layout(display="inline-flex", flex_flow="column", align_items="center"),
228 )
229
230 out._is_alive = True
231 return out
232
233
234 def update_backend_info(self, interval=60):
235 """Updates the monitor info
236 Called from another thread.
237 """
238 my_thread = threading.currentThread()
239 current_interval = 0
240 started = False
241 all_dead = False
242 stati = [None] * len(self._backends)
243 while getattr(my_thread, "do_run", True) and not all_dead:
244 if current_interval == interval or started is False:
245 for ind, back in enumerate(self._backends):
246 _value = self.children[ind].children[2].value
247 _head = _value.split("<b>")[0]
248 try:
249 _status = back.status()
250 stati[ind] = _status
251 except Exception: # pylint: disable=broad-except
252 self.children[ind].children[2].value = _value.replace(
253 _head, "<h5 style='color:#ff5c49'>"
254 )
255 self.children[ind]._is_alive = False
256 else:
257 self.children[ind]._is_alive = True
258 self.children[ind].children[2].value = _value.replace(_head, "<h5>")
259
260 idx = list(range(len(self._backends)))
261 pending = [s.pending_jobs for s in stati]
262 _, least_idx = zip(*sorted(zip(pending, idx)))
263
264 # Make sure least pending is operational
265 for ind in least_idx:
266 if stati[ind].operational:
267 least_pending_idx = ind
268 break
269
270 for var in idx:
271 if var == least_pending_idx:
272 self.children[var].children[6].value = "<h5 style='color:#34bc6e'>True</h5>"
273 else:
274 self.children[var].children[6].value = "<h5 style='color:#dc267f'>False</h5>"
275
276 self.children[var].children[4].children[1].max = max(
277 self.children[var].children[4].children[1].max, pending[var] + 10
278 )
279 self.children[var].children[4].children[1].value = pending[var]
280 if stati[var].operational:
281 self.children[var].children[5].value = "<h5 style='color:#34bc6e'>True</h5>"
282 else:
283 self.children[var].children[5].value = "<h5 style='color:#dc267f'>False</h5>"
284
285 started = True
286 current_interval = 0
287 time.sleep(1)
288 all_dead = not any(wid._is_alive for wid in self.children)
289 current_interval += 1
290
291
292 def generate_jobs_pending_widget():
293 """Generates a jobs_pending progress bar widget."""
294 pbar = widgets.IntProgress(
295 value=0,
296 min=0,
297 max=50,
298 description="",
299 orientation="horizontal",
300 layout=widgets.Layout(max_width="180px"),
301 )
302 pbar.style.bar_color = "#71cddd"
303
304 pbar_current = widgets.Label(value=str(pbar.value), layout=widgets.Layout(min_width="auto"))
305 pbar_max = widgets.Label(value=str(pbar.max), layout=widgets.Layout(min_width="auto"))
306
307 def _on_max_change(change):
308 pbar_max.value = str(change["new"])
309
310 def _on_val_change(change):
311 pbar_current.value = str(change["new"])
312
313 pbar.observe(_on_max_change, names="max")
314 pbar.observe(_on_val_change, names="value")
315
316 jobs_widget = widgets.HBox(
317 [pbar_current, pbar, pbar_max],
318 layout=widgets.Layout(max_width="250px", min_width="250px", justify_content="center"),
319 )
320
321 return jobs_widget
322
[end of qiskit/tools/jupyter/backend_overview.py]
[start of qiskit/tools/jupyter/copyright.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12 # pylint: disable=unused-argument
13
14 """A module for monitoring backends."""
15
16 import datetime
17 from IPython.display import HTML, display
18 from IPython.core.magic import line_magic, Magics, magics_class
19
20
21 @magics_class
22 class Copyright(Magics):
23 """A class of status magic functions."""
24
25 @line_magic
26 def qiskit_copyright(self, line="", cell=None):
27 """A Jupyter magic function return qiskit copyright"""
28 now = datetime.datetime.now()
29
30 html = "<div style='width: 100%; background-color:#d5d9e0;"
31 html += "padding-left: 10px; padding-bottom: 10px; padding-right: 10px; padding-top: 5px'>"
32 html += "<h3>This code is a part of Qiskit</h3>"
33 html += "<p>© Copyright IBM 2017, %s.</p>" % now.year
34 html += "<p>This code is licensed under the Apache License, Version 2.0. You may<br>"
35 html += "obtain a copy of this license in the LICENSE.txt file in the root directory<br> "
36 html += "of this source tree or at http://www.apache.org/licenses/LICENSE-2.0."
37
38 html += "<p>Any modifications or derivative works of this code must retain this<br>"
39 html += "copyright notice, and modified files need to carry a notice indicating<br>"
40 html += "that they have been altered from the originals.</p>"
41 html += "</div>"
42 return display(HTML(html))
43
[end of qiskit/tools/jupyter/copyright.py]
[start of qiskit/tools/jupyter/job_watcher.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12 # pylint: disable=unused-argument
13
14 """A module for the job watcher"""
15
16 from IPython.core.magic import line_magic, Magics, magics_class
17 from qiskit.tools.events.pubsub import Subscriber
18 from qiskit.exceptions import MissingOptionalLibraryError
19
20 try:
21 from qiskit.providers.ibmq.job.exceptions import IBMQJobApiError
22
23 HAS_IBMQ = True
24 except ImportError:
25 HAS_IBMQ = False
26 from .job_widgets import build_job_viewer, make_clear_button, make_labels, create_job_widget
27 from .watcher_monitor import _job_monitor
28
29
30 class JobWatcher(Subscriber):
31 """An IBM Q job watcher."""
32
33 def __init__(self):
34 super().__init__()
35 if not HAS_IBMQ:
36 raise MissingOptionalLibraryError(
37 libname="qiskit-ibmq-provider",
38 name="the job watcher",
39 pip_install="pip install qiskit-ibmq-provider",
40 )
41 self.jobs = []
42 self._init_subscriber()
43 self.job_viewer = None
44 self._clear_button = make_clear_button(self)
45 self._labels = make_labels()
46 self.refresh_viewer()
47
48 def refresh_viewer(self):
49 """Refreshes the job viewer."""
50 if self.job_viewer is not None:
51 self.job_viewer.children[0].children = [self._clear_button, self._labels] + list(
52 reversed(self.jobs)
53 )
54
55 def stop_viewer(self):
56 """Stops the job viewer."""
57 if self.job_viewer:
58 self.job_viewer.close()
59 self.job_viewer = None
60
61 def start_viewer(self):
62 """Starts the job viewer"""
63 self.job_viewer = build_job_viewer()
64 self.refresh_viewer()
65
66 def update_single_job(self, update_info):
67 """Update a single job instance
68
69 Args:
70 update_info (tuple): Updated job info.
71 """
72 job_id = update_info[0]
73 found_job = False
74 ind = None
75 for idx, job in enumerate(self.jobs):
76 if job.job_id == job_id:
77 found_job = True
78 ind = idx
79 break
80 if found_job:
81 job_wid = self.jobs[ind]
82 # update status
83 if update_info[1] == "DONE":
84 stat = f"<font style='color:#34BC6E'>{update_info[1]}</font>"
85 elif update_info[1] == "ERROR":
86 stat = f"<font style='color:#DC267F'>{update_info[1]}</font>"
87 elif update_info[1] == "CANCELLED":
88 stat = f"<font style='color:#FFB000'>{update_info[1]}</font>"
89 else:
90 stat = update_info[1]
91 job_wid.children[3].value = stat
92 # update queue
93 if update_info[2] == 0:
94 queue = "-"
95 else:
96 queue = str(update_info[2])
97 job_wid.children[4].value = queue
98 # update msg
99 job_wid.children[5].value = update_info[3]
100
101 def cancel_job(self, job_id):
102 """Cancels a job in the watcher
103
104 Args:
105 job_id (str): Job id to remove.
106
107 Raises:
108 Exception: Job id not found.
109 """
110 do_pop = False
111 ind = None
112 for idx, job in enumerate(self.jobs):
113 if job.job_id == job_id:
114 do_pop = True
115 ind = idx
116 break
117 if not do_pop:
118 raise Exception("job_id not found")
119 if "CANCELLED" not in self.jobs[ind].children[3].value:
120 try:
121 self.jobs[ind].job.cancel()
122 status = self.jobs[ind].job.status()
123 except IBMQJobApiError:
124 pass
125 else:
126 self.update_single_job((self.jobs[ind].job_id, status.name, 0, status.value))
127
128 def clear_done(self):
129 """Clears the done jobs from the list."""
130 _temp_jobs = []
131 do_refresh = False
132 for job in self.jobs:
133 job_str = job.children[3].value
134 if not (("DONE" in job_str) or ("CANCELLED" in job_str) or ("ERROR" in job_str)):
135 _temp_jobs.append(job)
136 else:
137 job.close()
138 do_refresh = True
139 if do_refresh:
140 self.jobs = _temp_jobs
141 self.refresh_viewer()
142
143 def _init_subscriber(self):
144 def _add_job(job):
145 status = job.status()
146 job_widget = create_job_widget(
147 self, job, job.backend(), status.name, job.queue_position(), status.value
148 )
149 self.jobs.append(job_widget)
150 self.refresh_viewer()
151 _job_monitor(job, status, self)
152
153 self.subscribe("ibmq.job.start", _add_job)
154
155
156 @magics_class
157 class JobWatcherMagic(Magics):
158 """A class for enabling/disabling the job watcher."""
159
160 @line_magic
161 def qiskit_job_watcher(self, line="", cell=None):
162 """A Jupyter magic function to enable job watcher."""
163 _JOB_WATCHER.stop_viewer()
164 _JOB_WATCHER.start_viewer()
165
166 @line_magic
167 def qiskit_disable_job_watcher(self, line="", cell=None):
168 """A Jupyter magic function to disable job watcher."""
169 _JOB_WATCHER.stop_viewer()
170
171
172 if HAS_IBMQ:
173 # The Jupyter job watcher instance
174 _JOB_WATCHER = JobWatcher()
175
[end of qiskit/tools/jupyter/job_watcher.py]
[start of qiskit/tools/jupyter/jupyter_magics.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """A module of magic functions"""
14
15 import time
16 import threading
17 from IPython import get_ipython
18 from IPython.display import display
19 from IPython.core import magic_arguments
20 from IPython.core.magic import cell_magic, line_magic, Magics, magics_class, register_line_magic
21
22 from qiskit.exceptions import MissingOptionalLibraryError
23
24 try:
25 import ipywidgets as widgets
26 except ImportError as ex:
27 raise MissingOptionalLibraryError(
28 libname="ipywidgets",
29 name="jupyter magics",
30 pip_install="pip install ipywidgets",
31 ) from ex
32 import qiskit
33 from qiskit.visualization.matplotlib import HAS_MATPLOTLIB
34 from qiskit.tools.events.progressbar import TextProgressBar
35 from .progressbar import HTMLProgressBar
36 from .library import circuit_library_widget
37
38
39 def _html_checker(job_var, interval, status, header, _interval_set=False):
40 """Internal function that updates the status
41 of a HTML job monitor.
42
43 Args:
44 job_var (BaseJob): The job to keep track of.
45 interval (int): The status check interval
46 status (widget): HTML ipywidget for output to screen
47 header (str): String representing HTML code for status.
48 _interval_set (bool): Was interval set by user?
49 """
50 job_status = job_var.status()
51 job_status_name = job_status.name
52 job_status_msg = job_status.value
53 status.value = header % (job_status_msg)
54 while job_status_name not in ["DONE", "CANCELLED"]:
55 time.sleep(interval)
56 job_status = job_var.status()
57 job_status_name = job_status.name
58 job_status_msg = job_status.value
59 if job_status_name == "ERROR":
60 break
61 if job_status_name == "QUEUED":
62 job_status_msg += " (%s)" % job_var.queue_position()
63 if job_var.queue_position() is None:
64 interval = 2
65 elif not _interval_set:
66 interval = max(job_var.queue_position(), 2)
67 else:
68 if not _interval_set:
69 interval = 2
70 status.value = header % (job_status_msg)
71
72 status.value = header % (job_status_msg)
73
74
75 @magics_class
76 class StatusMagic(Magics):
77 """A class of status magic functions."""
78
79 @cell_magic
80 @magic_arguments.magic_arguments()
81 @magic_arguments.argument(
82 "-i", "--interval", type=float, default=None, help="Interval for status check."
83 )
84 def qiskit_job_status(self, line="", cell=None):
85 """A Jupyter magic function to check the status of a Qiskit job instance."""
86 args = magic_arguments.parse_argstring(self.qiskit_job_status, line)
87
88 if args.interval is None:
89 args.interval = 2
90 _interval_set = False
91 else:
92 _interval_set = True
93
94 # Split cell lines to get LHS variables
95 cell_lines = cell.split("\n")
96 line_vars = []
97 for cline in cell_lines:
98 if "=" in cline and "==" not in cline:
99 line_vars.append(cline.replace(" ", "").split("=")[0])
100 elif ".append(" in cline:
101 line_vars.append(cline.replace(" ", "").split("(")[0])
102
103 # Execute the cell
104 self.shell.ex(cell)
105
106 # Look for all vars that are BaseJob instances
107 jobs = []
108 for var in line_vars:
109 iter_var = False
110 if "#" not in var:
111 # The line var is a list or array, but we cannot parse the index
112 # so just iterate over the whole array for jobs.
113 if "[" in var:
114 var = var.split("[")[0]
115 iter_var = True
116 elif ".append" in var:
117 var = var.split(".append")[0]
118 iter_var = True
119
120 if iter_var:
121 for item in self.shell.user_ns[var]:
122 if isinstance(item, qiskit.providers.basejob.BaseJob):
123 jobs.append(item)
124 else:
125 if isinstance(self.shell.user_ns[var], qiskit.providers.basejob.BaseJob):
126 jobs.append(self.shell.user_ns[var])
127
128 # Must have one job class
129 if not any(jobs):
130 raise Exception("Cell must contain at least one variable of BaseJob type.")
131
132 # List index of job if checking status of multiple jobs.
133 multi_job = False
134 if len(jobs) > 1:
135 multi_job = True
136
137 job_checkers = []
138 # Loop over every BaseJob that was found.
139 for idx, job_var in enumerate(jobs):
140 style = "font-size:16px;"
141 if multi_job:
142 idx_str = "[%s]" % idx
143 else:
144 idx_str = ""
145 header = f"<p style='{style}'>Job Status {idx_str}: %s </p>"
146 status = widgets.HTML(value=header % job_var.status().value)
147
148 thread = threading.Thread(
149 target=_html_checker, args=(job_var, args.interval, status, header, _interval_set)
150 )
151 thread.start()
152 job_checkers.append(status)
153
154 # Group all HTML widgets into single vertical layout
155 box = widgets.VBox(job_checkers)
156 display(box)
157
158
159 @magics_class
160 class ProgressBarMagic(Magics):
161 """A class of progress bar magic functions."""
162
163 @line_magic
164 @magic_arguments.magic_arguments()
165 @magic_arguments.argument(
166 "-t", "--type", type=str, default="html", help="Type of progress bar, 'html' or 'text'."
167 )
168 def qiskit_progress_bar(self, line="", cell=None): # pylint: disable=unused-argument
169 """A Jupyter magic function to generate progressbar."""
170 args = magic_arguments.parse_argstring(self.qiskit_progress_bar, line)
171 if args.type == "html":
172 pbar = HTMLProgressBar()
173 elif args.type == "text":
174 pbar = TextProgressBar()
175 else:
176 raise qiskit.QiskitError("Invalid progress bar type.")
177
178 return pbar
179
180
181 if HAS_MATPLOTLIB and get_ipython():
182
183 @register_line_magic
184 def circuit_library_info(circuit: qiskit.QuantumCircuit) -> None:
185 """Displays library information for a quantum circuit.
186
187 Args:
188 circuit: Input quantum circuit.
189 """
190 shell = get_ipython()
191 circ = shell.ev(circuit)
192 circuit_library_widget(circ)
193
[end of qiskit/tools/jupyter/jupyter_magics.py]
[start of qiskit/tools/jupyter/library.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2018.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 # pylint: disable=invalid-name,no-name-in-module,ungrouped-imports
14
15 """A circuit library widget module"""
16
17 import ipywidgets as wid
18 from IPython.display import display
19 from qiskit import QuantumCircuit
20 from qiskit.exceptions import MissingOptionalLibraryError
21
22 try:
23 import pygments
24 from pygments.formatters import HtmlFormatter
25 from qiskit.qasm.pygments import QasmHTMLStyle, OpenQASMLexer
26
27 HAS_PYGMENTS = True
28 except Exception: # pylint: disable=broad-except
29 HAS_PYGMENTS = False
30
31
32 def circuit_data_table(circuit: QuantumCircuit) -> wid.HTML:
33 """Create a HTML table widget for a given quantum circuit.
34
35 Args:
36 circuit: Input quantum circuit.
37
38 Returns:
39 Output widget.
40 """
41
42 ops = circuit.count_ops()
43
44 num_nl = circuit.num_nonlocal_gates()
45
46 html = "<table>"
47 html += """<style>
48 table {
49 font-family: "IBM Plex Sans", Arial, Helvetica, sans-serif;
50 border-collapse: collapse;
51 width: 100%;
52 border-left: 2px solid #212121;
53 }
54
55 th {
56 text-align: left;
57 padding: 5px 5px 5px 5px;
58 width: 100%;
59 background-color: #988AFC;
60 color: #fff;
61 font-size: 14px;
62 border-left: 2px solid #988AFC;
63 }
64
65 td {
66 text-align: left;
67 padding: 5px 5px 5px 5px;
68 width: 100%;
69 font-size: 12px;
70 font-weight: medium;
71 }
72
73 tr:nth-child(even) {background-color: #f6f6f6;}
74 </style>"""
75 html += f"<tr><th>{circuit.name}</th><th></tr>"
76 html += f"<tr><td>Width</td><td>{circuit.width()}</td></tr>"
77 html += f"<tr><td>Depth</td><td>{circuit.depth()}</td></tr>"
78 html += f"<tr><td>Total Gates</td><td>{sum(ops.values())}</td></tr>"
79 html += f"<tr><td>Non-local Gates</td><td>{num_nl}</td></tr>"
80 html += "</table>"
81
82 out_wid = wid.HTML(html)
83 return out_wid
84
85
86 head_style = (
87 "font-family: IBM Plex Sans, Arial, Helvetica, sans-serif;"
88 " font-size: 20px; font-weight: medium;"
89 )
90
91 property_label = wid.HTML(
92 f"<p style='{head_style}'>Circuit Properties</p>",
93 layout=wid.Layout(margin="0px 0px 10px 0px"),
94 )
95
96
97 def properties_widget(circuit: QuantumCircuit) -> wid.VBox:
98 """Create a HTML table widget with header for a given quantum circuit.
99
100 Args:
101 circuit: Input quantum circuit.
102
103 Returns:
104 Output widget.
105 """
106 properties = wid.VBox(
107 children=[property_label, circuit_data_table(circuit)],
108 layout=wid.Layout(width="40%", height="auto"),
109 )
110 return properties
111
112
113 def qasm_widget(circuit: QuantumCircuit) -> wid.VBox:
114 """Generate a QASM widget with header for a quantum circuit.
115
116 Args:
117 circuit: Input quantum circuit.
118
119 Returns:
120 Output widget.
121
122 Raises:
123 MissingOptionalLibraryError: If pygments is not installed
124 """
125 if not HAS_PYGMENTS:
126 raise MissingOptionalLibraryError(
127 libname="pygments>2.4",
128 name="qasm_widget",
129 pip_install="pip install pygments",
130 )
131 qasm_code = circuit.qasm()
132 code = pygments.highlight(qasm_code, OpenQASMLexer(), HtmlFormatter())
133
134 html_style = HtmlFormatter(style=QasmHTMLStyle).get_style_defs(".highlight")
135
136 code_style = (
137 """
138 <style>
139 .highlight
140 {
141 font-family: monospace;
142 font-size: 14px;
143 line-height: 1.7em;
144 }
145 .highlight .err { color: #000000; background-color: #FFFFFF }
146 %s
147 </style>
148 """
149 % html_style
150 )
151
152 out = wid.HTML(
153 code_style + code,
154 layout=wid.Layout(max_height="500px", height="auto", overflow="scroll scroll"),
155 )
156
157 out_label = wid.HTML(
158 f"<p style='{head_style}'>OpenQASM</p>",
159 layout=wid.Layout(margin="0px 0px 10px 0px"),
160 )
161
162 qasm = wid.VBox(
163 children=[out_label, out],
164 layout=wid.Layout(
165 height="auto", max_height="500px", width="60%", margin="0px 0px 0px 20px"
166 ),
167 )
168
169 qasm._code_length = len(qasm_code.split("\n"))
170 return qasm
171
172
173 def circuit_diagram_widget() -> wid.Box:
174 """Create a circuit diagram widget.
175
176 Returns:
177 Output widget.
178 """
179 # The max circuit height corresponds to a 20Q circuit with flat
180 # classical register.
181 top_out = wid.Output(
182 layout=wid.Layout(
183 width="100%",
184 height="auto",
185 max_height="1000px",
186 overflow="hidden scroll",
187 )
188 )
189
190 top = wid.Box(children=[top_out], layout=wid.Layout(width="100%", height="auto"))
191
192 return top
193
194
195 def circuit_library_widget(circuit: QuantumCircuit) -> None:
196 """Create a circuit library widget.
197
198 Args:
199 circuit: Input quantum circuit.
200 """
201 qasm_wid = qasm_widget(circuit)
202 sep_length = str(min(20 * qasm_wid._code_length, 495))
203
204 # The separator widget
205 sep = wid.HTML(
206 "<div style='border-left: 3px solid #212121;" "height: {}px;'></div>".format(sep_length),
207 layout=wid.Layout(height="auto", max_height="495px", margin="40px 0px 0px 20px"),
208 )
209 bottom = wid.HBox(
210 children=[properties_widget(circuit), sep, qasm_widget(circuit)],
211 layout=wid.Layout(max_height="550px", height="auto"),
212 )
213
214 top = circuit_diagram_widget()
215
216 with top.children[0]:
217 display(circuit.draw(output="mpl"))
218
219 display(wid.VBox(children=[top, bottom], layout=wid.Layout(width="100%", height="auto")))
220
[end of qiskit/tools/jupyter/library.py]
[start of qiskit/tools/jupyter/monospace.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2020.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12 # pylint: disable=unused-argument
13
14 """A Jupyter magic to choose a real monospaced fonts, if available."""
15
16 from IPython.display import HTML, display
17 from IPython.core.magic import line_magic, Magics, magics_class
18
19
20 @magics_class
21 class MonospacedOutput(Magics):
22 """A class for setting "Courier New" for output code."""
23
24 @line_magic
25 def monospaced_output(self, line="", cell=None):
26 """A Jupyter magic function to set "Courier New" for output code."""
27 html = """<style type='text/css'>
28 code, kbd, pre, samp {font-family: Courier New,monospace;line-height: 1.1;}</style>"""
29 display(HTML(html))
30
[end of qiskit/tools/jupyter/monospace.py]
[start of tools/update_fake_backends.py]
1 #!/usr/bin/env python3
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2020.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15
16 import argparse
17 from datetime import datetime
18 import json
19 import os
20
21 from qiskit import IBMQ
22 from qiskit.circuit.parameterexpression import ParameterExpression
23
24
25 class BackendEncoder(json.JSONEncoder):
26 """A json encoder for qobj"""
27
28 def default(self, o):
29 # Convert numpy arrays:
30 if hasattr(o, "tolist"):
31 return o.tolist()
32 # Use Qobj complex json format:
33 if isinstance(o, complex):
34 return [o.real, o.imag]
35 if isinstance(o, ParameterExpression):
36 return float(o)
37 if isinstance(o, datetime):
38 return o.isoformat()
39 return json.JSONEncoder.default(self, o)
40
41
42 DEFAULT_DIR = os.path.join(
43 os.path.dirname(os.path.dirname(os.path.abspath(__file__))),
44 "qiskit",
45 "test",
46 "mock",
47 "backends",
48 )
49
50
51 def main():
52 parser = argparse.ArgumentParser(description="Generate fake backend snapshots")
53 parser.add_argument("--dir", "-d", type=str, default=DEFAULT_DIR)
54 parser.add_argument("backends", type=str, nargs="*")
55 parser.add_argument("--project", type=str, default=None)
56 parser.add_argument("--hub", type=str, default=None)
57 parser.add_argument("--group", type=str, default=None)
58 args = parser.parse_args()
59 provider = IBMQ.load_account()
60 if args.hub or args.group or args.project:
61 provider = IBMQ.get_provider(hub=args.hub, group=args.group, project=args.project)
62 ibmq_backends = provider.backends()
63 for backend in ibmq_backends:
64 raw_name = backend.name()
65 if "sim" in raw_name:
66 continue
67 if raw_name == "ibmqx2":
68 name = "yorktown"
69 else:
70 name = raw_name.split("_")[1]
71 if name == "16":
72 name = "melbourne"
73 if not args.backends or (name in args.backends or raw_name in args.backends):
74 if not os.path.isdir(os.path.join(args.dir, name)):
75 print("Skipping, fake backend for %s does not exist yet" % name)
76 continue
77 config = backend.configuration()
78 props = backend.properties()
79 defs = backend.defaults()
80 if config:
81 config_path = os.path.join(args.dir, name, "conf_%s.json" % name)
82 config_dict = config.to_dict()
83
84 with open(config_path, "w") as fd:
85 fd.write(json.dumps(config_dict, cls=BackendEncoder))
86 if props:
87 props_path = os.path.join(args.dir, name, "props_%s.json" % name)
88 with open(props_path, "w") as fd:
89 fd.write(json.dumps(props.to_dict(), cls=BackendEncoder))
90 if defs:
91 defs_path = os.path.join(args.dir, name, "defs_%s.json" % name)
92 with open(defs_path, "w") as fd:
93 fd.write(json.dumps(defs.to_dict(), cls=BackendEncoder))
94
95
96 if __name__ == main():
97 main()
98
[end of tools/update_fake_backends.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: Qiskit/qiskit
base_commit: 247f44ef87b08302514e512e4ed36601e95f33cd
problem_statement:
Pylint not applied to tools directory
Pylint is not applied to tools directory. Some of the tools are non-compliant with project style.
created_at: 2021-06-20T21:56:08Z
patch:
<patch>
diff --git a/tools/report_ci_failure.py b/tools/report_ci_failure.py
--- a/tools/report_ci_failure.py
+++ b/tools/report_ci_failure.py
@@ -47,7 +47,7 @@ def report(self, branch, commit, infourl=None, job_name=None):
job_name (str): name of the failed ci job.
"""
if branch != "main" and not self.stable_branch_regex.search(branch):
- return None
+ return
key_label = self._key_label(branch, job_name)
issue_number = self._get_report_issue_number(key_label)
if issue_number:
diff --git a/tools/update_fake_backends.py b/tools/update_fake_backends.py
--- a/tools/update_fake_backends.py
+++ b/tools/update_fake_backends.py
@@ -12,6 +12,7 @@
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
+"""Utility script to update fake backends"""
import argparse
from datetime import datetime
@@ -48,7 +49,7 @@ def default(self, o):
)
-def main():
+def _main():
parser = argparse.ArgumentParser(description="Generate fake backend snapshots")
parser.add_argument("--dir", "-d", type=str, default=DEFAULT_DIR)
parser.add_argument("backends", type=str, nargs="*")
@@ -93,5 +94,5 @@ def main():
fd.write(json.dumps(defs.to_dict(), cls=BackendEncoder))
-if __name__ == main():
- main()
+if __name__ == "__main__":
+ _main()
</patch>
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: pypa__pip-9993
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip 21.1 fails with ResolutionTooDeep while 21.0.1 exits with clear error
### Description
pip 21.0.1 has no error and highlights the dependency error right away while pip 2.1 runs for minutes then throws a _ResolutionTooDeep_ exception.
### Expected behavior
pip version 21.0.1 produces the expected output which includes this diagnostic:
```
The conflict is caused by:
The user requested hyperlink==19.0.0
autobahn 20.12.3 depends on hyperlink>=20.0.1
```
### pip version
21.1
### Python version
3.6.13
### OS
Ubuntu 16.04.7 LTS
### How to Reproduce
1. Create a python3.6 virtualenv
2. activate
3. Ensure pip v21.1 is installed in the virtualenv
4. run `pip install -r r.txt` where r.txt has this content:
```
attrs==19.3.0
autobahn==20.6.2
hyperlink==19.0.0
cffi==1.14.0
cryptography>=3.2
idna==2.10
pycparser==2.20
txaio==20.4.1
```
5. Replace `autobahn==20.6.2` with `autobahn==20.12.3` in r.txt
6. run `pip install -r r.txt`
### Output
```sh-session
Lots of spew, then:
Requirement already satisfied: txaio==20.4.1 in ./venv/lib/python3.6/site-packages (from -rr test.txt (line 8)) (20.4.1)
INFO: pip is looking at multiple versions of attrs to determine which version is compatible with other requirements. This could take a while.
Then pip seems to hang. If you wait long enough (about 5 minutes?), it prints:
ERROR: Exception:
Traceback (most recent call last):
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 180, in _main
status = self.run(options, args)
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 204, in wrapper
return func(self, options, args)
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 319, in run
reqs, check_supported_wheels=not options.target_dir
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 128, in resolve
requirements, max_rounds=try_to_avoid_resolution_too_deep
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 473, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 384, in resolve
raise ResolutionTooDeep(max_rounds)
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 2000000
```
### Code of Conduct
- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).
</issue>
<code>
[start of README.rst]
1 pip - The Python Package Installer
2 ==================================
3
4 .. image:: https://img.shields.io/pypi/v/pip.svg
5 :target: https://pypi.org/project/pip/
6
7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest
8 :target: https://pip.pypa.io/en/latest
9
10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes.
11
12 Please take a look at our documentation for how to install and use pip:
13
14 * `Installation`_
15 * `Usage`_
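
For instance, everyday usage is typically a single command (a sketch; ``SomePackage`` is a placeholder name)::

    python -m pip install SomePackage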
16
17 We release updates regularly, with a new version every 3 months. Find more details in our documentation:
18
19 * `Release notes`_
20 * `Release process`_
21
22 In pip 20.3, we've `made a big improvement to the heart of pip`_; `learn more`_. We want your input, so `sign up for our user experience research studies`_ to help us do it right.
23
24 **Note**: pip 21.0, in January 2021, removed Python 2 support, per pip's `Python 2 support policy`_. Please migrate to Python 3.
25
26 If you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms:
27
28 * `Issue tracking`_
29 * `Discourse channel`_
30 * `User IRC`_
31
32 If you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms:
33
34 * `GitHub page`_
35 * `Development documentation`_
36 * `Development mailing list`_
37 * `Development IRC`_
38
39 Code of Conduct
40 ---------------
41
42 Everyone interacting in the pip project's codebases, issue trackers, chat
43 rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_.
44
45 .. _package installer: https://packaging.python.org/guides/tool-recommendations/
46 .. _Python Package Index: https://pypi.org
47 .. _Installation: https://pip.pypa.io/en/stable/installing.html
48 .. _Usage: https://pip.pypa.io/en/stable/
49 .. _Release notes: https://pip.pypa.io/en/stable/news.html
50 .. _Release process: https://pip.pypa.io/en/latest/development/release-process/
51 .. _GitHub page: https://github.com/pypa/pip
52 .. _Development documentation: https://pip.pypa.io/en/latest/development
53 .. _made a big improvement to the heart of pip: https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html
54 .. _learn more: https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020
55 .. _sign up for our user experience research studies: https://pyfound.blogspot.com/2020/03/new-pip-resolver-to-roll-out-this-year.html
56 .. _Python 2 support policy: https://pip.pypa.io/en/latest/development/release-process/#python-2-support
57 .. _Issue tracking: https://github.com/pypa/pip/issues
58 .. _Discourse channel: https://discuss.python.org/c/packaging
59 .. _Development mailing list: https://mail.python.org/mailman3/lists/distutils-sig.python.org/
60 .. _User IRC: https://webchat.freenode.net/?channels=%23pypa
61 .. _Development IRC: https://webchat.freenode.net/?channels=%23pypa-dev
62 .. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md
63
[end of README.rst]
[start of src/pip/_internal/commands/install.py]
1 import errno
2 import logging
3 import operator
4 import os
5 import shutil
6 import site
7 from optparse import SUPPRESS_HELP, Values
8 from typing import Iterable, List, Optional
9
10 from pip._vendor.packaging.utils import canonicalize_name
11
12 from pip._internal.cache import WheelCache
13 from pip._internal.cli import cmdoptions
14 from pip._internal.cli.cmdoptions import make_target_python
15 from pip._internal.cli.req_command import (
16 RequirementCommand,
17 warn_if_run_as_root,
18 with_cleanup,
19 )
20 from pip._internal.cli.status_codes import ERROR, SUCCESS
21 from pip._internal.exceptions import CommandError, InstallationError
22 from pip._internal.locations import get_scheme
23 from pip._internal.metadata import get_environment
24 from pip._internal.models.format_control import FormatControl
25 from pip._internal.operations.check import ConflictDetails, check_install_conflicts
26 from pip._internal.req import install_given_reqs
27 from pip._internal.req.req_install import InstallRequirement
28 from pip._internal.req.req_tracker import get_requirement_tracker
29 from pip._internal.utils.distutils_args import parse_distutils_args
30 from pip._internal.utils.filesystem import test_writable_dir
31 from pip._internal.utils.misc import (
32 ensure_dir,
33 get_pip_version,
34 protect_pip_from_modification_on_windows,
35 write_output,
36 )
37 from pip._internal.utils.temp_dir import TempDirectory
38 from pip._internal.utils.virtualenv import (
39 running_under_virtualenv,
40 virtualenv_no_global,
41 )
42 from pip._internal.wheel_builder import (
43 BinaryAllowedPredicate,
44 build,
45 should_build_for_install_command,
46 )
47
48 logger = logging.getLogger(__name__)
49
50
51 def get_check_binary_allowed(format_control):
52 # type: (FormatControl) -> BinaryAllowedPredicate
53 def check_binary_allowed(req):
54 # type: (InstallRequirement) -> bool
55 canonical_name = canonicalize_name(req.name or "")
56 allowed_formats = format_control.get_allowed_formats(canonical_name)
57 return "binary" in allowed_formats
58
59 return check_binary_allowed
60
61
62 class InstallCommand(RequirementCommand):
63 """
64 Install packages from:
65
66 - PyPI (and other indexes) using requirement specifiers.
67 - VCS project urls.
68 - Local project directories.
69 - Local or remote source archives.
70
71 pip also supports installing from "requirements files", which provide
72 an easy way to specify a whole environment to be installed.
73 """
74
75 usage = """
76 %prog [options] <requirement specifier> [package-index-options] ...
77 %prog [options] -r <requirements file> [package-index-options] ...
78 %prog [options] [-e] <vcs project url> ...
79 %prog [options] [-e] <local project path> ...
80 %prog [options] <archive url/path> ..."""
81
82 def add_options(self):
83 # type: () -> None
84 self.cmd_opts.add_option(cmdoptions.requirements())
85 self.cmd_opts.add_option(cmdoptions.constraints())
86 self.cmd_opts.add_option(cmdoptions.no_deps())
87 self.cmd_opts.add_option(cmdoptions.pre())
88
89 self.cmd_opts.add_option(cmdoptions.editable())
90 self.cmd_opts.add_option(
91 '-t', '--target',
92 dest='target_dir',
93 metavar='dir',
94 default=None,
95 help='Install packages into <dir>. '
96 'By default this will not replace existing files/folders in '
97 '<dir>. Use --upgrade to replace existing packages in <dir> '
98 'with new versions.'
99 )
100 cmdoptions.add_target_python_options(self.cmd_opts)
101
102 self.cmd_opts.add_option(
103 '--user',
104 dest='use_user_site',
105 action='store_true',
106 help="Install to the Python user install directory for your "
107 "platform. Typically ~/.local/, or %APPDATA%\\Python on "
108 "Windows. (See the Python documentation for site.USER_BASE "
109 "for full details.)")
110 self.cmd_opts.add_option(
111 '--no-user',
112 dest='use_user_site',
113 action='store_false',
114 help=SUPPRESS_HELP)
115 self.cmd_opts.add_option(
116 '--root',
117 dest='root_path',
118 metavar='dir',
119 default=None,
120 help="Install everything relative to this alternate root "
121 "directory.")
122 self.cmd_opts.add_option(
123 '--prefix',
124 dest='prefix_path',
125 metavar='dir',
126 default=None,
127 help="Installation prefix where lib, bin and other top-level "
128 "folders are placed")
129
130 self.cmd_opts.add_option(cmdoptions.build_dir())
131
132 self.cmd_opts.add_option(cmdoptions.src())
133
134 self.cmd_opts.add_option(
135 '-U', '--upgrade',
136 dest='upgrade',
137 action='store_true',
138 help='Upgrade all specified packages to the newest available '
139 'version. The handling of dependencies depends on the '
140 'upgrade-strategy used.'
141 )
142
143 self.cmd_opts.add_option(
144 '--upgrade-strategy',
145 dest='upgrade_strategy',
146 default='only-if-needed',
147 choices=['only-if-needed', 'eager'],
148 help='Determines how dependency upgrading should be handled '
149 '[default: %default]. '
150 '"eager" - dependencies are upgraded regardless of '
151 'whether the currently installed version satisfies the '
152 'requirements of the upgraded package(s). '
153 '"only-if-needed" - are upgraded only when they do not '
154 'satisfy the requirements of the upgraded package(s).'
155 )
156
157 self.cmd_opts.add_option(
158 '--force-reinstall',
159 dest='force_reinstall',
160 action='store_true',
161 help='Reinstall all packages even if they are already '
162 'up-to-date.')
163
164 self.cmd_opts.add_option(
165 '-I', '--ignore-installed',
166 dest='ignore_installed',
167 action='store_true',
168 help='Ignore the installed packages, overwriting them. '
169 'This can break your system if the existing package '
170 'is of a different version or was installed '
171 'with a different package manager!'
172 )
173
174 self.cmd_opts.add_option(cmdoptions.ignore_requires_python())
175 self.cmd_opts.add_option(cmdoptions.no_build_isolation())
176 self.cmd_opts.add_option(cmdoptions.use_pep517())
177 self.cmd_opts.add_option(cmdoptions.no_use_pep517())
178
179 self.cmd_opts.add_option(cmdoptions.install_options())
180 self.cmd_opts.add_option(cmdoptions.global_options())
181
182 self.cmd_opts.add_option(
183 "--compile",
184 action="store_true",
185 dest="compile",
186 default=True,
187 help="Compile Python source files to bytecode",
188 )
189
190 self.cmd_opts.add_option(
191 "--no-compile",
192 action="store_false",
193 dest="compile",
194 help="Do not compile Python source files to bytecode",
195 )
196
197 self.cmd_opts.add_option(
198 "--no-warn-script-location",
199 action="store_false",
200 dest="warn_script_location",
201 default=True,
202 help="Do not warn when installing scripts outside PATH",
203 )
204 self.cmd_opts.add_option(
205 "--no-warn-conflicts",
206 action="store_false",
207 dest="warn_about_conflicts",
208 default=True,
209 help="Do not warn about broken dependencies",
210 )
211
212 self.cmd_opts.add_option(cmdoptions.no_binary())
213 self.cmd_opts.add_option(cmdoptions.only_binary())
214 self.cmd_opts.add_option(cmdoptions.prefer_binary())
215 self.cmd_opts.add_option(cmdoptions.require_hashes())
216 self.cmd_opts.add_option(cmdoptions.progress_bar())
217
218 index_opts = cmdoptions.make_option_group(
219 cmdoptions.index_group,
220 self.parser,
221 )
222
223 self.parser.insert_option_group(0, index_opts)
224 self.parser.insert_option_group(0, self.cmd_opts)
225
226 @with_cleanup
227 def run(self, options, args):
228 # type: (Values, List[str]) -> int
229 if options.use_user_site and options.target_dir is not None:
230 raise CommandError("Can not combine '--user' and '--target'")
231
232 cmdoptions.check_install_build_global(options)
233 upgrade_strategy = "to-satisfy-only"
234 if options.upgrade:
235 upgrade_strategy = options.upgrade_strategy
236
237 cmdoptions.check_dist_restriction(options, check_target=True)
238
239 install_options = options.install_options or []
240
241 logger.debug("Using %s", get_pip_version())
242 options.use_user_site = decide_user_install(
243 options.use_user_site,
244 prefix_path=options.prefix_path,
245 target_dir=options.target_dir,
246 root_path=options.root_path,
247 isolated_mode=options.isolated_mode,
248 )
249
250 target_temp_dir = None # type: Optional[TempDirectory]
251 target_temp_dir_path = None # type: Optional[str]
252 if options.target_dir:
253 options.ignore_installed = True
254 options.target_dir = os.path.abspath(options.target_dir)
255 if (os.path.exists(options.target_dir) and not
256 os.path.isdir(options.target_dir)):
257 raise CommandError(
258 "Target path exists but is not a directory, will not "
259 "continue."
260 )
261
262 # Create a target directory for using with the target option
263 target_temp_dir = TempDirectory(kind="target")
264 target_temp_dir_path = target_temp_dir.path
265 self.enter_context(target_temp_dir)
266
267 global_options = options.global_options or []
268
269 session = self.get_default_session(options)
270
271 target_python = make_target_python(options)
272 finder = self._build_package_finder(
273 options=options,
274 session=session,
275 target_python=target_python,
276 ignore_requires_python=options.ignore_requires_python,
277 )
278 wheel_cache = WheelCache(options.cache_dir, options.format_control)
279
280 req_tracker = self.enter_context(get_requirement_tracker())
281
282 directory = TempDirectory(
283 delete=not options.no_clean,
284 kind="install",
285 globally_managed=True,
286 )
287
288 try:
289 reqs = self.get_requirements(args, options, finder, session)
290
291 reject_location_related_install_options(
292 reqs, options.install_options
293 )
294
295 preparer = self.make_requirement_preparer(
296 temp_build_dir=directory,
297 options=options,
298 req_tracker=req_tracker,
299 session=session,
300 finder=finder,
301 use_user_site=options.use_user_site,
302 )
303 resolver = self.make_resolver(
304 preparer=preparer,
305 finder=finder,
306 options=options,
307 wheel_cache=wheel_cache,
308 use_user_site=options.use_user_site,
309 ignore_installed=options.ignore_installed,
310 ignore_requires_python=options.ignore_requires_python,
311 force_reinstall=options.force_reinstall,
312 upgrade_strategy=upgrade_strategy,
313 use_pep517=options.use_pep517,
314 )
315
316 self.trace_basic_info(finder)
317
318 requirement_set = resolver.resolve(
319 reqs, check_supported_wheels=not options.target_dir
320 )
321
322 try:
323 pip_req = requirement_set.get_requirement("pip")
324 except KeyError:
325 modifying_pip = False
326 else:
327 # If we're not replacing an already installed pip,
328 # we're not modifying it.
329 modifying_pip = pip_req.satisfied_by is None
330 protect_pip_from_modification_on_windows(
331 modifying_pip=modifying_pip
332 )
333
334 check_binary_allowed = get_check_binary_allowed(
335 finder.format_control
336 )
337
338 reqs_to_build = [
339 r for r in requirement_set.requirements.values()
340 if should_build_for_install_command(
341 r, check_binary_allowed
342 )
343 ]
344
345 _, build_failures = build(
346 reqs_to_build,
347 wheel_cache=wheel_cache,
348 verify=True,
349 build_options=[],
350 global_options=[],
351 )
352
353 # If we're using PEP 517, we cannot do a direct install
354 # so we fail here.
355 pep517_build_failure_names = [
356 r.name # type: ignore
357 for r in build_failures if r.use_pep517
358 ] # type: List[str]
359 if pep517_build_failure_names:
360 raise InstallationError(
361 "Could not build wheels for {} which use"
362 " PEP 517 and cannot be installed directly".format(
363 ", ".join(pep517_build_failure_names)
364 )
365 )
366
367 # For now, we just warn about failures building legacy
368 # requirements, as we'll fall through to a direct
369 # install for those.
370 for r in build_failures:
371 if not r.use_pep517:
372 r.legacy_install_reason = 8368
373
374 to_install = resolver.get_installation_order(
375 requirement_set
376 )
377
378 # Check for conflicts in the package set we're installing.
379 conflicts = None # type: Optional[ConflictDetails]
380 should_warn_about_conflicts = (
381 not options.ignore_dependencies and
382 options.warn_about_conflicts
383 )
384 if should_warn_about_conflicts:
385 conflicts = self._determine_conflicts(to_install)
386
387 # Don't warn about script install locations if
388 # --target has been specified
389 warn_script_location = options.warn_script_location
390 if options.target_dir:
391 warn_script_location = False
392
393 installed = install_given_reqs(
394 to_install,
395 install_options,
396 global_options,
397 root=options.root_path,
398 home=target_temp_dir_path,
399 prefix=options.prefix_path,
400 warn_script_location=warn_script_location,
401 use_user_site=options.use_user_site,
402 pycompile=options.compile,
403 )
404
405 lib_locations = get_lib_location_guesses(
406 user=options.use_user_site,
407 home=target_temp_dir_path,
408 root=options.root_path,
409 prefix=options.prefix_path,
410 isolated=options.isolated_mode,
411 )
412 env = get_environment(lib_locations)
413
414 installed.sort(key=operator.attrgetter('name'))
415 items = []
416 for result in installed:
417 item = result.name
418 try:
419 installed_dist = env.get_distribution(item)
420 if installed_dist is not None:
421 item = f"{item}-{installed_dist.version}"
422 except Exception:
423 pass
424 items.append(item)
425
426 if conflicts is not None:
427 self._warn_about_conflicts(
428 conflicts,
429 resolver_variant=self.determine_resolver_variant(options),
430 )
431
432 installed_desc = ' '.join(items)
433 if installed_desc:
434 write_output(
435 'Successfully installed %s', installed_desc,
436 )
437 except OSError as error:
438 show_traceback = (self.verbosity >= 1)
439
440 message = create_os_error_message(
441 error, show_traceback, options.use_user_site,
442 )
443 logger.error(message, exc_info=show_traceback) # noqa
444
445 return ERROR
446
447 if options.target_dir:
448 assert target_temp_dir
449 self._handle_target_dir(
450 options.target_dir, target_temp_dir, options.upgrade
451 )
452
453 warn_if_run_as_root()
454 return SUCCESS
455
456 def _handle_target_dir(self, target_dir, target_temp_dir, upgrade):
457 # type: (str, TempDirectory, bool) -> None
458 ensure_dir(target_dir)
459
460 # Checking both purelib and platlib directories for installed
461 # packages to be moved to target directory
462 lib_dir_list = []
463
464 # Checking both purelib and platlib directories for installed
465 # packages to be moved to target directory
466 scheme = get_scheme('', home=target_temp_dir.path)
467 purelib_dir = scheme.purelib
468 platlib_dir = scheme.platlib
469 data_dir = scheme.data
470
471 if os.path.exists(purelib_dir):
472 lib_dir_list.append(purelib_dir)
473 if os.path.exists(platlib_dir) and platlib_dir != purelib_dir:
474 lib_dir_list.append(platlib_dir)
475 if os.path.exists(data_dir):
476 lib_dir_list.append(data_dir)
477
478 for lib_dir in lib_dir_list:
479 for item in os.listdir(lib_dir):
480 if lib_dir == data_dir:
481 ddir = os.path.join(data_dir, item)
482 if any(s.startswith(ddir) for s in lib_dir_list[:-1]):
483 continue
484 target_item_dir = os.path.join(target_dir, item)
485 if os.path.exists(target_item_dir):
486 if not upgrade:
487 logger.warning(
488 'Target directory %s already exists. Specify '
489 '--upgrade to force replacement.',
490 target_item_dir
491 )
492 continue
493 if os.path.islink(target_item_dir):
494 logger.warning(
495 'Target directory %s already exists and is '
496 'a link. pip will not automatically replace '
497 'links, please remove if replacement is '
498 'desired.',
499 target_item_dir
500 )
501 continue
502 if os.path.isdir(target_item_dir):
503 shutil.rmtree(target_item_dir)
504 else:
505 os.remove(target_item_dir)
506
507 shutil.move(
508 os.path.join(lib_dir, item),
509 target_item_dir
510 )
511
512 def _determine_conflicts(self, to_install):
513 # type: (List[InstallRequirement]) -> Optional[ConflictDetails]
514 try:
515 return check_install_conflicts(to_install)
516 except Exception:
517 logger.exception(
518 "Error while checking for conflicts. Please file an issue on "
519 "pip's issue tracker: https://github.com/pypa/pip/issues/new"
520 )
521 return None
522
523 def _warn_about_conflicts(self, conflict_details, resolver_variant):
524 # type: (ConflictDetails, str) -> None
525 package_set, (missing, conflicting) = conflict_details
526 if not missing and not conflicting:
527 return
528
529 parts = [] # type: List[str]
530 if resolver_variant == "legacy":
531 parts.append(
532 "pip's legacy dependency resolver does not consider dependency "
533 "conflicts when selecting packages. This behaviour is the "
534 "source of the following dependency conflicts."
535 )
536 else:
537 assert resolver_variant == "2020-resolver"
538 parts.append(
539 "pip's dependency resolver does not currently take into account "
540 "all the packages that are installed. This behaviour is the "
541 "source of the following dependency conflicts."
542 )
543
544 # NOTE: There is some duplication here, with commands/check.py
545 for project_name in missing:
546 version = package_set[project_name][0]
547 for dependency in missing[project_name]:
548 message = (
549 "{name} {version} requires {requirement}, "
550 "which is not installed."
551 ).format(
552 name=project_name,
553 version=version,
554 requirement=dependency[1],
555 )
556 parts.append(message)
557
558 for project_name in conflicting:
559 version = package_set[project_name][0]
560 for dep_name, dep_version, req in conflicting[project_name]:
561 message = (
562 "{name} {version} requires {requirement}, but {you} have "
563 "{dep_name} {dep_version} which is incompatible."
564 ).format(
565 name=project_name,
566 version=version,
567 requirement=req,
568 dep_name=dep_name,
569 dep_version=dep_version,
570 you=("you" if resolver_variant == "2020-resolver" else "you'll")
571 )
572 parts.append(message)
573
574 logger.critical("\n".join(parts))
575
576
577 def get_lib_location_guesses(
578 user=False, # type: bool
579 home=None, # type: Optional[str]
580 root=None, # type: Optional[str]
581 isolated=False, # type: bool
582 prefix=None # type: Optional[str]
583 ):
584 # type:(...) -> List[str]
585 scheme = get_scheme(
586 '',
587 user=user,
588 home=home,
589 root=root,
590 isolated=isolated,
591 prefix=prefix,
592 )
593 return [scheme.purelib, scheme.platlib]
594
595
596 def site_packages_writable(root, isolated):
597 # type: (Optional[str], bool) -> bool
598 return all(
599 test_writable_dir(d) for d in set(
600 get_lib_location_guesses(root=root, isolated=isolated))
601 )
602
603
604 def decide_user_install(
605 use_user_site, # type: Optional[bool]
606 prefix_path=None, # type: Optional[str]
607 target_dir=None, # type: Optional[str]
608 root_path=None, # type: Optional[str]
609 isolated_mode=False, # type: bool
610 ):
611 # type: (...) -> bool
612 """Determine whether to do a user install based on the input options.
613
614 If use_user_site is False, no additional checks are done.
615 If use_user_site is True, it is checked for compatibility with other
616 options.
617 If use_user_site is None, the default behaviour depends on the environment,
618 which is provided by the other arguments.
619 """
620 # In some cases (config from tox), use_user_site can be set to an integer
621 # rather than a bool, which 'use_user_site is False' wouldn't catch.
622 if (use_user_site is not None) and (not use_user_site):
623 logger.debug("Non-user install by explicit request")
624 return False
625
626 if use_user_site:
627 if prefix_path:
628 raise CommandError(
629 "Can not combine '--user' and '--prefix' as they imply "
630 "different installation locations"
631 )
632 if virtualenv_no_global():
633 raise InstallationError(
634 "Can not perform a '--user' install. User site-packages "
635 "are not visible in this virtualenv."
636 )
637 logger.debug("User install by explicit request")
638 return True
639
640 # If we are here, user installs have not been explicitly requested/avoided
641 assert use_user_site is None
642
643 # user install incompatible with --prefix/--target
644 if prefix_path or target_dir:
645 logger.debug("Non-user install due to --prefix or --target option")
646 return False
647
648 # If user installs are not enabled, choose a non-user install
649 if not site.ENABLE_USER_SITE:
650 logger.debug("Non-user install because user site-packages disabled")
651 return False
652
653 # If we have permission for a non-user install, do that,
654 # otherwise do a user install.
655 if site_packages_writable(root=root_path, isolated=isolated_mode):
656 logger.debug("Non-user install because site-packages writeable")
657 return False
658
659 logger.info("Defaulting to user installation because normal site-packages "
660 "is not writeable")
661 return True
662
663
664 def reject_location_related_install_options(requirements, options):
665 # type: (List[InstallRequirement], Optional[List[str]]) -> None
666 """If any location-changing --install-option arguments were passed for
667 requirements or on the command-line, then show a deprecation warning.
668 """
669 def format_options(option_names):
670 # type: (Iterable[str]) -> List[str]
671 return ["--{}".format(name.replace("_", "-")) for name in option_names]
672
673 offenders = []
674
675 for requirement in requirements:
676 install_options = requirement.install_options
677 location_options = parse_distutils_args(install_options)
678 if location_options:
679 offenders.append(
680 "{!r} from {}".format(
681 format_options(location_options.keys()), requirement
682 )
683 )
684
685 if options:
686 location_options = parse_distutils_args(options)
687 if location_options:
688 offenders.append(
689 "{!r} from command line".format(
690 format_options(location_options.keys())
691 )
692 )
693
694 if not offenders:
695 return
696
697 raise CommandError(
698 "Location-changing options found in --install-option: {}."
699 " This is unsupported, use pip-level options like --user,"
700 " --prefix, --root, and --target instead.".format(
701 "; ".join(offenders)
702 )
703 )
704
705
706 def create_os_error_message(error, show_traceback, using_user_site):
707 # type: (OSError, bool, bool) -> str
708 """Format an error message for an OSError
709
710 It may occur anytime during the execution of the install command.
711 """
712 parts = []
713
714 # Mention the error if we are not going to show a traceback
715 parts.append("Could not install packages due to an OSError")
716 if not show_traceback:
717 parts.append(": ")
718 parts.append(str(error))
719 else:
720 parts.append(".")
721
722 # Spilt the error indication from a helper message (if any)
723 parts[-1] += "\n"
724
725 # Suggest useful actions to the user:
726 # (1) using user site-packages or (2) verifying the permissions
727 if error.errno == errno.EACCES:
728 user_option_part = "Consider using the `--user` option"
729 permissions_part = "Check the permissions"
730
731 if not running_under_virtualenv() and not using_user_site:
732 parts.extend([
733 user_option_part, " or ",
734 permissions_part.lower(),
735 ])
736 else:
737 parts.append(permissions_part)
738 parts.append(".\n")
739
740 return "".join(parts).strip() + "\n"
741
[end of src/pip/_internal/commands/install.py]
[start of src/pip/_internal/exceptions.py]
1 """Exceptions used throughout package"""
2
3 import configparser
4 from itertools import chain, groupby, repeat
5 from typing import TYPE_CHECKING, Dict, List, Optional
6
7 from pip._vendor.pkg_resources import Distribution
8 from pip._vendor.requests.models import Request, Response
9
10 if TYPE_CHECKING:
11 from hashlib import _Hash
12
13 from pip._internal.req.req_install import InstallRequirement
14
15
16 class PipError(Exception):
17 """Base pip exception"""
18
19
20 class ConfigurationError(PipError):
21 """General exception in configuration"""
22
23
24 class InstallationError(PipError):
25 """General exception during installation"""
26
27
28 class UninstallationError(PipError):
29 """General exception during uninstallation"""
30
31
32 class NoneMetadataError(PipError):
33 """
34 Raised when accessing "METADATA" or "PKG-INFO" metadata for a
35 pip._vendor.pkg_resources.Distribution object and
36 `dist.has_metadata('METADATA')` returns True but
37 `dist.get_metadata('METADATA')` returns None (and similarly for
38 "PKG-INFO").
39 """
40
41 def __init__(self, dist, metadata_name):
42 # type: (Distribution, str) -> None
43 """
44 :param dist: A Distribution object.
45 :param metadata_name: The name of the metadata being accessed
46 (can be "METADATA" or "PKG-INFO").
47 """
48 self.dist = dist
49 self.metadata_name = metadata_name
50
51 def __str__(self):
52 # type: () -> str
53 # Use `dist` in the error message because its stringification
54 # includes more information, like the version and location.
55 return (
56 'None {} metadata found for distribution: {}'.format(
57 self.metadata_name, self.dist,
58 )
59 )
60
61
62 class UserInstallationInvalid(InstallationError):
63 """A --user install is requested on an environment without user site."""
64
65 def __str__(self):
66 # type: () -> str
67 return "User base directory is not specified"
68
69
70 class InvalidSchemeCombination(InstallationError):
71 def __str__(self):
72 # type: () -> str
73 before = ", ".join(str(a) for a in self.args[:-1])
74 return f"Cannot set {before} and {self.args[-1]} together"
75
76
77 class DistributionNotFound(InstallationError):
78 """Raised when a distribution cannot be found to satisfy a requirement"""
79
80
81 class RequirementsFileParseError(InstallationError):
82 """Raised when a general error occurs parsing a requirements file line."""
83
84
85 class BestVersionAlreadyInstalled(PipError):
86 """Raised when the most up-to-date version of a package is already
87 installed."""
88
89
90 class BadCommand(PipError):
91 """Raised when virtualenv or a command is not found"""
92
93
94 class CommandError(PipError):
95 """Raised when there is an error in command-line arguments"""
96
97
98 class PreviousBuildDirError(PipError):
99 """Raised when there's a previous conflicting build directory"""
100
101
102 class NetworkConnectionError(PipError):
103 """HTTP connection error"""
104
105 def __init__(self, error_msg, response=None, request=None):
106 # type: (str, Response, Request) -> None
107 """
108 Initialize NetworkConnectionError with `request` and `response`
109 objects.
110 """
111 self.response = response
112 self.request = request
113 self.error_msg = error_msg
114 if (self.response is not None and not self.request and
115 hasattr(response, 'request')):
116 self.request = self.response.request
117 super().__init__(error_msg, response, request)
118
119 def __str__(self):
120 # type: () -> str
121 return str(self.error_msg)
122
123
124 class InvalidWheelFilename(InstallationError):
125 """Invalid wheel filename."""
126
127
128 class UnsupportedWheel(InstallationError):
129 """Unsupported wheel."""
130
131
132 class MetadataInconsistent(InstallationError):
133 """Built metadata contains inconsistent information.
134
135 This is raised when the metadata contains values (e.g. name and version)
136 that do not match the information previously obtained from sdist filename
137 or user-supplied ``#egg=`` value.
138 """
139 def __init__(self, ireq, field, f_val, m_val):
140 # type: (InstallRequirement, str, str, str) -> None
141 self.ireq = ireq
142 self.field = field
143 self.f_val = f_val
144 self.m_val = m_val
145
146 def __str__(self):
147 # type: () -> str
148 template = (
149 "Requested {} has inconsistent {}: "
150 "filename has {!r}, but metadata has {!r}"
151 )
152 return template.format(self.ireq, self.field, self.f_val, self.m_val)
153
154
155 class InstallationSubprocessError(InstallationError):
156 """A subprocess call failed during installation."""
157 def __init__(self, returncode, description):
158 # type: (int, str) -> None
159 self.returncode = returncode
160 self.description = description
161
162 def __str__(self):
163 # type: () -> str
164 return (
165 "Command errored out with exit status {}: {} "
166 "Check the logs for full command output."
167 ).format(self.returncode, self.description)
168
169
170 class HashErrors(InstallationError):
171 """Multiple HashError instances rolled into one for reporting"""
172
173 def __init__(self):
174 # type: () -> None
175 self.errors = [] # type: List[HashError]
176
177 def append(self, error):
178 # type: (HashError) -> None
179 self.errors.append(error)
180
181 def __str__(self):
182 # type: () -> str
183 lines = []
184 self.errors.sort(key=lambda e: e.order)
185 for cls, errors_of_cls in groupby(self.errors, lambda e: e.__class__):
186 lines.append(cls.head)
187 lines.extend(e.body() for e in errors_of_cls)
188 if lines:
189 return '\n'.join(lines)
190 return ''
191
192 def __nonzero__(self):
193 # type: () -> bool
194 return bool(self.errors)
195
196 def __bool__(self):
197 # type: () -> bool
198 return self.__nonzero__()
199
200
201 class HashError(InstallationError):
202 """
203 A failure to verify a package against known-good hashes
204
205 :cvar order: An int sorting hash exception classes by difficulty of
206 recovery (lower being harder), so the user doesn't bother fretting
207 about unpinned packages when he has deeper issues, like VCS
208 dependencies, to deal with. Also keeps error reports in a
209 deterministic order.
210 :cvar head: A section heading for display above potentially many
211 exceptions of this kind
212 :ivar req: The InstallRequirement that triggered this error. This is
213 pasted on after the exception is instantiated, because it's not
214 typically available earlier.
215
216 """
217 req = None # type: Optional[InstallRequirement]
218 head = ''
219 order = -1 # type: int
220
221 def body(self):
222 # type: () -> str
223 """Return a summary of me for display under the heading.
224
225 This default implementation simply prints a description of the
226 triggering requirement.
227
228 :param req: The InstallRequirement that provoked this error, with
229 its link already populated by the resolver's _populate_link().
230
231 """
232 return f' {self._requirement_name()}'
233
234 def __str__(self):
235 # type: () -> str
236 return f'{self.head}\n{self.body()}'
237
238 def _requirement_name(self):
239 # type: () -> str
240 """Return a description of the requirement that triggered me.
241
242 This default implementation returns long description of the req, with
243 line numbers
244
245 """
246 return str(self.req) if self.req else 'unknown package'
247
248
249 class VcsHashUnsupported(HashError):
250 """A hash was provided for a version-control-system-based requirement, but
251 we don't have a method for hashing those."""
252
253 order = 0
254 head = ("Can't verify hashes for these requirements because we don't "
255 "have a way to hash version control repositories:")
256
257
258 class DirectoryUrlHashUnsupported(HashError):
259 """A hash was provided for a version-control-system-based requirement, but
260 we don't have a method for hashing those."""
261
262 order = 1
263 head = ("Can't verify hashes for these file:// requirements because they "
264 "point to directories:")
265
266
267 class HashMissing(HashError):
268 """A hash was needed for a requirement but is absent."""
269
270 order = 2
271 head = ('Hashes are required in --require-hashes mode, but they are '
272 'missing from some requirements. Here is a list of those '
273 'requirements along with the hashes their downloaded archives '
274 'actually had. Add lines like these to your requirements files to '
275 'prevent tampering. (If you did not enable --require-hashes '
276 'manually, note that it turns on automatically when any package '
277 'has a hash.)')
278
279 def __init__(self, gotten_hash):
280 # type: (str) -> None
281 """
282 :param gotten_hash: The hash of the (possibly malicious) archive we
283 just downloaded
284 """
285 self.gotten_hash = gotten_hash
286
287 def body(self):
288 # type: () -> str
289 # Dodge circular import.
290 from pip._internal.utils.hashes import FAVORITE_HASH
291
292 package = None
293 if self.req:
294 # In the case of URL-based requirements, display the original URL
295 # seen in the requirements file rather than the package name,
296 # so the output can be directly copied into the requirements file.
297 package = (self.req.original_link if self.req.original_link
298 # In case someone feeds something downright stupid
299 # to InstallRequirement's constructor.
300 else getattr(self.req, 'req', None))
301 return ' {} --hash={}:{}'.format(package or 'unknown package',
302 FAVORITE_HASH,
303 self.gotten_hash)
304
305
306 class HashUnpinned(HashError):
307 """A requirement had a hash specified but was not pinned to a specific
308 version."""
309
310 order = 3
311 head = ('In --require-hashes mode, all requirements must have their '
312 'versions pinned with ==. These do not:')
313
314
315 class HashMismatch(HashError):
316 """
317 Distribution file hash values don't match.
318
319 :ivar package_name: The name of the package that triggered the hash
320 mismatch. Feel free to write to this after the exception is raise to
321 improve its error message.
322
323 """
324 order = 4
325 head = ('THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS '
326 'FILE. If you have updated the package versions, please update '
327 'the hashes. Otherwise, examine the package contents carefully; '
328 'someone may have tampered with them.')
329
330 def __init__(self, allowed, gots):
331 # type: (Dict[str, List[str]], Dict[str, _Hash]) -> None
332 """
333 :param allowed: A dict of algorithm names pointing to lists of allowed
334 hex digests
335 :param gots: A dict of algorithm names pointing to hashes we
336 actually got from the files under suspicion
337 """
338 self.allowed = allowed
339 self.gots = gots
340
341 def body(self):
342 # type: () -> str
343 return ' {}:\n{}'.format(self._requirement_name(),
344 self._hash_comparison())
345
346 def _hash_comparison(self):
347 # type: () -> str
348 """
349 Return a comparison of actual and expected hash values.
350
351 Example::
352
353 Expected sha256 abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde
354 or 123451234512345123451234512345123451234512345
355 Got bcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdef
356
357 """
358 def hash_then_or(hash_name):
359 # type: (str) -> chain[str]
360 # For now, all the decent hashes have 6-char names, so we can get
361 # away with hard-coding space literals.
362 return chain([hash_name], repeat(' or'))
363
364 lines = [] # type: List[str]
365 for hash_name, expecteds in self.allowed.items():
366 prefix = hash_then_or(hash_name)
367 lines.extend((' Expected {} {}'.format(next(prefix), e))
368 for e in expecteds)
369 lines.append(' Got {}\n'.format(
370 self.gots[hash_name].hexdigest()))
371 return '\n'.join(lines)
372
373
374 class UnsupportedPythonVersion(InstallationError):
375 """Unsupported python version according to Requires-Python package
376 metadata."""
377
378
379 class ConfigurationFileCouldNotBeLoaded(ConfigurationError):
380 """When there are errors while loading a configuration file
381 """
382
383 def __init__(self, reason="could not be loaded", fname=None, error=None):
384 # type: (str, Optional[str], Optional[configparser.Error]) -> None
385 super().__init__(error)
386 self.reason = reason
387 self.fname = fname
388 self.error = error
389
390 def __str__(self):
391 # type: () -> str
392 if self.fname is not None:
393 message_part = f" in {self.fname}."
394 else:
395 assert self.error is not None
396 message_part = f".\n{self.error}\n"
397 return f"Configuration file {self.reason}{message_part}"
398
[end of src/pip/_internal/exceptions.py]
[start of src/pip/_internal/utils/virtualenv.py]
1 import logging
2 import os
3 import re
4 import site
5 import sys
6 from typing import List, Optional
7
8 logger = logging.getLogger(__name__)
9 _INCLUDE_SYSTEM_SITE_PACKAGES_REGEX = re.compile(
10 r"include-system-site-packages\s*=\s*(?P<value>true|false)"
11 )
12
13
14 def _running_under_venv():
15 # type: () -> bool
16 """Checks if sys.base_prefix and sys.prefix match.
17
18 This handles PEP 405 compliant virtual environments.
19 """
20 return sys.prefix != getattr(sys, "base_prefix", sys.prefix)
21
22
23 def _running_under_regular_virtualenv():
24 # type: () -> bool
25 """Checks if sys.real_prefix is set.
26
27 This handles virtual environments created with pypa's virtualenv.
28 """
29 # pypa/virtualenv case
30 return hasattr(sys, "real_prefix")
31
32
33 def running_under_virtualenv():
34 # type: () -> bool
35 """Return True if we're running inside a virtualenv, False otherwise."""
36 return _running_under_venv() or _running_under_regular_virtualenv()
37
38
39 def _get_pyvenv_cfg_lines():
40 # type: () -> Optional[List[str]]
41 """Reads {sys.prefix}/pyvenv.cfg and returns its contents as list of lines
42
43 Returns None, if it could not read/access the file.
44 """
45 pyvenv_cfg_file = os.path.join(sys.prefix, "pyvenv.cfg")
46 try:
47 # Although PEP 405 does not specify, the built-in venv module always
48 # writes with UTF-8. (pypa/pip#8717)
49 with open(pyvenv_cfg_file, encoding="utf-8") as f:
50 return f.read().splitlines() # avoids trailing newlines
51 except OSError:
52 return None
53
54
55 def _no_global_under_venv():
56 # type: () -> bool
57 """Check `{sys.prefix}/pyvenv.cfg` for system site-packages inclusion
58
59 PEP 405 specifies that when system site-packages are not supposed to be
60 visible from a virtual environment, `pyvenv.cfg` must contain the following
61 line:
62
63 include-system-site-packages = false
64
65 Additionally, log a warning if accessing the file fails.
66 """
67 cfg_lines = _get_pyvenv_cfg_lines()
68 if cfg_lines is None:
69 # We're not in a "sane" venv, so assume there is no system
70 # site-packages access (since that's PEP 405's default state).
71 logger.warning(
72 "Could not access 'pyvenv.cfg' despite a virtual environment "
73 "being active. Assuming global site-packages is not accessible "
74 "in this environment."
75 )
76 return True
77
78 for line in cfg_lines:
79 match = _INCLUDE_SYSTEM_SITE_PACKAGES_REGEX.match(line)
80 if match is not None and match.group("value") == "false":
81 return True
82 return False
83
84
85 def _no_global_under_regular_virtualenv():
86 # type: () -> bool
87 """Check if "no-global-site-packages.txt" exists beside site.py
88
89 This mirrors logic in pypa/virtualenv for determining whether system
90 site-packages are visible in the virtual environment.
91 """
92 site_mod_dir = os.path.dirname(os.path.abspath(site.__file__))
93 no_global_site_packages_file = os.path.join(
94 site_mod_dir,
95 "no-global-site-packages.txt",
96 )
97 return os.path.exists(no_global_site_packages_file)
98
99
100 def virtualenv_no_global():
101 # type: () -> bool
102 """Returns a boolean, whether running in venv with no system site-packages."""
103 # PEP 405 compliance needs to be checked first since virtualenv >=20 would
104 # return True for both checks, but is only able to use the PEP 405 config.
105 if _running_under_venv():
106 return _no_global_under_venv()
107
108 if _running_under_regular_virtualenv():
109 return _no_global_under_regular_virtualenv()
110
111 return False
112
[end of src/pip/_internal/utils/virtualenv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pypa/pip
|
e6414d6db6db37951988f6f2b11ec530ed0b191d
|
pip 21.1 fails with ResolutionTooDeep while 21.0.1 exits with clear error
### Description
pip 21.0.1 raises no exception and highlights the dependency conflict right away, while pip 21.1 runs for minutes and then throws a _ResolutionTooDeep_ exception.
### Expected behavior
pip version 21.0.1 produces the expected output which includes this diagnostic:
```
The conflict is caused by:
The user requested hyperlink==19.0.0
autobahn 20.12.3 depends on hyperlink>=20.0.1
```
### pip version
21.1
### Python version
3.6.13
### OS
Ubuntu 16.04.7 LTS
### How to Reproduce
1. Create a python3.6 virtualenv
2. activate
3. Ensure pip v21.1 is installed in the virtualenv
4. run `pip install -r r.txt` where r.txt has this content:
```
attrs==19.3.0
autobahn==20.6.2
hyperlink==19.0.0
cffi==1.14.0
cryptography>=3.2
idna==2.10
pycparser==2.20
txaio==20.4.1
```
5. Replace `autobahn==20.6.2` with `autobahn==20.12.3` in r.txt
6. run `pip install -r r.txt` (a rough script for these steps is sketched below)
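A rough script version of the steps above, as a minimal sketch: it assumes a POSIX `bin/pip` layout, and the virtualenv directory and requirements filename are placeholders.
```python
import pathlib
import subprocess
import venv

# Requirements file from step 4 (first run) and its step-5 variant.
REQS_FIRST = """\
attrs==19.3.0
autobahn==20.6.2
hyperlink==19.0.0
cffi==1.14.0
cryptography>=3.2
idna==2.10
pycparser==2.20
txaio==20.4.1
"""
REQS_SECOND = REQS_FIRST.replace("autobahn==20.6.2", "autobahn==20.12.3")


def run_pip(pip, *args):
    return subprocess.run([str(pip), *args]).returncode


def reproduce(workdir="repro-venv"):
    # Steps 1-3: fresh virtualenv whose pip is pinned to 21.1.
    venv.EnvironmentBuilder(with_pip=True).create(workdir)
    pip = pathlib.Path(workdir) / "bin" / "pip"  # POSIX layout assumed
    subprocess.run([str(pip), "install", "pip==21.1"], check=True)
    req_file = pathlib.Path(workdir) / "r.txt"
    # Step 4: the first set installs cleanly.
    req_file.write_text(REQS_FIRST)
    run_pip(pip, "install", "-r", str(req_file))
    # Steps 5-6: bump autobahn and reinstall; pip 21.1 eventually raises
    # ResolutionTooDeep instead of reporting the hyperlink conflict.
    req_file.write_text(REQS_SECOND)
    return run_pip(pip, "install", "-r", str(req_file))


if __name__ == "__main__":
    raise SystemExit(reproduce())
```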
### Output
```sh-session
Lots of spew, then:
Requirement already satisfied: txaio==20.4.1 in ./venv/lib/python3.6/site-packages (from -rr test.txt (line 8)) (20.4.1)
INFO: pip is looking at multiple versions of attrs to determine which version is compatible with other requirements. This could take a while.
Then pip seems to hang. If you wait long enough (about 5 minutes), it prints:
ERROR: Exception:
Traceback (most recent call last):
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 180, in _main
status = self.run(options, args)
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 204, in wrapper
return func(self, options, args)
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 319, in run
reqs, check_supported_wheels=not options.target_dir
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 128, in resolve
requirements, max_rounds=try_to_avoid_resolution_too_deep
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 473, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/home/sigma/dev/contour/daqAdaptor/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 384, in resolve
raise ResolutionTooDeep(max_rounds)
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 2000000
```
### Code of Conduct
- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).
|
FYI the versions are 21.0.1 and 21.1, not 2.0 and 2.1. You’re one digit off.
Ahh. What's a digit here or there :) I updated the original comment.
I'm also facing this error.
@junpuf Please describe what you were doing when you encountered the issue.
Hi @uranusjr
Sorry for the delayed response.
I have seen this error in one of my conda environment builds that uses `environment.yml`; the pip dependencies involved are listed below:
```
"boto3",
"s3fs",
"multi-model-server==1.1.2",
"keras-mxnet==2.2.4.2",
"opencv-python==4.5.1.48"
```
Below is the error message.
```
Collecting aiobotocore>=1.0.1
Downloading aiobotocore-1.3.0.tar.gz (48 kB)
Downloading aiobotocore-1.2.2.tar.gz (48 kB)
Downloading aiobotocore-1.2.1.tar.gz (48 kB)
Downloading aiobotocore-1.2.0.tar.gz (47 kB)
Downloading aiobotocore-1.1.2-py3-none-any.whl (45 kB)
Downloading aiobotocore-1.1.1-py3-none-any.whl (45 kB)
Downloading aiobotocore-1.1.0-py3-none-any.whl (43 kB)
Downloading aiobotocore-1.0.7-py3-none-any.whl (42 kB)
Downloading aiobotocore-1.0.6-py3-none-any.whl (42 kB)
Downloading aiobotocore-1.0.5-py3-none-any.whl (42 kB)
Downloading aiobotocore-1.0.4-py3-none-any.whl (41 kB)
Downloading aiobotocore-1.0.3-py3-none-any.whl (40 kB)
Downloading aiobotocore-1.0.2-py3-none-any.whl (40 kB)
Downloading aiobotocore-1.0.1-py3-none-any.whl (40 kB)
INFO: pip is looking at multiple versions of fsspec to determine which version is compatible with other requirements. This could take a while.
Pip subprocess error:
ERROR: Exception:
...
...
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 2000000
failed
CondaEnvException: Pip failed
```
Thanks for the test cases! I have identified the issue and will post a patch shortly.
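For context while reading the patch that follows, the failure mode can be pictured with a small sketch (illustrative names only, not pip's real `FoundCandidates` API): if the already-installed distribution keeps being offered as a candidate even after it is known to conflict, every resolution round rediscovers the same conflict until the 2000000-round limit is hit.
```python
def iter_candidates(installed, index_candidates, incompatible_ids):
    """Illustrative sketch only; not pip's real candidate iteration.

    The guard on the installed candidate is the essence of the fix:
    without it, a pinned-but-conflicting installed distribution is
    handed back to the resolver on every round.
    """
    if installed is not None and id(installed) not in incompatible_ids:
        yield installed
    for candidate in index_candidates:
        if id(candidate) not in incompatible_ids:
            yield candidate
```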
|
2021-05-18T14:59:10Z
|
<patch>
diff --git a/src/pip/_internal/resolution/resolvelib/factory.py b/src/pip/_internal/resolution/resolvelib/factory.py
--- a/src/pip/_internal/resolution/resolvelib/factory.py
+++ b/src/pip/_internal/resolution/resolvelib/factory.py
@@ -240,18 +240,29 @@ def _iter_found_candidates(
hashes &= ireq.hashes(trust_internet=False)
extras |= frozenset(ireq.extras)
- # Get the installed version, if it matches, unless the user
- # specified `--force-reinstall`, when we want the version from
- # the index instead.
- installed_candidate = None
- if not self._force_reinstall and name in self._installed_dists:
- installed_dist = self._installed_dists[name]
- if specifier.contains(installed_dist.version, prereleases=True):
- installed_candidate = self._make_candidate_from_dist(
- dist=installed_dist,
- extras=extras,
- template=template,
- )
+ def _get_installed_candidate() -> Optional[Candidate]:
+ """Get the candidate for the currently-installed version."""
+ # If --force-reinstall is set, we want the version from the index
+ # instead, so we "pretend" there is nothing installed.
+ if self._force_reinstall:
+ return None
+ try:
+ installed_dist = self._installed_dists[name]
+ except KeyError:
+ return None
+ # Don't use the installed distribution if its version does not fit
+ # the current dependency graph.
+ if not specifier.contains(installed_dist.version, prereleases=True):
+ return None
+ candidate = self._make_candidate_from_dist(
+ dist=installed_dist,
+ extras=extras,
+ template=template,
+ )
+ # The candidate is a known incompatiblity. Don't use it.
+ if id(candidate) in incompatible_ids:
+ return None
+ return candidate
def iter_index_candidate_infos():
# type: () -> Iterator[IndexCandidateInfo]
@@ -283,7 +294,7 @@ def iter_index_candidate_infos():
return FoundCandidates(
iter_index_candidate_infos,
- installed_candidate,
+ _get_installed_candidate(),
prefers_installed,
incompatible_ids,
)
</patch>
|
[]
|
[]
| |||
mesonbuild__meson-5116
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
openmp arguments added when openmp is not present
It turns out that we assume that a compiler will always have openmp support if it can have openmp support, which is a bad assumption. If GCC is compiled without openmp support we'll still introduce `-fopenmp` into the command line arguments for the compiler, which it doesn't recognize and will reject.
I have some WIP patches for this already.
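The rough idea, sketched here as a standalone Python probe rather than the actual WIP patch, is to test the compiler before handing out the flag, e.g. by compiling a snippet that only builds when `_OPENMP` is defined:
```python
import os
import subprocess
import tempfile


def compiler_supports_openmp(compiler=("cc",), flag="-fopenmp"):
    """Return True only if `flag` is accepted and actually enables OpenMP."""
    src = (
        "#ifndef _OPENMP\n"
        "#error OpenMP is not enabled by this compiler\n"
        "#endif\n"
        "int main(void) { return 0; }\n"
    )
    with tempfile.TemporaryDirectory() as tmpdir:
        src_path = os.path.join(tmpdir, "check_openmp.c")
        with open(src_path, "w") as f:
            f.write(src)
        # Compile and link a trivial program; this fails both when the flag
        # is rejected outright and when it is accepted but OpenMP is unusable.
        result = subprocess.run(
            list(compiler) + [flag, src_path, "-o", os.path.join(tmpdir, "check_openmp")],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
    return result.returncode == 0
```
Only when a probe like this (or an equivalent `_OPENMP` check through the compiler's `get_define`) succeeds should the OpenMP flags be added to the command line.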
</issue>
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13 [](https://lgtm.com/projects/g/mesonbuild/meson/context:python)
14 [](https://lgtm.com/projects/g/mesonbuild/meson/alerts)
15
16 #### Dependencies
17
18 - [Python](http://python.org) (version 3.5 or newer)
19 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
20
21 #### Installing from source
22
23 You can run Meson directly from a revision control checkout or an
24 extracted tarball. If you wish you can install it locally with the
25 standard Python distutils command `python3 setup.py install <your
26 options here>`.
27
28 Meson is also available from
29 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
30 with `pip3 install meson` (this does not require a source checkout,
31 pip will download the package automatically). The exact command to
32 type to install with pip can vary between systems, be sure to use the
33 Python 3 version of pip.
34
35 #### Running
36
37 Meson requires that you have a source directory and a build directory
38 and that these two are different. In your source root must exist a file
39 called 'meson.build'. To generate the build system run this command:
40
41 `meson <source directory> <build directory>`
42
43 Depending on how you obtained Meson the command might also be called
44 `meson.py` instead of plain `meson`. In the rest of this document we
45 are going to use the latter form.
46
47 You can omit either of the two directories, and Meson will substitute
48 the current directory and autodetect what you mean. This allows you to
49 do things like this:
50
51 `cd source_root; mkdir builddir; cd builddir; meson ..`
52
53 or
54
55 `cd source_root; mkdir builddir; meson builddir`
56
57 To compile, cd into your build directory and type `ninja`. To run unit
58 tests, type `ninja test`.
59
60 Install is the same but it can take an extra argument:
61
62 `DESTDIR=/destdir/path ninja install`
63
64 `DESTDIR` can be omitted. If you are installing to system directories,
65 you may need to run this command with sudo.
66
67
68 #### Contributing
69
70 We love code contributions. See the [contributing.md](contributing.md) file for
71 details.
72
73
74 #### IRC
75
76 The irc channel for Meson is `#mesonbuild` over at Freenode.
77
78 You can use [FreeNode's official webchat][meson_irc]
79 to connect to this channel.
80
81 [meson_irc]: https://webchat.freenode.net/?channels=%23mesonbuild
82
83 #### Further info
84
85 More information about the Meson build system can be found at the
86 [project's home page](http://mesonbuild.com).
87
88 Meson is a registered trademark of Jussi Pakkanen.
89
[end of README.md]
[start of mesonbuild/compilers/fortran.py]
1 # Copyright 2012-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import List
15 import subprocess, os
16 from pathlib import Path
17
18 from .c import CCompiler
19 from .compilers import (
20 CompilerType,
21 apple_buildtype_linker_args,
22 gnulike_buildtype_args,
23 gnulike_buildtype_linker_args,
24 gnu_optimization_args,
25 clike_debug_args,
26 Compiler,
27 GnuCompiler,
28 ClangCompiler,
29 ElbrusCompiler,
30 IntelCompiler,
31 PGICompiler
32 )
33
34 from mesonbuild.mesonlib import EnvironmentException, is_osx
35
36
37 class FortranCompiler(Compiler):
38 library_dirs_cache = CCompiler.library_dirs_cache
39 program_dirs_cache = CCompiler.library_dirs_cache
40 find_library_cache = CCompiler.library_dirs_cache
41
42 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwargs):
43 self.language = 'fortran'
44 Compiler.__init__(self, exelist, version, **kwargs)
45 cc = CCompiler(exelist, version, is_cross, exe_wrapper, **kwargs)
46 self.id = 'unknown'
47 self.is_cross = cc.is_cross
48 self.exe_wrapper = cc.exe_wrapper
49
50 def get_display_language(self):
51 return 'Fortran'
52
53 def needs_static_linker(self):
54 return CCompiler.needs_static_linker(self)
55
56 def get_always_args(self):
57 return CCompiler.get_always_args(self)
58
59 def get_linker_debug_crt_args(self):
60 return CCompiler.get_linker_debug_crt_args(self)
61
62 def get_no_stdinc_args(self):
63 return CCompiler.get_no_stdinc_args(self)
64
65 def get_no_stdlib_link_args(self):
66 return CCompiler.get_no_stdlib_link_args(self)
67
68 def get_warn_args(self, level):
69 return CCompiler.get_warn_args(self, level)
70
71 def get_no_warn_args(self):
72 return CCompiler.get_no_warn_args(self)
73
74 def get_soname_args(self, *args):
75 return CCompiler.get_soname_args(self, *args)
76
77 def sanity_check(self, work_dir, environment):
78 source_name = os.path.join(work_dir, 'sanitycheckf.f90')
79 binary_name = os.path.join(work_dir, 'sanitycheckf')
80 with open(source_name, 'w') as ofile:
81 ofile.write('print *, "Fortran compilation is working."; end')
82 pc = subprocess.Popen(self.exelist + [source_name, '-o', binary_name])
83 pc.wait()
84 if pc.returncode != 0:
85 raise EnvironmentException('Compiler %s can not compile programs.' % self.name_string())
86 if self.is_cross:
87 if self.exe_wrapper is None:
88 # Can't check if the binaries run so we have to assume they do
89 return
90 cmdlist = self.exe_wrapper + [binary_name]
91 else:
92 cmdlist = [binary_name]
93 pe = subprocess.Popen(cmdlist, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
94 pe.wait()
95 if pe.returncode != 0:
96 raise EnvironmentException('Executables created by Fortran compiler %s are not runnable.' % self.name_string())
97
98 def get_std_warn_args(self, level):
99 return FortranCompiler.std_warn_args
100
101 def get_buildtype_args(self, buildtype):
102 return gnulike_buildtype_args[buildtype]
103
104 def get_optimization_args(self, optimization_level):
105 return gnu_optimization_args[optimization_level]
106
107 def get_debug_args(self, is_debug):
108 return clike_debug_args[is_debug]
109
110 def get_buildtype_linker_args(self, buildtype):
111 if is_osx():
112 return apple_buildtype_linker_args[buildtype]
113 return gnulike_buildtype_linker_args[buildtype]
114
115 def split_shlib_to_parts(self, fname):
116 return CCompiler.split_shlib_to_parts(self, fname)
117
118 def build_rpath_args(self, *args):
119 return CCompiler.build_rpath_args(self, *args)
120
121 def get_dependency_gen_args(self, outtarget, outfile):
122 return []
123
124 def depfile_for_object(self, objfile):
125 return CCompiler.depfile_for_object(self, objfile)
126
127 def get_depfile_suffix(self):
128 return CCompiler.get_depfile_suffix(self)
129
130 def get_exelist(self):
131 return CCompiler.get_exelist(self)
132
133 def get_linker_exelist(self):
134 return CCompiler.get_linker_exelist(self)
135
136 def get_preprocess_only_args(self):
137 return ['-cpp'] + CCompiler.get_preprocess_only_args(self)
138
139 def get_compile_only_args(self):
140 return CCompiler.get_compile_only_args(self)
141
142 def get_no_optimization_args(self):
143 return CCompiler.get_no_optimization_args(self)
144
145 def get_compiler_check_args(self):
146 return CCompiler.get_compiler_check_args(self)
147
148 def get_output_args(self, target):
149 return CCompiler.get_output_args(self, target)
150
151 def get_linker_output_args(self, outputname):
152 return CCompiler.get_linker_output_args(self, outputname)
153
154 def get_coverage_args(self):
155 return CCompiler.get_coverage_args(self)
156
157 def get_coverage_link_args(self):
158 return CCompiler.get_coverage_link_args(self)
159
160 def get_werror_args(self):
161 return CCompiler.get_werror_args(self)
162
163 def get_std_exe_link_args(self):
164 return CCompiler.get_std_exe_link_args(self)
165
166 def get_include_args(self, path, is_system):
167 return CCompiler.get_include_args(self, path, is_system)
168
169 def get_module_incdir_args(self):
170 return ('-I', )
171
172 def get_module_outdir_args(self, path):
173 return ['-module', path]
174
175 def compute_parameters_with_absolute_paths(self, parameter_list, build_dir):
176 for idx, i in enumerate(parameter_list):
177 if i[:2] == '-I' or i[:2] == '-L':
178 parameter_list[idx] = i[:2] + os.path.normpath(os.path.join(build_dir, i[2:]))
179
180 return parameter_list
181
182 def module_name_to_filename(self, module_name: str) -> str:
183 return module_name.lower() + '.mod'
184
185 def get_std_shared_lib_link_args(self):
186 return CCompiler.get_std_shared_lib_link_args(self)
187
188 def _get_search_dirs(self, *args, **kwargs):
189 return CCompiler._get_search_dirs(self, *args, **kwargs)
190
191 def get_compiler_dirs(self, *args, **kwargs):
192 return CCompiler.get_compiler_dirs(self, *args, **kwargs)
193
194 def get_library_dirs(self, *args, **kwargs):
195 return CCompiler.get_library_dirs(self, *args, **kwargs)
196
197 def get_pic_args(self):
198 return CCompiler.get_pic_args(self)
199
200 def name_string(self):
201 return CCompiler.name_string(self)
202
203 def get_linker_search_args(self, dirname):
204 return CCompiler.get_linker_search_args(self, dirname)
205
206 def get_default_include_dirs(self):
207 return CCompiler.get_default_include_dirs(self)
208
209 def gen_export_dynamic_link_args(self, env):
210 return CCompiler.gen_export_dynamic_link_args(self, env)
211
212 def gen_import_library_args(self, implibname):
213 return CCompiler.gen_import_library_args(self, implibname)
214
215 def _get_compiler_check_args(self, env, extra_args, dependencies, mode='compile'):
216 return CCompiler._get_compiler_check_args(self, env, extra_args, dependencies, mode='compile')
217
218 def compiles(self, code, env, *, extra_args=None, dependencies=None, mode='compile'):
219 return CCompiler.compiles(self, code, env, extra_args=extra_args,
220 dependencies=dependencies, mode=mode)
221
222 def _build_wrapper(self, code, env, extra_args, dependencies=None, mode='compile', want_output=False):
223 return CCompiler._build_wrapper(self, code, env, extra_args, dependencies, mode, want_output)
224
225 def links(self, code, env, *, extra_args=None, dependencies=None):
226 return CCompiler.links(self, code, env, extra_args=extra_args,
227 dependencies=dependencies)
228
229 def run(self, code, env, *, extra_args=None, dependencies=None):
230 return CCompiler.run(self, code, env, extra_args=extra_args, dependencies=dependencies)
231
232 def _get_patterns(self, *args, **kwargs):
233 return CCompiler._get_patterns(self, *args, **kwargs)
234
235 def get_library_naming(self, *args, **kwargs):
236 return CCompiler.get_library_naming(self, *args, **kwargs)
237
238 def find_library_real(self, *args):
239 return CCompiler.find_library_real(self, *args)
240
241 def find_library_impl(self, *args):
242 return CCompiler.find_library_impl(self, *args)
243
244 def find_library(self, libname, env, extra_dirs, libtype='shared-static'):
245 code = '''program main
246 call exit(0)
247 end program main'''
248 return self.find_library_impl(libname, env, extra_dirs, code, libtype)
249
250 def thread_flags(self, env):
251 return CCompiler.thread_flags(self, env)
252
253 def thread_link_flags(self, env):
254 return CCompiler.thread_link_flags(self, env)
255
256 def linker_to_compiler_args(self, args):
257 return CCompiler.linker_to_compiler_args(self, args)
258
259 def has_arguments(self, args, env, code, mode):
260 return CCompiler.has_arguments(self, args, env, code, mode)
261
262 def has_multi_arguments(self, args, env):
263 return CCompiler.has_multi_arguments(self, args, env)
264
265 def has_header(self, hname, prefix, env, *, extra_args=None, dependencies=None):
266 return CCompiler.has_header(self, hname, prefix, env, extra_args=extra_args, dependencies=dependencies)
267
268 def get_define(self, dname, prefix, env, extra_args, dependencies):
269 return CCompiler.get_define(self, dname, prefix, env, extra_args, dependencies)
270
271 @classmethod
272 def _get_trials_from_pattern(cls, pattern, directory, libname):
273 return CCompiler._get_trials_from_pattern(pattern, directory, libname)
274
275 @staticmethod
276 def _get_file_from_list(env, files: List[str]) -> Path:
277 return CCompiler._get_file_from_list(env, files)
278
279 class GnuFortranCompiler(GnuCompiler, FortranCompiler):
280 def __init__(self, exelist, version, compiler_type, is_cross, exe_wrapper=None, defines=None, **kwargs):
281 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwargs)
282 GnuCompiler.__init__(self, compiler_type, defines)
283 default_warn_args = ['-Wall']
284 self.warn_args = {'0': [],
285 '1': default_warn_args,
286 '2': default_warn_args + ['-Wextra'],
287 '3': default_warn_args + ['-Wextra', '-Wpedantic']}
288
289 def get_dependency_gen_args(self, outtarget, outfile):
290 # Disabled until this is fixed:
291 # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=62162
292 # return ['-cpp', '-MD', '-MQ', outtarget]
293 return []
294
295 def get_module_outdir_args(self, path):
296 return ['-J' + path]
297
298 def language_stdlib_only_link_flags(self):
299 return ['-lgfortran', '-lm']
300
301
302 class ElbrusFortranCompiler(GnuFortranCompiler, ElbrusCompiler):
303 def __init__(self, exelist, version, compiler_type, is_cross, exe_wrapper=None, defines=None, **kwargs):
304 GnuFortranCompiler.__init__(self, exelist, version, compiler_type, is_cross, exe_wrapper, defines, **kwargs)
305 ElbrusCompiler.__init__(self, compiler_type, defines)
306
307 class G95FortranCompiler(FortranCompiler):
308 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
309 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
310 self.id = 'g95'
311 default_warn_args = ['-Wall']
312 self.warn_args = {'0': [],
313 '1': default_warn_args,
314 '2': default_warn_args + ['-Wextra'],
315 '3': default_warn_args + ['-Wextra', '-pedantic']}
316
317 def get_module_outdir_args(self, path):
318 return ['-fmod=' + path]
319
320 def get_no_warn_args(self):
321 # FIXME: Confirm that there's no compiler option to disable all warnings
322 return []
323
324
325 class SunFortranCompiler(FortranCompiler):
326 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
327 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
328 self.id = 'sun'
329
330 def get_dependency_gen_args(self, outtarget, outfile):
331 return ['-fpp']
332
333 def get_always_args(self):
334 return []
335
336 def get_warn_args(self, level):
337 return []
338
339 def get_module_incdir_args(self):
340 return ('-M', )
341
342 def get_module_outdir_args(self, path):
343 return ['-moddir=' + path]
344
345 def openmp_flags(self):
346 return ['-xopenmp']
347
348
349 class IntelFortranCompiler(IntelCompiler, FortranCompiler):
350 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
351 self.file_suffixes = ('f90', 'f', 'for', 'ftn', 'fpp')
352 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
353 # FIXME: Add support for OS X and Windows in detect_fortran_compiler so
354 # we are sent the type of compiler
355 IntelCompiler.__init__(self, CompilerType.ICC_STANDARD)
356 self.id = 'intel'
357 default_warn_args = ['-warn', 'general', '-warn', 'truncated_source']
358 self.warn_args = {'0': [],
359 '1': default_warn_args,
360 '2': default_warn_args + ['-warn', 'unused'],
361 '3': ['-warn', 'all']}
362
363 def get_preprocess_only_args(self):
364 return ['-cpp', '-EP']
365
366 def get_always_args(self):
367 """Ifort doesn't have -pipe."""
368 val = super().get_always_args()
369 val.remove('-pipe')
370 return val
371
372 def language_stdlib_only_link_flags(self):
373 return ['-lifcore', '-limf']
374
375
376 class PathScaleFortranCompiler(FortranCompiler):
377 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
378 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
379 self.id = 'pathscale'
380 default_warn_args = ['-fullwarn']
381 self.warn_args = {'0': [],
382 '1': default_warn_args,
383 '2': default_warn_args,
384 '3': default_warn_args}
385
386 def openmp_flags(self):
387 return ['-mp']
388
389
390 class PGIFortranCompiler(PGICompiler, FortranCompiler):
391 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
392 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
393 PGICompiler.__init__(self, CompilerType.PGI_STANDARD)
394
395
396 class FlangFortranCompiler(ClangCompiler, FortranCompiler):
397 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
398 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
399 ClangCompiler.__init__(self, CompilerType.CLANG_STANDARD)
400 self.id = 'flang'
401 default_warn_args = ['-Minform=inform']
402 self.warn_args = {'0': [],
403 '1': default_warn_args,
404 '2': default_warn_args,
405 '3': default_warn_args}
406
407 class Open64FortranCompiler(FortranCompiler):
408 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
409 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
410 self.id = 'open64'
411 default_warn_args = ['-fullwarn']
412 self.warn_args = {'0': [],
413 '1': default_warn_args,
414 '2': default_warn_args,
415 '3': default_warn_args}
416
417 def openmp_flags(self):
418 return ['-mp']
419
420
421 class NAGFortranCompiler(FortranCompiler):
422 def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags):
423 FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags)
424 self.id = 'nagfor'
425
426 def get_warn_args(self, level):
427 return []
428
429 def get_module_outdir_args(self, path):
430 return ['-mdir', path]
431
432 def openmp_flags(self):
433 return ['-openmp']
434
[end of mesonbuild/compilers/fortran.py]
[start of mesonbuild/dependencies/misc.py]
1 # Copyright 2013-2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # This file contains the detection logic for miscellaneous external dependencies.
16
17 from pathlib import Path
18 import functools
19 import os
20 import re
21 import shlex
22 import sysconfig
23
24 from .. import mlog
25 from .. import mesonlib
26 from ..environment import detect_cpu_family
27
28 from .base import (
29 DependencyException, DependencyMethods, ExternalDependency,
30 ExternalProgram, ExtraFrameworkDependency, PkgConfigDependency,
31 CMakeDependency, ConfigToolDependency,
32 )
33
34
35 class CoarrayDependency(ExternalDependency):
36 """
37 Coarrays are a Fortran 2008 feature.
38
39 Coarrays are sometimes implemented via external library (GCC+OpenCoarrays),
40 while other compilers just build in support (Cray, IBM, Intel, NAG).
41 Coarrays may be thought of as a high-level language abstraction of
42 low-level MPI calls.
43 """
44 def __init__(self, environment, kwargs):
45 super().__init__('coarray', environment, 'fortran', kwargs)
46 kwargs['required'] = False
47 kwargs['silent'] = True
48 self.is_found = False
49
50 cid = self.get_compiler().get_id()
51 if cid == 'gcc':
52 """ OpenCoarrays is the most commonly used method for Fortran Coarray with GCC """
53 self.is_found = True
54 kwargs['modules'] = 'OpenCoarrays::caf_mpi'
55 cmakedep = CMakeDependency('OpenCoarrays', environment, kwargs)
56 if not cmakedep.found():
57 self.compile_args = ['-fcoarray=single']
58 self.version = 'single image'
59 return
60
61 self.compile_args = cmakedep.get_compile_args()
62 self.link_args = cmakedep.get_link_args()
63 self.version = cmakedep.get_version()
64 elif cid == 'intel':
65 """ Coarrays are built into Intel compilers, no external library needed """
66 self.is_found = True
67 self.link_args = ['-coarray=shared']
68 self.compile_args = self.link_args
69 elif cid == 'nagfor':
70 """ NAG doesn't require any special arguments for Coarray """
71 self.is_found = True
72
73
74 class HDF5Dependency(ExternalDependency):
75
76 def __init__(self, environment, kwargs):
77 language = kwargs.get('language', 'c')
78 super().__init__('hdf5', environment, language, kwargs)
79 kwargs['required'] = False
80 kwargs['silent'] = True
81 self.is_found = False
82
83 pkgconfig_files = ['hdf5']
84
85 if language not in ('c', 'cpp', 'fortran'):
86 raise DependencyException('Language {} is not supported with HDF5.'.format(language))
87
88 for pkg in pkgconfig_files:
89 try:
90 pkgdep = PkgConfigDependency(pkg, environment, kwargs, language=self.language)
91 if pkgdep.found():
92 self.compile_args = pkgdep.get_compile_args()
93 # derive needed libraries by language
94 pd_link_args = pkgdep.get_link_args()
95 link_args = []
96 for larg in pd_link_args:
97 lpath = Path(larg)
98 if lpath.is_file():
99 if language == 'cpp':
100 link_args.append(str(lpath.parent / (lpath.stem + '_hl_cpp' + lpath.suffix)))
101 link_args.append(str(lpath.parent / (lpath.stem + '_cpp' + lpath.suffix)))
102 elif language == 'fortran':
103 link_args.append(str(lpath.parent / (lpath.stem + 'hl_fortran' + lpath.suffix)))
104 link_args.append(str(lpath.parent / (lpath.stem + '_fortran' + lpath.suffix)))
105
106 # HDF5 C libs are required by other HDF5 languages
107 link_args.append(str(lpath.parent / (lpath.stem + '_hl' + lpath.suffix)))
108 link_args.append(larg)
109 else:
110 link_args.append(larg)
111
112 self.link_args = link_args
113 self.version = pkgdep.get_version()
114 self.is_found = True
115 self.pcdep = pkgdep
116 break
117 except Exception:
118 pass
119
120 class NetCDFDependency(ExternalDependency):
121
122 def __init__(self, environment, kwargs):
123 language = kwargs.get('language', 'c')
124 super().__init__('netcdf', environment, language, kwargs)
125 kwargs['required'] = False
126 kwargs['silent'] = True
127 self.is_found = False
128
129 pkgconfig_files = ['netcdf']
130
131 if language not in ('c', 'cpp', 'fortran'):
132 raise DependencyException('Language {} is not supported with NetCDF.'.format(language))
133
134 if language == 'fortran':
135 pkgconfig_files.append('netcdf-fortran')
136
137 self.compile_args = []
138 self.link_args = []
139 self.pcdep = []
140 for pkg in pkgconfig_files:
141 pkgdep = PkgConfigDependency(pkg, environment, kwargs, language=self.language)
142 if pkgdep.found():
143 self.compile_args.extend(pkgdep.get_compile_args())
144 self.link_args.extend(pkgdep.get_link_args())
145 self.version = pkgdep.get_version()
146 self.is_found = True
147 self.pcdep.append(pkgdep)
148
149 class MPIDependency(ExternalDependency):
150
151 def __init__(self, environment, kwargs):
152 language = kwargs.get('language', 'c')
153 super().__init__('mpi', environment, language, kwargs)
154 kwargs['required'] = False
155 kwargs['silent'] = True
156 self.is_found = False
157
158 # NOTE: Only OpenMPI supplies a pkg-config file at the moment.
159 if language == 'c':
160 env_vars = ['MPICC']
161 pkgconfig_files = ['ompi-c']
162 default_wrappers = ['mpicc']
163 elif language == 'cpp':
164 env_vars = ['MPICXX']
165 pkgconfig_files = ['ompi-cxx']
166 default_wrappers = ['mpic++', 'mpicxx', 'mpiCC']
167 elif language == 'fortran':
168 env_vars = ['MPIFC', 'MPIF90', 'MPIF77']
169 pkgconfig_files = ['ompi-fort']
170 default_wrappers = ['mpifort', 'mpif90', 'mpif77']
171 else:
172 raise DependencyException('Language {} is not supported with MPI.'.format(language))
173
174 for pkg in pkgconfig_files:
175 try:
176 pkgdep = PkgConfigDependency(pkg, environment, kwargs, language=self.language)
177 if pkgdep.found():
178 self.compile_args = pkgdep.get_compile_args()
179 self.link_args = pkgdep.get_link_args()
180 self.version = pkgdep.get_version()
181 self.is_found = True
182 self.pcdep = pkgdep
183 break
184 except Exception:
185 pass
186
187 if not self.is_found:
188 # Prefer environment.
189 for var in env_vars:
190 if var in os.environ:
191 wrappers = [os.environ[var]]
192 break
193 else:
194 # Or search for default wrappers.
195 wrappers = default_wrappers
196
197 for prog in wrappers:
198 result = self._try_openmpi_wrapper(prog)
199 if result is not None:
200 self.is_found = True
201 self.version = result[0]
202 self.compile_args = self._filter_compile_args(result[1])
203 self.link_args = self._filter_link_args(result[2])
204 break
205 result = self._try_other_wrapper(prog)
206 if result is not None:
207 self.is_found = True
208 self.version = result[0]
209 self.compile_args = self._filter_compile_args(result[1])
210 self.link_args = self._filter_link_args(result[2])
211 break
212
213 if not self.is_found and mesonlib.is_windows():
214 result = self._try_msmpi()
215 if result is not None:
216 self.is_found = True
217 self.version, self.compile_args, self.link_args = result
218
219 def _filter_compile_args(self, args):
220 """
221 MPI wrappers return a bunch of garbage args.
222 Drop -O2 and everything that is not needed.
223 """
224 result = []
225 multi_args = ('-I', )
226 if self.language == 'fortran':
227 fc = self.env.coredata.compilers['fortran']
228 multi_args += fc.get_module_incdir_args()
229
230 include_next = False
231 for f in args:
232 if f.startswith(('-D', '-f') + multi_args) or f == '-pthread' \
233 or (f.startswith('-W') and f != '-Wall' and not f.startswith('-Werror')):
234 result.append(f)
235 if f in multi_args:
236 # Path is a separate argument.
237 include_next = True
238 elif include_next:
239 include_next = False
240 result.append(f)
241 return result
242
243 def _filter_link_args(self, args):
244 """
245 MPI wrappers return a bunch of garbage args.
246 Drop -O2 and everything that is not needed.
247 """
248 result = []
249 include_next = False
250 for f in args:
251 if f.startswith(('-L', '-l', '-Xlinker')) or f == '-pthread' \
252 or (f.startswith('-W') and f != '-Wall' and not f.startswith('-Werror')):
253 result.append(f)
254 if f in ('-L', '-Xlinker'):
255 include_next = True
256 elif include_next:
257 include_next = False
258 result.append(f)
259 return result
260
261 def _try_openmpi_wrapper(self, prog):
262 prog = ExternalProgram(prog, silent=True)
263 if prog.found():
264 cmd = prog.get_command() + ['--showme:compile']
265 p, o, e = mesonlib.Popen_safe(cmd)
266 p.wait()
267 if p.returncode != 0:
268 mlog.debug('Command', mlog.bold(cmd), 'failed to run:')
269 mlog.debug(mlog.bold('Standard output\n'), o)
270 mlog.debug(mlog.bold('Standard error\n'), e)
271 return
272 cargs = shlex.split(o)
273
274 cmd = prog.get_command() + ['--showme:link']
275 p, o, e = mesonlib.Popen_safe(cmd)
276 p.wait()
277 if p.returncode != 0:
278 mlog.debug('Command', mlog.bold(cmd), 'failed to run:')
279 mlog.debug(mlog.bold('Standard output\n'), o)
280 mlog.debug(mlog.bold('Standard error\n'), e)
281 return
282 libs = shlex.split(o)
283
284 cmd = prog.get_command() + ['--showme:version']
285 p, o, e = mesonlib.Popen_safe(cmd)
286 p.wait()
287 if p.returncode != 0:
288 mlog.debug('Command', mlog.bold(cmd), 'failed to run:')
289 mlog.debug(mlog.bold('Standard output\n'), o)
290 mlog.debug(mlog.bold('Standard error\n'), e)
291 return
292 version = re.search(r'\d+.\d+.\d+', o)
293 if version:
294 version = version.group(0)
295 else:
296 version = None
297
298 return version, cargs, libs
299
300 def _try_other_wrapper(self, prog):
301 prog = ExternalProgram(prog, silent=True)
302 if prog.found():
303 cmd = prog.get_command() + ['-show']
304 p, o, e = mesonlib.Popen_safe(cmd)
305 p.wait()
306 if p.returncode != 0:
307 mlog.debug('Command', mlog.bold(cmd), 'failed to run:')
308 mlog.debug(mlog.bold('Standard output\n'), o)
309 mlog.debug(mlog.bold('Standard error\n'), e)
310 return
311 args = shlex.split(o)
312
313 version = None
314
315 return version, args, args
316
317 def _try_msmpi(self):
318 if self.language == 'cpp':
319 # MS-MPI does not support the C++ version of MPI, only the standard C API.
320 return
321 if 'MSMPI_INC' not in os.environ:
322 return
323 incdir = os.environ['MSMPI_INC']
324 arch = detect_cpu_family(self.env.coredata.compilers)
325 if arch == 'x86':
326 if 'MSMPI_LIB32' not in os.environ:
327 return
328 libdir = os.environ['MSMPI_LIB32']
329 post = 'x86'
330 elif arch == 'x86_64':
331 if 'MSMPI_LIB64' not in os.environ:
332 return
333 libdir = os.environ['MSMPI_LIB64']
334 post = 'x64'
335 else:
336 return
337 if self.language == 'fortran':
338 return (None,
339 ['-I' + incdir, '-I' + os.path.join(incdir, post)],
340 [os.path.join(libdir, 'msmpi.lib'), os.path.join(libdir, 'msmpifec.lib')])
341 else:
342 return (None,
343 ['-I' + incdir, '-I' + os.path.join(incdir, post)],
344 [os.path.join(libdir, 'msmpi.lib')])
345
346
347 class OpenMPDependency(ExternalDependency):
348 # Map date of specification release (which is the macro value) to a version.
349 VERSIONS = {
350 '201811': '5.0',
351 '201611': '5.0-revision1', # This is supported by ICC 19.x
352 '201511': '4.5',
353 '201307': '4.0',
354 '201107': '3.1',
355 '200805': '3.0',
356 '200505': '2.5',
357 '200203': '2.0',
358 '199810': '1.0',
359 }
360
361 def __init__(self, environment, kwargs):
362 language = kwargs.get('language')
363 super().__init__('openmp', environment, language, kwargs)
364 self.is_found = False
365 try:
366 openmp_date = self.clib_compiler.get_define('_OPENMP', '', self.env, [], [self])
367 except mesonlib.EnvironmentException as e:
368 mlog.debug('OpenMP support not available in the compiler')
369 mlog.debug(e)
370 openmp_date = False
371
372 if openmp_date:
373 self.version = self.VERSIONS[openmp_date]
374 if self.clib_compiler.has_header('omp.h', '', self.env, dependencies=[self]):
375 self.is_found = True
376 else:
377 mlog.log(mlog.yellow('WARNING:'), 'OpenMP found but omp.h missing.')
378
379 def need_openmp(self) -> bool:
380 return True
381
382
383 class ThreadDependency(ExternalDependency):
384 def __init__(self, environment, kwargs):
385 super().__init__('threads', environment, None, kwargs)
386 self.name = 'threads'
387 self.is_found = True
388
389 def need_threads(self):
390 return True
391
392
393 class Python3Dependency(ExternalDependency):
394 def __init__(self, environment, kwargs):
395 super().__init__('python3', environment, None, kwargs)
396
397 if self.want_cross:
398 return
399
400 self.name = 'python3'
401 self.static = kwargs.get('static', False)
402 # We can only be sure that it is Python 3 at this point
403 self.version = '3'
404 self._find_libpy3_windows(environment)
405
406 @classmethod
407 def _factory(cls, environment, kwargs):
408 methods = cls._process_method_kw(kwargs)
409 candidates = []
410
411 if DependencyMethods.PKGCONFIG in methods:
412 candidates.append(functools.partial(PkgConfigDependency, 'python3', environment, kwargs))
413
414 if DependencyMethods.SYSCONFIG in methods:
415 candidates.append(functools.partial(Python3Dependency, environment, kwargs))
416
417 if DependencyMethods.EXTRAFRAMEWORK in methods:
418 # In OSX the Python 3 framework does not have a version
419 # number in its name.
420 # There is a python in /System/Library/Frameworks, but that's
421 # python 2, Python 3 will always be in /Library
422 candidates.append(functools.partial(
423 ExtraFrameworkDependency, 'Python', False, ['/Library/Frameworks'],
424 environment, kwargs.get('language', None), kwargs))
425
426 return candidates
427
428 @staticmethod
429 def get_windows_python_arch():
430 pyplat = sysconfig.get_platform()
431 if pyplat == 'mingw':
432 pycc = sysconfig.get_config_var('CC')
433 if pycc.startswith('x86_64'):
434 return '64'
435 elif pycc.startswith(('i686', 'i386')):
436 return '32'
437 else:
438 mlog.log('MinGW Python built with unknown CC {!r}, please file'
439 'a bug'.format(pycc))
440 return None
441 elif pyplat == 'win32':
442 return '32'
443 elif pyplat in ('win64', 'win-amd64'):
444 return '64'
445 mlog.log('Unknown Windows Python platform {!r}'.format(pyplat))
446 return None
447
448 def get_windows_link_args(self):
449 pyplat = sysconfig.get_platform()
450 if pyplat.startswith('win'):
451 vernum = sysconfig.get_config_var('py_version_nodot')
452 if self.static:
453 libpath = Path('libs') / 'libpython{}.a'.format(vernum)
454 else:
455 comp = self.get_compiler()
456 if comp.id == "gcc":
457 libpath = 'python{}.dll'.format(vernum)
458 else:
459 libpath = Path('libs') / 'python{}.lib'.format(vernum)
460 lib = Path(sysconfig.get_config_var('base')) / libpath
461 elif pyplat == 'mingw':
462 if self.static:
463 libname = sysconfig.get_config_var('LIBRARY')
464 else:
465 libname = sysconfig.get_config_var('LDLIBRARY')
466 lib = Path(sysconfig.get_config_var('LIBDIR')) / libname
467 if not lib.exists():
468 mlog.log('Could not find Python3 library {!r}'.format(str(lib)))
469 return None
470 return [str(lib)]
471
472 def _find_libpy3_windows(self, env):
473 '''
474 Find python3 libraries on Windows and also verify that the arch matches
475 what we are building for.
476 '''
477 pyarch = self.get_windows_python_arch()
478 if pyarch is None:
479 self.is_found = False
480 return
481 arch = detect_cpu_family(env.coredata.compilers)
482 if arch == 'x86':
483 arch = '32'
484 elif arch == 'x86_64':
485 arch = '64'
486 else:
487 # We can't cross-compile Python 3 dependencies on Windows yet
488 mlog.log('Unknown architecture {!r} for'.format(arch),
489 mlog.bold(self.name))
490 self.is_found = False
491 return
492 # Pyarch ends in '32' or '64'
493 if arch != pyarch:
494 mlog.log('Need', mlog.bold(self.name), 'for {}-bit, but '
495 'found {}-bit'.format(arch, pyarch))
496 self.is_found = False
497 return
498 # This can fail if the library is not found
499 largs = self.get_windows_link_args()
500 if largs is None:
501 self.is_found = False
502 return
503 self.link_args = largs
504 # Compile args
505 inc = sysconfig.get_path('include')
506 platinc = sysconfig.get_path('platinclude')
507 self.compile_args = ['-I' + inc]
508 if inc != platinc:
509 self.compile_args.append('-I' + platinc)
510 self.version = sysconfig.get_config_var('py_version')
511 self.is_found = True
512
513 @staticmethod
514 def get_methods():
515 if mesonlib.is_windows():
516 return [DependencyMethods.PKGCONFIG, DependencyMethods.SYSCONFIG]
517 elif mesonlib.is_osx():
518 return [DependencyMethods.PKGCONFIG, DependencyMethods.EXTRAFRAMEWORK]
519 else:
520 return [DependencyMethods.PKGCONFIG]
521
522 def log_tried(self):
523 return 'sysconfig'
524
525 class PcapDependency(ExternalDependency):
526
527 def __init__(self, environment, kwargs):
528 super().__init__('pcap', environment, None, kwargs)
529
530 @classmethod
531 def _factory(cls, environment, kwargs):
532 methods = cls._process_method_kw(kwargs)
533 candidates = []
534
535 if DependencyMethods.PKGCONFIG in methods:
536 candidates.append(functools.partial(PkgConfigDependency, 'pcap', environment, kwargs))
537
538 if DependencyMethods.CONFIG_TOOL in methods:
539 candidates.append(functools.partial(ConfigToolDependency.factory,
540 'pcap', environment, None,
541 kwargs, ['pcap-config'],
542 'pcap-config',
543 PcapDependency.tool_finish_init))
544
545 return candidates
546
547 @staticmethod
548 def tool_finish_init(ctdep):
549 ctdep.compile_args = ctdep.get_config_value(['--cflags'], 'compile_args')
550 ctdep.link_args = ctdep.get_config_value(['--libs'], 'link_args')
551 ctdep.version = PcapDependency.get_pcap_lib_version(ctdep)
552
553 @staticmethod
554 def get_methods():
555 return [DependencyMethods.PKGCONFIG, DependencyMethods.CONFIG_TOOL]
556
557 @staticmethod
558 def get_pcap_lib_version(ctdep):
559 # Since we seem to need to run a program to discover the pcap version,
560 # we can't do that when cross-compiling
561 if ctdep.want_cross:
562 return None
563
564 v = ctdep.clib_compiler.get_return_value('pcap_lib_version', 'string',
565 '#include <pcap.h>', ctdep.env, [], [ctdep])
566 v = re.sub(r'libpcap version ', '', v)
567 v = re.sub(r' -- Apple version.*$', '', v)
568 return v
569
570
571 class CupsDependency(ExternalDependency):
572 def __init__(self, environment, kwargs):
573 super().__init__('cups', environment, None, kwargs)
574
575 @classmethod
576 def _factory(cls, environment, kwargs):
577 methods = cls._process_method_kw(kwargs)
578 candidates = []
579
580 if DependencyMethods.PKGCONFIG in methods:
581 candidates.append(functools.partial(PkgConfigDependency, 'cups', environment, kwargs))
582
583 if DependencyMethods.CONFIG_TOOL in methods:
584 candidates.append(functools.partial(ConfigToolDependency.factory,
585 'cups', environment, None,
586 kwargs, ['cups-config'],
587 'cups-config', CupsDependency.tool_finish_init))
588
589 if DependencyMethods.EXTRAFRAMEWORK in methods:
590 if mesonlib.is_osx():
591 candidates.append(functools.partial(
592 ExtraFrameworkDependency, 'cups', False, None, environment,
593 kwargs.get('language', None), kwargs))
594
595 if DependencyMethods.CMAKE in methods:
596 candidates.append(functools.partial(CMakeDependency, 'Cups', environment, kwargs))
597
598 return candidates
599
600 @staticmethod
601 def tool_finish_init(ctdep):
602 ctdep.compile_args = ctdep.get_config_value(['--cflags'], 'compile_args')
603 ctdep.link_args = ctdep.get_config_value(['--ldflags', '--libs'], 'link_args')
604
605 @staticmethod
606 def get_methods():
607 if mesonlib.is_osx():
608 return [DependencyMethods.PKGCONFIG, DependencyMethods.CONFIG_TOOL, DependencyMethods.EXTRAFRAMEWORK, DependencyMethods.CMAKE]
609 else:
610 return [DependencyMethods.PKGCONFIG, DependencyMethods.CONFIG_TOOL, DependencyMethods.CMAKE]
611
612
613 class LibWmfDependency(ExternalDependency):
614 def __init__(self, environment, kwargs):
615 super().__init__('libwmf', environment, None, kwargs)
616
617 @classmethod
618 def _factory(cls, environment, kwargs):
619 methods = cls._process_method_kw(kwargs)
620 candidates = []
621
622 if DependencyMethods.PKGCONFIG in methods:
623 candidates.append(functools.partial(PkgConfigDependency, 'libwmf', environment, kwargs))
624
625 if DependencyMethods.CONFIG_TOOL in methods:
626 candidates.append(functools.partial(ConfigToolDependency.factory,
627 'libwmf', environment, None, kwargs, ['libwmf-config'], 'libwmf-config', LibWmfDependency.tool_finish_init))
628
629 return candidates
630
631 @staticmethod
632 def tool_finish_init(ctdep):
633 ctdep.compile_args = ctdep.get_config_value(['--cflags'], 'compile_args')
634 ctdep.link_args = ctdep.get_config_value(['--libs'], 'link_args')
635
636 @staticmethod
637 def get_methods():
638 return [DependencyMethods.PKGCONFIG, DependencyMethods.CONFIG_TOOL]
639
640
641 class LibGCryptDependency(ExternalDependency):
642 def __init__(self, environment, kwargs):
643 super().__init__('libgcrypt', environment, None, kwargs)
644
645 @classmethod
646 def _factory(cls, environment, kwargs):
647 methods = cls._process_method_kw(kwargs)
648 candidates = []
649
650 if DependencyMethods.PKGCONFIG in methods:
651 candidates.append(functools.partial(PkgConfigDependency, 'libgcrypt', environment, kwargs))
652
653 if DependencyMethods.CONFIG_TOOL in methods:
654 candidates.append(functools.partial(ConfigToolDependency.factory,
655 'libgcrypt', environment, None, kwargs, ['libgcrypt-config'],
656 'libgcrypt-config',
657 LibGCryptDependency.tool_finish_init))
658
659 return candidates
660
661 @staticmethod
662 def tool_finish_init(ctdep):
663 ctdep.compile_args = ctdep.get_config_value(['--cflags'], 'compile_args')
664 ctdep.link_args = ctdep.get_config_value(['--libs'], 'link_args')
665 ctdep.version = ctdep.get_config_value(['--version'], 'version')[0]
666
667 @staticmethod
668 def get_methods():
669 return [DependencyMethods.PKGCONFIG, DependencyMethods.CONFIG_TOOL]
670
[end of mesonbuild/dependencies/misc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
mesonbuild/meson
|
44dd5535f056922294867bac2eb07b57f21bede6
|
OpenMP arguments added when OpenMP is not present
It turns out that we assume a compiler always has OpenMP support whenever it could have OpenMP support, which is a bad assumption. If GCC is compiled without OpenMP support we will still introduce `-fopenmp` into the compiler's command-line arguments, which it does not recognize and will reject.
I have some WIP patches for this already.
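To make the failure mode concrete, here is a minimal sketch (not the patch applied below) of how detection could verify real OpenMP support: ask the preprocessor for `_OPENMP` with the compiler's own OpenMP flags on the command line, so a toolchain that rejects those flags fails at detection time instead of at build time. The `get_define` and `openmp_flags` names follow the mesonbuild compiler interface used elsewhere in this file.

```python
# Minimal sketch only; assumes a mesonbuild-style compiler object exposing
# openmp_flags() and get_define(), as used by OpenMPDependency above.
def probe_openmp(compiler, env):
    flags = compiler.openmp_flags()  # e.g. ['-fopenmp'] for GCC
    try:
        # Evaluate _OPENMP *with* the OpenMP flags applied, so a compiler
        # that does not actually accept them is rejected here.
        openmp_date = compiler.get_define('_OPENMP', '', env, flags, [])
    except Exception:  # mesonbuild raises EnvironmentException on failure
        return None, []
    if not openmp_date:
        return None, []
    return openmp_date, flags  # the date macro maps to an OpenMP version
```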
|
2019-03-18T23:46:35Z
|
<patch>
diff --git a/mesonbuild/backend/backends.py b/mesonbuild/backend/backends.py
--- a/mesonbuild/backend/backends.py
+++ b/mesonbuild/backend/backends.py
@@ -641,8 +641,6 @@ def generate_basic_compiler_args(self, target, compiler, no_warn_args=False):
# pkg-config puts the thread flags itself via `Cflags:`
if dep.need_threads():
commands += compiler.thread_flags(self.environment)
- elif dep.need_openmp():
- commands += compiler.openmp_flags()
# Fortran requires extra include directives.
if compiler.language == 'fortran':
for lt in target.link_targets:
diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py
--- a/mesonbuild/backend/ninjabackend.py
+++ b/mesonbuild/backend/ninjabackend.py
@@ -2494,7 +2494,6 @@ def generate_link(self, target, outfile, outname, obj_list, linker, extra_args=[
# For 'automagic' deps: Boost and GTest. Also dependency('threads').
# pkg-config puts the thread flags itself via `Cflags:`
need_threads = False
- need_openmp = False
commands += target.link_args
# External deps must be last because target link libraries may depend on them.
@@ -2503,15 +2502,11 @@ def generate_link(self, target, outfile, outname, obj_list, linker, extra_args=[
# https://github.com/mesonbuild/meson/issues/1718
commands.extend_preserving_lflags(dep.get_link_args())
need_threads |= dep.need_threads()
- need_openmp |= dep.need_openmp()
for d in target.get_dependencies():
if isinstance(d, build.StaticLibrary):
for dep in d.get_external_deps():
need_threads |= dep.need_threads()
- need_openmp |= dep.need_openmp()
commands.extend_preserving_lflags(dep.get_link_args())
- if need_openmp:
- commands += linker.openmp_flags()
if need_threads:
commands += linker.thread_link_flags(self.environment)
diff --git a/mesonbuild/compilers/c.py b/mesonbuild/compilers/c.py
--- a/mesonbuild/compilers/c.py
+++ b/mesonbuild/compilers/c.py
@@ -410,8 +410,6 @@ def _get_compiler_check_args(self, env, extra_args, dependencies, mode='compile'
args += d.get_compile_args()
if d.need_threads():
args += self.thread_flags(env)
- elif d.need_openmp():
- args += self.openmp_flags()
if mode == 'link':
# Add link flags needed to find dependencies
args += d.get_link_args()
diff --git a/mesonbuild/dependencies/base.py b/mesonbuild/dependencies/base.py
--- a/mesonbuild/dependencies/base.py
+++ b/mesonbuild/dependencies/base.py
@@ -152,9 +152,6 @@ def get_version(self):
def get_exe_args(self, compiler):
return []
- def need_openmp(self):
- return False
-
def need_threads(self):
return False
diff --git a/mesonbuild/dependencies/misc.py b/mesonbuild/dependencies/misc.py
--- a/mesonbuild/dependencies/misc.py
+++ b/mesonbuild/dependencies/misc.py
@@ -363,7 +363,8 @@ def __init__(self, environment, kwargs):
super().__init__('openmp', environment, language, kwargs)
self.is_found = False
try:
- openmp_date = self.clib_compiler.get_define('_OPENMP', '', self.env, [], [self])
+ openmp_date = self.clib_compiler.get_define(
+ '_OPENMP', '', self.env, self.clib_compiler.openmp_flags(), [self])
except mesonlib.EnvironmentException as e:
mlog.debug('OpenMP support not available in the compiler')
mlog.debug(e)
@@ -373,12 +374,10 @@ def __init__(self, environment, kwargs):
self.version = self.VERSIONS[openmp_date]
if self.clib_compiler.has_header('omp.h', '', self.env, dependencies=[self]):
self.is_found = True
+ self.compile_args = self.link_args = self.clib_compiler.openmp_flags()
else:
mlog.log(mlog.yellow('WARNING:'), 'OpenMP found but omp.h missing.')
- def need_openmp(self) -> bool:
- return True
-
class ThreadDependency(ExternalDependency):
def __init__(self, environment, kwargs):
</patch>
|
[]
|
[]
| ||||
googleapis__google-cloud-python-9991
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TablesClient: predict does not work with Python list or dictionary
The problematic line is at: https://github.com/googleapis/google-cloud-python/blob/bd5318aa5340fff19e38156b77d64d7db0e0e1c3/automl/google/cloud/automl_v1beta1/tables/tables_client.py#L402-L405
The expected type of the array/struct is google.protobuf.ListValue/google.protobuf.StructValue, not Python list/dictionary.
The failure message is:
```
*** TypeError: Parameter to MergeFrom() must be instance of same class: expected google.protobuf.ListValue got list.
```
Preferably we should only support Python list/dictionary. For now, for backwards compatibility, we should support list, dictionary, google.protobuf.ListValue, and google.protobuf.StructValue.
</issue>
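Before the code listing, here is a minimal sketch of the kind of conversion the issue asks for, using only the `google.protobuf.struct_pb2` well-known types. The helper name `to_proto_value` is hypothetical and this is not the library's actual fix; it only illustrates accepting plain Python containers as well as the protobuf message types.

```python
from google.protobuf import struct_pb2


def to_proto_value(item):
    """Hypothetical helper: wrap a plain Python value in a protobuf Value."""
    value = struct_pb2.Value()
    if isinstance(item, struct_pb2.ListValue):
        value.list_value.CopyFrom(item)      # pass protobuf types through
    elif isinstance(item, struct_pb2.Struct):
        value.struct_value.CopyFrom(item)
    elif isinstance(item, bool):             # check bool before int/float
        value.bool_value = item
    elif isinstance(item, (int, float)):
        value.number_value = item
    elif isinstance(item, str):
        value.string_value = item
    elif isinstance(item, dict):
        value.struct_value.update(item)      # Struct.update accepts a plain dict
    elif isinstance(item, (list, tuple)):
        value.list_value.extend(item)        # ListValue.extend accepts Python values
    elif item is None:
        value.null_value = struct_pb2.NULL_VALUE
    else:
        raise ValueError("Unsupported value type: %r" % type(item))
    return value
```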
<code>
[start of README.rst]
1 Google Cloud Python Client
2 ==========================
3
4 Python idiomatic clients for `Google Cloud Platform`_ services.
5
6 .. _Google Cloud Platform: https://cloud.google.com/
7
8 **Heads up**! These libraries are supported on App Engine standard's `Python 3 runtime`_ but are *not* supported on App Engine's `Python 2 runtime`_.
9
10 .. _Python 3 runtime: https://cloud.google.com/appengine/docs/standard/python3
11 .. _Python 2 runtime: https://cloud.google.com/appengine/docs/standard/python
12
13 General Availability
14 --------------------
15
16 **GA** (general availability) indicates that the client library for a
17 particular service is stable, and that the code surface will not change in
18 backwards-incompatible ways unless either absolutely necessary (e.g. because
19 of critical security issues) or with an extensive deprecation period.
20 Issues and requests against GA libraries are addressed with the highest
21 priority.
22
23 .. note::
24
25 Sub-components of GA libraries explicitly marked as beta in the
26 import path (e.g. ``google.cloud.language_v1beta2``) should be considered
27 to be beta.
28
29 The following client libraries have **GA** support:
30
31 - `Google BigQuery`_ (`BigQuery README`_, `BigQuery Documentation`_)
32 - `Google Cloud Bigtable`_ (`Bigtable README`_, `Bigtable Documentation`_)
33 - `Google Cloud Datastore`_ (`Datastore README`_, `Datastore Documentation`_)
34 - `Google Cloud KMS`_ (`KMS README`_, `KMS Documentation`_)
35 - `Google Cloud Natural Language`_ (`Natural Language README`_, `Natural Language Documentation`_)
36 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_, `Pub/Sub Documentation`_)
37 - `Google Cloud Scheduler`_ (`Scheduler README`_, `Scheduler Documentation`_)
38 - `Google Cloud Spanner`_ (`Spanner README`_, `Spanner Documentation`_)
39 - `Google Cloud Speech to Text`_ (`Speech to Text README`_, `Speech to Text Documentation`_)
40 - `Google Cloud Storage`_ (`Storage README`_, `Storage Documentation`_)
41 - `Google Cloud Tasks`_ (`Tasks README`_, `Tasks Documentation`_)
42 - `Google Cloud Translation`_ (`Translation README`_, `Translation Documentation`_)
43 - `Stackdriver Logging`_ (`Logging README`_, `Logging Documentation`_)
44
45 .. _Google BigQuery: https://pypi.org/project/google-cloud-bigquery/
46 .. _BigQuery README: https://github.com/googleapis/google-cloud-python/tree/master/bigquery
47 .. _BigQuery Documentation: https://googleapis.dev/python/bigquery/latest
48
49 .. _Google Cloud Bigtable: https://pypi.org/project/google-cloud-bigtable/
50 .. _Bigtable README: https://github.com/googleapis/google-cloud-python/tree/master/bigtable
51 .. _Bigtable Documentation: https://googleapis.dev/python/bigtable/latest
52
53 .. _Google Cloud Datastore: https://pypi.org/project/google-cloud-datastore/
54 .. _Datastore README: https://github.com/googleapis/google-cloud-python/tree/master/datastore
55 .. _Datastore Documentation: https://googleapis.dev/python/datastore/latest
56
57 .. _Google Cloud KMS: https://pypi.org/project/google-cloud-kms/
58 .. _KMS README: https://github.com/googleapis/google-cloud-python/tree/master/kms
59 .. _KMS Documentation: https://googleapis.dev/python/cloudkms/latest
60
61 .. _Google Cloud Natural Language: https://pypi.org/project/google-cloud-language/
62 .. _Natural Language README: https://github.com/googleapis/google-cloud-python/tree/master/language
63 .. _Natural Language Documentation: https://googleapis.dev/python/language/latest
64
65 .. _Google Cloud Pub/Sub: https://pypi.org/project/google-cloud-pubsub/
66 .. _Pub/Sub README: https://github.com/googleapis/google-cloud-python/tree/master/pubsub
67 .. _Pub/Sub Documentation: https://googleapis.dev/python/pubsub/latest
68
69 .. _Google Cloud Spanner: https://pypi.org/project/google-cloud-spanner
70 .. _Spanner README: https://github.com/googleapis/google-cloud-python/tree/master/spanner
71 .. _Spanner Documentation: https://googleapis.dev/python/spanner/latest
72
73 .. _Google Cloud Speech to Text: https://pypi.org/project/google-cloud-speech/
74 .. _Speech to Text README: https://github.com/googleapis/google-cloud-python/tree/master/speech
75 .. _Speech to Text Documentation: https://googleapis.dev/python/speech/latest
76
77 .. _Google Cloud Storage: https://pypi.org/project/google-cloud-storage/
78 .. _Storage README: https://github.com/googleapis/google-cloud-python/tree/master/storage
79 .. _Storage Documentation: https://googleapis.dev/python/storage/latest
80
81 .. _Google Cloud Tasks: https://pypi.org/project/google-cloud-tasks/
82 .. _Tasks README: https://github.com/googleapis/google-cloud-python/tree/master/tasks
83 .. _Tasks Documentation: https://googleapis.dev/python/cloudtasks/latest
84
85 .. _Google Cloud Translation: https://pypi.org/project/google-cloud-translate/
86 .. _Translation README: https://github.com/googleapis/google-cloud-python/tree/master/translate
87 .. _Translation Documentation: https://googleapis.dev/python/translation/latest
88
89 .. _Google Cloud Scheduler: https://pypi.org/project/google-cloud-scheduler/
90 .. _Scheduler README: https://github.com/googleapis/google-cloud-python/tree/master/scheduler
91 .. _Scheduler Documentation: https://googleapis.dev/python/cloudscheduler/latest
92
93 .. _Stackdriver Logging: https://pypi.org/project/google-cloud-logging/
94 .. _Logging README: https://github.com/googleapis/google-cloud-python/tree/master/logging
95 .. _Logging Documentation: https://googleapis.dev/python/logging/latest
96
97 Beta Support
98 ------------
99
100 **Beta** indicates that the client library for a particular service is
101 mostly stable and is being prepared for release. Issues and requests
102 against beta libraries are addressed with a higher priority.
103
104 The following client libraries have **beta** support:
105
106 - `Google Cloud Billing Budgets`_ (`Billing Budgets README`_, `Billing Budgets Documentation`_)
107 - `Google Cloud Firestore`_ (`Firestore README`_, `Firestore Documentation`_)
108 - `Google Cloud Video Intelligence`_ (`Video Intelligence README`_, `Video Intelligence Documentation`_)
109 - `Google Cloud Vision`_ (`Vision README`_, `Vision Documentation`_)
110
111 .. _Google Cloud Billing Budgets: https://pypi.org/project/google-cloud-billing-budgets/
112 .. _Billing Budgets README: https://github.com/googleapis/google-cloud-python/tree/master/billingbudgets
113 .. _Billing Budgets Documentation: https://googleapis.dev/python/billingbudgets/latest
114
115 .. _Google Cloud Firestore: https://pypi.org/project/google-cloud-firestore/
116 .. _Firestore README: https://github.com/googleapis/google-cloud-python/tree/master/firestore
117 .. _Firestore Documentation: https://googleapis.dev/python/firestore/latest
118
119 .. _Google Cloud Video Intelligence: https://pypi.org/project/google-cloud-videointelligence
120 .. _Video Intelligence README: https://github.com/googleapis/google-cloud-python/tree/master/videointelligence
121 .. _Video Intelligence Documentation: https://googleapis.dev/python/videointelligence/latest
122
123 .. _Google Cloud Vision: https://pypi.org/project/google-cloud-vision/
124 .. _Vision README: https://github.com/googleapis/google-cloud-python/tree/master/vision
125 .. _Vision Documentation: https://googleapis.dev/python/vision/latest
126
127
128 Alpha Support
129 -------------
130
131 **Alpha** indicates that the client library for a particular service is
132 still a work-in-progress and is more likely to get backwards-incompatible
133 updates. See `versioning`_ for more details.
134
135 The following client libraries have **alpha** support:
136
137 - `Google Cloud Asset`_ (`Asset README`_, `Asset Documentation`_)
138 - `Google Cloud AutoML`_ (`AutoML README`_, `AutoML Documentation`_)
139 - `Google BigQuery Data Transfer`_ (`BigQuery Data Transfer README`_, `BigQuery Documentation`_)
140 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_, `HappyBase Documentation`_)
141 - `Google Cloud Build`_ (`Cloud Build README`_, `Cloud Build Documentation`_)
142 - `Google Cloud Container`_ (`Container README`_, `Container Documentation`_)
143 - `Google Cloud Container Analysis`_ (`Container Analysis README`_, `Container Analysis Documentation`_)
144 - `Google Cloud Dataproc`_ (`Dataproc README`_, `Dataproc Documentation`_)
145 - `Google Cloud DLP`_ (`DLP README`_, `DLP Documentation`_)
146 - `Google Cloud DNS`_ (`DNS README`_, `DNS Documentation`_)
147 - `Google Cloud IoT`_ (`IoT README`_, `IoT Documentation`_)
148 - `Google Cloud Memorystore for Redis`_ (`Redis README`_, `Redis Documentation`_)
149 - `Google Cloud Recommender`_ (`Recommender README`_, `Recommender Documentation`_)
150 - `Google Cloud Resource Manager`_ (`Resource Manager README`_, `Resource Manager Documentation`_)
151 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_, `Runtime Config Documentation`_)
152 - `Google Cloud Security Scanner`_ (`Security Scanner README`_ , `Security Scanner Documentation`_)
153 - `Google Cloud Trace`_ (`Trace README`_, `Trace Documentation`_)
154 - `Google Cloud Text-to-Speech`_ (`Text-to-Speech README`_, `Text-to-Speech Documentation`_)
155 - `Grafeas`_ (`Grafeas README`_, `Grafeas Documentation`_)
156 - `Stackdriver Error Reporting`_ (`Error Reporting README`_, `Error Reporting Documentation`_)
157 - `Stackdriver Monitoring`_ (`Monitoring README`_, `Monitoring Documentation`_)
158
159 .. _Google Cloud Asset: https://pypi.org/project/google-cloud-asset/
160 .. _Asset README: https://github.com/googleapis/google-cloud-python/blob/master/asset
161 .. _Asset Documentation: https://googleapis.dev/python/cloudasset/latest
162
163 .. _Google Cloud AutoML: https://pypi.org/project/google-cloud-automl/
164 .. _AutoML README: https://github.com/googleapis/google-cloud-python/blob/master/automl
165 .. _AutoML Documentation: https://googleapis.dev/python/automl/latest
166
167 .. _Google BigQuery Data Transfer: https://pypi.org/project/google-cloud-bigquery-datatransfer/
168 .. _BigQuery Data Transfer README: https://github.com/googleapis/google-cloud-python/tree/master/bigquery_datatransfer
169 .. _BigQuery Documentation: https://googleapis.dev/python/bigquery/latest
170
171 .. _Google Cloud Bigtable - HappyBase: https://pypi.org/project/google-cloud-happybase/
172 .. _HappyBase README: https://github.com/googleapis/google-cloud-python-happybase
173 .. _HappyBase Documentation: https://google-cloud-python-happybase.readthedocs.io/en/latest/
174
175 .. _Google Cloud Build: https://pypi.org/project/google-cloud-build/
176 .. _Cloud Build README: https://github.com/googleapis/google-cloud-python/cloudbuild
177 .. _Cloud Build Documentation: https://googleapis.dev/python/cloudbuild/latest
178
179 .. _Google Cloud Container: https://pypi.org/project/google-cloud-container/
180 .. _Container README: https://github.com/googleapis/google-cloud-python/tree/master/container
181 .. _Container Documentation: https://googleapis.dev/python/container/latest
182
183 .. _Google Cloud Container Analysis: https://pypi.org/project/google-cloud-containeranalysis/
184 .. _Container Analysis README: https://github.com/googleapis/google-cloud-python/tree/master/containeranalysis
185 .. _Container Analysis Documentation: https://googleapis.dev/python/containeranalysis/latest
186
187 .. _Google Cloud Dataproc: https://pypi.org/project/google-cloud-dataproc/
188 .. _Dataproc README: https://github.com/googleapis/google-cloud-python/tree/master/dataproc
189 .. _Dataproc Documentation: https://googleapis.dev/python/dataproc/latest
190
191 .. _Google Cloud DLP: https://pypi.org/project/google-cloud-dlp/
192 .. _DLP README: https://github.com/googleapis/google-cloud-python/tree/master/dlp
193 .. _DLP Documentation: https://googleapis.dev/python/dlp/latest
194
195 .. _Google Cloud DNS: https://pypi.org/project/google-cloud-dns/
196 .. _DNS README: https://github.com/googleapis/google-cloud-python/tree/master/dns
197 .. _DNS Documentation: https://googleapis.dev/python/dns/latest
198
199 .. _Google Cloud IoT: https://pypi.org/project/google-cloud-iot/
200 .. _IoT README: https://github.com/googleapis/google-cloud-python/tree/master/iot
201 .. _IoT Documentation: https://googleapis.dev/python/cloudiot/latest
202
203 .. _Google Cloud Memorystore for Redis: https://pypi.org/project/google-cloud-redis/
204 .. _Redis README: https://github.com/googleapis/google-cloud-python/tree/master/redis
205 .. _Redis Documentation: https://googleapis.dev/python/redis/latest
206
207 .. _Google Cloud Recommender: https://pypi.org/project/google-cloud-recommender/
208 .. _Recommender README: https://github.com/googleapis/google-cloud-python/tree/master/recommender
209 .. _Recommender Documentation: https://googleapis.dev/python/recommender/latest
210
211 .. _Google Cloud Resource Manager: https://pypi.org/project/google-cloud-resource-manager/
212 .. _Resource Manager README: https://github.com/googleapis/google-cloud-python/tree/master/resource_manager
213 .. _Resource Manager Documentation: https://googleapis.dev/python/cloudresourcemanager/latest
214
215 .. _Google Cloud Runtime Configuration: https://pypi.org/project/google-cloud-runtimeconfig/
216 .. _Runtime Config README: https://github.com/googleapis/google-cloud-python/tree/master/runtimeconfig
217 .. _Runtime Config Documentation: https://googleapis.dev/python/runtimeconfig/latest
218
219 .. _Google Cloud Security Scanner: https://pypi.org/project/google-cloud-websecurityscanner/
220 .. _Security Scanner README: https://github.com/googleapis/google-cloud-python/blob/master/websecurityscanner
221 .. _Security Scanner Documentation: https://googleapis.dev/python/websecurityscanner/latest
222
223 .. _Google Cloud Text-to-Speech: https://pypi.org/project/google-cloud-texttospeech/
224 .. _Text-to-Speech README: https://github.com/googleapis/google-cloud-python/tree/master/texttospeech
225 .. _Text-to-Speech Documentation: https://googleapis.dev/python/texttospeech/latest
226
227 .. _Google Cloud Trace: https://pypi.org/project/google-cloud-trace/
228 .. _Trace README: https://github.com/googleapis/google-cloud-python/tree/master/trace
229 .. _Trace Documentation: https://googleapis.dev/python/cloudtrace/latest
230
231 .. _Grafeas: https://pypi.org/project/grafeas/
232 .. _Grafeas README: https://github.com/googleapis/google-cloud-python/tree/master/grafeas
233 .. _Grafeas Documentation: https://googleapis.dev/python/grafeas/latest
234
235 .. _Stackdriver Error Reporting: https://pypi.org/project/google-cloud-error-reporting/
236 .. _Error Reporting README: https://github.com/googleapis/google-cloud-python/tree/master/error_reporting
237 .. _Error Reporting Documentation: https://googleapis.dev/python/clouderrorreporting/latest
238
239 .. _Stackdriver Monitoring: https://pypi.org/project/google-cloud-monitoring/
240 .. _Monitoring README: https://github.com/googleapis/google-cloud-python/tree/master/monitoring
241 .. _Monitoring Documentation: https://googleapis.dev/python/monitoring/latest
242
243 .. _versioning: https://github.com/googleapis/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning
244
245 If you need support for other Google APIs, check out the
246 `Google APIs Python Client library`_.
247
248 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client
249
250
251 Example Applications
252 --------------------
253
254 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
255 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage
256
257 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python
258 .. _tutorial: https://cloud.google.com/python
259 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo
260
261
262 Authentication
263 --------------
264
265 With ``google-cloud-python`` we try to make authentication as painless as possible.
266 Check out the `Authentication section`_ in our documentation to learn more.
267 You may also find the `authentication document`_ shared by all the
268 ``google-cloud-*`` libraries to be helpful.
269
270 .. _Authentication section: https://googleapis.dev/python/google-api-core/latest/auth.html
271 .. _authentication document: https://github.com/googleapis/google-cloud-common/tree/master/authentication
272
273 Contributing
274 ------------
275
276 Contributions to this library are always welcome and highly encouraged.
277
278 See the `CONTRIBUTING doc`_ for more information on how to get started.
279
280 .. _CONTRIBUTING doc: https://github.com/googleapis/google-cloud-python/blob/master/CONTRIBUTING.rst
281
282
283 Community
284 ---------
285
286 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python``
287 channel, click here to `get an invitation`_.
288
289 .. _Slack: https://googlecloud-community.slack.com
290 .. _get an invitation: https://gcp-slack.appspot.com/
291
292
293 License
294 -------
295
296 Apache 2.0 - See `the LICENSE`_ for more information.
297
298 .. _the LICENSE: https://github.com/googleapis/google-cloud-python/blob/master/LICENSE
299
[end of README.rst]
[start of api_core/google/api_core/protobuf_helpers.py]
1 # Copyright 2017 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for :mod:`protobuf`."""
16
17 import collections
18 import copy
19 import inspect
20
21 from google.protobuf import field_mask_pb2
22 from google.protobuf import message
23 from google.protobuf import wrappers_pb2
24
25 try:
26 from collections import abc as collections_abc
27 except ImportError: # Python 2.7
28 import collections as collections_abc
29
30
31 _SENTINEL = object()
32 _WRAPPER_TYPES = (
33 wrappers_pb2.BoolValue,
34 wrappers_pb2.BytesValue,
35 wrappers_pb2.DoubleValue,
36 wrappers_pb2.FloatValue,
37 wrappers_pb2.Int32Value,
38 wrappers_pb2.Int64Value,
39 wrappers_pb2.StringValue,
40 wrappers_pb2.UInt32Value,
41 wrappers_pb2.UInt64Value,
42 )
43
44
45 def from_any_pb(pb_type, any_pb):
46 """Converts an ``Any`` protobuf to the specified message type.
47
48 Args:
49 pb_type (type): the type of the message that any_pb stores an instance
50 of.
51 any_pb (google.protobuf.any_pb2.Any): the object to be converted.
52
53 Returns:
54 pb_type: An instance of the pb_type message.
55
56 Raises:
57 TypeError: if the message could not be converted.
58 """
59 msg = pb_type()
60
61 # Unwrap proto-plus wrapped messages.
62 if callable(getattr(pb_type, "pb", None)):
63 msg_pb = pb_type.pb(msg)
64 else:
65 msg_pb = msg
66
67 # Unpack the Any object and populate the protobuf message instance.
68 if not any_pb.Unpack(msg_pb):
69 raise TypeError(
70 "Could not convert {} to {}".format(
71 any_pb.__class__.__name__, pb_type.__name__
72 )
73 )
74
75 # Done; return the message.
76 return msg
77
78
79 def check_oneof(**kwargs):
80 """Raise ValueError if more than one keyword argument is not ``None``.
81
82 Args:
83 kwargs (dict): The keyword arguments sent to the function.
84
85 Raises:
86 ValueError: If more than one entry in ``kwargs`` is not ``None``.
87 """
88 # Sanity check: If no keyword arguments were sent, this is fine.
89 if not kwargs:
90 return
91
92 not_nones = [val for val in kwargs.values() if val is not None]
93 if len(not_nones) > 1:
94 raise ValueError(
95 "Only one of {fields} should be set.".format(
96 fields=", ".join(sorted(kwargs.keys()))
97 )
98 )
99
100
101 def get_messages(module):
102 """Discovers all protobuf Message classes in a given import module.
103
104 Args:
105 module (module): A Python module; :func:`dir` will be run against this
106 module to find Message subclasses.
107
108 Returns:
109 dict[str, google.protobuf.message.Message]: A dictionary with the
110 Message class names as keys, and the Message subclasses themselves
111 as values.
112 """
113 answer = collections.OrderedDict()
114 for name in dir(module):
115 candidate = getattr(module, name)
116 if inspect.isclass(candidate) and issubclass(candidate, message.Message):
117 answer[name] = candidate
118 return answer
119
120
121 def _resolve_subkeys(key, separator="."):
122 """Resolve a potentially nested key.
123
124 If the key contains the ``separator`` (e.g. ``.``) then the key will be
125 split on the first instance of the subkey::
126
127 >>> _resolve_subkeys('a.b.c')
128 ('a', 'b.c')
129 >>> _resolve_subkeys('d|e|f', separator='|')
130 ('d', 'e|f')
131
132 If not, the subkey will be :data:`None`::
133
134 >>> _resolve_subkeys('foo')
135 ('foo', None)
136
137 Args:
138 key (str): A string that may or may not contain the separator.
139 separator (str): The namespace separator. Defaults to `.`.
140
141 Returns:
142 Tuple[str, str]: The key and subkey(s).
143 """
144 parts = key.split(separator, 1)
145
146 if len(parts) > 1:
147 return parts
148 else:
149 return parts[0], None
150
151
152 def get(msg_or_dict, key, default=_SENTINEL):
153 """Retrieve a key's value from a protobuf Message or dictionary.
154
155 Args:
156         msg_or_dict (Union[~google.protobuf.message.Message, Mapping]): the
157 object.
158 key (str): The key to retrieve from the object.
159 default (Any): If the key is not present on the object, and a default
160 is set, returns that default instead. A type-appropriate falsy
161 default is generally recommended, as protobuf messages almost
162 always have default values for unset values and it is not always
163 possible to tell the difference between a falsy value and an
164 unset one. If no default is set then :class:`KeyError` will be
165 raised if the key is not present in the object.
166
167 Returns:
168 Any: The return value from the underlying Message or dict.
169
170 Raises:
171 KeyError: If the key is not found. Note that, for unset values,
172 messages and dictionaries may not have consistent behavior.
173 TypeError: If ``msg_or_dict`` is not a Message or Mapping.
174 """
175 # We may need to get a nested key. Resolve this.
176 key, subkey = _resolve_subkeys(key)
177
178 # Attempt to get the value from the two types of objects we know about.
179 # If we get something else, complain.
180 if isinstance(msg_or_dict, message.Message):
181 answer = getattr(msg_or_dict, key, default)
182 elif isinstance(msg_or_dict, collections_abc.Mapping):
183 answer = msg_or_dict.get(key, default)
184 else:
185 raise TypeError(
186 "get() expected a dict or protobuf message, got {!r}.".format(
187 type(msg_or_dict)
188 )
189 )
190
191 # If the object we got back is our sentinel, raise KeyError; this is
192 # a "not found" case.
193 if answer is _SENTINEL:
194 raise KeyError(key)
195
196 # If a subkey exists, call this method recursively against the answer.
197 if subkey is not None and answer is not default:
198 return get(answer, subkey, default=default)
199
200 return answer
201
202
203 def _set_field_on_message(msg, key, value):
204 """Set helper for protobuf Messages."""
205 # Attempt to set the value on the types of objects we know how to deal
206 # with.
207 if isinstance(value, (collections_abc.MutableSequence, tuple)):
208 # Clear the existing repeated protobuf message of any elements
209 # currently inside it.
210 while getattr(msg, key):
211 getattr(msg, key).pop()
212
213 # Write our new elements to the repeated field.
214 for item in value:
215 if isinstance(item, collections_abc.Mapping):
216 getattr(msg, key).add(**item)
217 else:
218 # protobuf's RepeatedCompositeContainer doesn't support
219 # append.
220 getattr(msg, key).extend([item])
221 elif isinstance(value, collections_abc.Mapping):
222 # Assign the dictionary values to the protobuf message.
223 for item_key, item_value in value.items():
224 set(getattr(msg, key), item_key, item_value)
225 elif isinstance(value, message.Message):
226 getattr(msg, key).CopyFrom(value)
227 else:
228 setattr(msg, key, value)
229
230
231 def set(msg_or_dict, key, value):
232 """Set a key's value on a protobuf Message or dictionary.
233
234 Args:
235 msg_or_dict (Union[~google.protobuf.message.Message, Mapping]): the
236 object.
237 key (str): The key to set.
238 value (Any): The value to set.
239
240 Raises:
241 TypeError: If ``msg_or_dict`` is not a Message or dictionary.
242 """
243 # Sanity check: Is our target object valid?
244 if not isinstance(msg_or_dict, (collections_abc.MutableMapping, message.Message)):
245 raise TypeError(
246 "set() expected a dict or protobuf message, got {!r}.".format(
247 type(msg_or_dict)
248 )
249 )
250
251 # We may be setting a nested key. Resolve this.
252 basekey, subkey = _resolve_subkeys(key)
253
254 # If a subkey exists, then get that object and call this method
255 # recursively against it using the subkey.
256 if subkey is not None:
257 if isinstance(msg_or_dict, collections_abc.MutableMapping):
258 msg_or_dict.setdefault(basekey, {})
259 set(get(msg_or_dict, basekey), subkey, value)
260 return
261
262 if isinstance(msg_or_dict, collections_abc.MutableMapping):
263 msg_or_dict[key] = value
264 else:
265 _set_field_on_message(msg_or_dict, key, value)
266
267
268 def setdefault(msg_or_dict, key, value):
269 """Set the key on a protobuf Message or dictionary to a given value if the
270 current value is falsy.
271
272 Because protobuf Messages do not distinguish between unset values and
273 falsy ones particularly well (by design), this method treats any falsy
274 value (e.g. 0, empty list) as a target to be overwritten, on both Messages
275 and dictionaries.
276
277 Args:
278 msg_or_dict (Union[~google.protobuf.message.Message, Mapping]): the
279 object.
280 key (str): The key on the object in question.
281 value (Any): The value to set.
282
283 Raises:
284 TypeError: If ``msg_or_dict`` is not a Message or dictionary.
285 """
286 if not get(msg_or_dict, key, default=None):
287 set(msg_or_dict, key, value)
288
289
290 def field_mask(original, modified):
291 """Create a field mask by comparing two messages.
292
293 Args:
294 original (~google.protobuf.message.Message): the original message.
295             If set to None, this field will be interpreted as an empty
296 message.
297 modified (~google.protobuf.message.Message): the modified message.
298             If set to None, this field will be interpreted as an empty
299 message.
300
301 Returns:
302 google.protobuf.field_mask_pb2.FieldMask: field mask that contains
303 the list of field names that have different values between the two
304 messages. If the messages are equivalent, then the field mask is empty.
305
306 Raises:
307 ValueError: If the ``original`` or ``modified`` are not the same type.
308 """
309 if original is None and modified is None:
310 return field_mask_pb2.FieldMask()
311
312 if original is None and modified is not None:
313 original = copy.deepcopy(modified)
314 original.Clear()
315
316 if modified is None and original is not None:
317 modified = copy.deepcopy(original)
318 modified.Clear()
319
320 if type(original) != type(modified):
321 raise ValueError(
322 "expected that both original and modified should be of the "
323 'same type, received "{!r}" and "{!r}".'.format(
324 type(original), type(modified)
325 )
326 )
327
328 return field_mask_pb2.FieldMask(paths=_field_mask_helper(original, modified))
329
330
331 def _field_mask_helper(original, modified, current=""):
332 answer = []
333
334 for name in original.DESCRIPTOR.fields_by_name:
335 field_path = _get_path(current, name)
336
337 original_val = getattr(original, name)
338 modified_val = getattr(modified, name)
339
340 if _is_message(original_val) or _is_message(modified_val):
341 if original_val != modified_val:
342 # Wrapper types do not need to include the .value part of the
343 # path.
344 if _is_wrapper(original_val) or _is_wrapper(modified_val):
345 answer.append(field_path)
346 elif not modified_val.ListFields():
347 answer.append(field_path)
348 else:
349 answer.extend(
350 _field_mask_helper(original_val, modified_val, field_path)
351 )
352 else:
353 if original_val != modified_val:
354 answer.append(field_path)
355
356 return answer
357
358
359 def _get_path(current, name):
360 if not current:
361 return name
362 return "%s.%s" % (current, name)
363
364
365 def _is_message(value):
366 return isinstance(value, message.Message)
367
368
369 def _is_wrapper(value):
370 return type(value) in _WRAPPER_TYPES
371
[end of api_core/google/api_core/protobuf_helpers.py]
[start of bigquery/google/cloud/bigquery/dbapi/cursor.py]
1 # Copyright 2017 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Cursor for the Google BigQuery DB-API."""
16
17 import collections
18
19 try:
20 from collections import abc as collections_abc
21 except ImportError: # Python 2.7
22 import collections as collections_abc
23
24 import six
25
26 from google.cloud.bigquery import job
27 from google.cloud.bigquery.dbapi import _helpers
28 from google.cloud.bigquery.dbapi import exceptions
29 import google.cloud.exceptions
30
31 # Per PEP 249: A 7-item sequence containing information describing one result
32 # column. The first two items (name and type_code) are mandatory, the other
33 # five are optional and are set to None if no meaningful values can be
34 # provided.
35 Column = collections.namedtuple(
36 "Column",
37 [
38 "name",
39 "type_code",
40 "display_size",
41 "internal_size",
42 "precision",
43 "scale",
44 "null_ok",
45 ],
46 )
47
48
49 class Cursor(object):
50 """DB-API Cursor to Google BigQuery.
51
52 Args:
53 connection (google.cloud.bigquery.dbapi.Connection):
54 A DB-API connection to Google BigQuery.
55 """
56
57 def __init__(self, connection):
58 self.connection = connection
59 self.description = None
60 # Per PEP 249: The attribute is -1 in case no .execute*() has been
61 # performed on the cursor or the rowcount of the last operation
62 # cannot be determined by the interface.
63 self.rowcount = -1
64 # Per PEP 249: The arraysize attribute defaults to 1, meaning to fetch
65 # a single row at a time. However, we deviate from that, and set the
66 # default to None, allowing the backend to automatically determine the
67 # most appropriate size.
68 self.arraysize = None
69 self._query_data = None
70 self._query_job = None
71
72 def close(self):
73 """No-op."""
74
75 def _set_description(self, schema):
76 """Set description from schema.
77
78 Args:
79 schema (Sequence[google.cloud.bigquery.schema.SchemaField]):
80 A description of fields in the schema.
81 """
82 if schema is None:
83 self.description = None
84 return
85
86 self.description = tuple(
87 [
88 Column(
89 name=field.name,
90 type_code=field.field_type,
91 display_size=None,
92 internal_size=None,
93 precision=None,
94 scale=None,
95 null_ok=field.is_nullable,
96 )
97 for field in schema
98 ]
99 )
100
101 def _set_rowcount(self, query_results):
102 """Set the rowcount from query results.
103
104 Normally, this sets rowcount to the number of rows returned by the
105 query, but if it was a DML statement, it sets rowcount to the number
106 of modified rows.
107
108 Args:
109 query_results (google.cloud.bigquery.query._QueryResults):
110 Results of a query.
111 """
112 total_rows = 0
113 num_dml_affected_rows = query_results.num_dml_affected_rows
114
115 if query_results.total_rows is not None and query_results.total_rows > 0:
116 total_rows = query_results.total_rows
117 if num_dml_affected_rows is not None and num_dml_affected_rows > 0:
118 total_rows = num_dml_affected_rows
119 self.rowcount = total_rows
120
121 def execute(self, operation, parameters=None, job_id=None, job_config=None):
122 """Prepare and execute a database operation.
123
124 .. note::
125 When setting query parameters, values which are "text"
126 (``unicode`` in Python2, ``str`` in Python3) will use
127 the 'STRING' BigQuery type. Values which are "bytes" (``str`` in
128 Python2, ``bytes`` in Python3), will use the 'BYTES' type.
129
130 A `~datetime.datetime` parameter without timezone information uses
131 the 'DATETIME' BigQuery type (example: Global Pi Day Celebration
132 March 14, 2017 at 1:59pm). A `~datetime.datetime` parameter with
133 timezone information uses the 'TIMESTAMP' BigQuery type (example:
134 a wedding on April 29, 2011 at 11am, British Summer Time).
135
136 For more information about BigQuery data types, see:
137 https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types
138
139 ``STRUCT``/``RECORD`` and ``REPEATED`` query parameters are not
140 yet supported. See:
141 https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3524
142
143 Args:
144 operation (str): A Google BigQuery query string.
145
146 parameters (Union[Mapping[str, Any], Sequence[Any]]):
147 (Optional) dictionary or sequence of parameter values.
148
149 job_id (str):
150 (Optional) The job_id to use. If not set, a job ID
151 is generated at random.
152
153 job_config (google.cloud.bigquery.job.QueryJobConfig):
154 (Optional) Extra configuration options for the query job.
155 """
156 self._query_data = None
157 self._query_job = None
158 client = self.connection._client
159
160 # The DB-API uses the pyformat formatting, since the way BigQuery does
161 # query parameters was not one of the standard options. Convert both
162 # the query and the parameters to the format expected by the client
163 # libraries.
164 formatted_operation = _format_operation(operation, parameters=parameters)
165 query_parameters = _helpers.to_query_parameters(parameters)
166
167 config = job_config or job.QueryJobConfig(use_legacy_sql=False)
168 config.query_parameters = query_parameters
169 self._query_job = client.query(
170 formatted_operation, job_config=config, job_id=job_id
171 )
172
173 # Wait for the query to finish.
174 try:
175 self._query_job.result()
176 except google.cloud.exceptions.GoogleCloudError as exc:
177 raise exceptions.DatabaseError(exc)
178
179 query_results = self._query_job._query_results
180 self._set_rowcount(query_results)
181 self._set_description(query_results.schema)
182
183 def executemany(self, operation, seq_of_parameters):
184 """Prepare and execute a database operation multiple times.
185
186 Args:
187 operation (str): A Google BigQuery query string.
188
189 seq_of_parameters (Union[Sequence[Mapping[str, Any], Sequence[Any]]]):
190 Sequence of many sets of parameter values.
191 """
192 for parameters in seq_of_parameters:
193 self.execute(operation, parameters)
194
195 def _try_fetch(self, size=None):
196 """Try to start fetching data, if not yet started.
197
198 Mutates self to indicate that iteration has started.
199 """
200 if self._query_job is None:
201 raise exceptions.InterfaceError(
202 "No query results: execute() must be called before fetch."
203 )
204
205 is_dml = (
206 self._query_job.statement_type
207 and self._query_job.statement_type.upper() != "SELECT"
208 )
209 if is_dml:
210 self._query_data = iter([])
211 return
212
213 if self._query_data is None:
214 client = self.connection._client
215 rows_iter = client.list_rows(
216 self._query_job.destination,
217 selected_fields=self._query_job._query_results.schema,
218 page_size=self.arraysize,
219 )
220 self._query_data = iter(rows_iter)
221
222 def fetchone(self):
223 """Fetch a single row from the results of the last ``execute*()`` call.
224
225 Returns:
226 Tuple:
227 A tuple representing a row or ``None`` if no more data is
228 available.
229
230 Raises:
231 google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.
232 """
233 self._try_fetch()
234 try:
235 return six.next(self._query_data)
236 except StopIteration:
237 return None
238
239 def fetchmany(self, size=None):
240 """Fetch multiple results from the last ``execute*()`` call.
241
242 .. note::
243 The size parameter is not used for the request/response size.
244 Set the ``arraysize`` attribute before calling ``execute()`` to
245 set the batch size.
246
247 Args:
248 size (int):
249 (Optional) Maximum number of rows to return. Defaults to the
250 ``arraysize`` property value. If ``arraysize`` is not set, it
251 defaults to ``1``.
252
253 Returns:
254 List[Tuple]: A list of rows.
255
256 Raises:
257 google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.
258 """
259 if size is None:
260 # Since self.arraysize can be None (a deviation from PEP 249),
261 # use an actual PEP 249 default of 1 in such case (*some* number
262 # is needed here).
263 size = self.arraysize if self.arraysize else 1
264
265 self._try_fetch(size=size)
266 rows = []
267
268 for row in self._query_data:
269 rows.append(row)
270 if len(rows) >= size:
271 break
272
273 return rows
274
275 def fetchall(self):
276 """Fetch all remaining results from the last ``execute*()`` call.
277
278 Returns:
279 List[Tuple]: A list of all the rows in the results.
280
281 Raises:
282 google.cloud.bigquery.dbapi.InterfaceError: if called before ``execute()``.
283 """
284 self._try_fetch()
285 return list(self._query_data)
286
287 def setinputsizes(self, sizes):
288 """No-op."""
289
290 def setoutputsize(self, size, column=None):
291 """No-op."""
292
293
294 def _format_operation_list(operation, parameters):
295 """Formats parameters in operation in the way BigQuery expects.
296
297 The input operation will be a query like ``SELECT %s`` and the output
298 will be a query like ``SELECT ?``.
299
300 Args:
301 operation (str): A Google BigQuery query string.
302
303 parameters (Sequence[Any]): Sequence of parameter values.
304
305 Returns:
306 str: A formatted query string.
307
308 Raises:
309 google.cloud.bigquery.dbapi.ProgrammingError:
310 if a parameter used in the operation is not found in the
311 ``parameters`` argument.
312 """
313 formatted_params = ["?" for _ in parameters]
314
315 try:
316 return operation % tuple(formatted_params)
317 except TypeError as exc:
318 raise exceptions.ProgrammingError(exc)
319
320
321 def _format_operation_dict(operation, parameters):
322 """Formats parameters in operation in the way BigQuery expects.
323
324 The input operation will be a query like ``SELECT %(namedparam)s`` and
325 the output will be a query like ``SELECT @namedparam``.
326
327 Args:
328 operation (str): A Google BigQuery query string.
329
330 parameters (Mapping[str, Any]): Dictionary of parameter values.
331
332 Returns:
333 str: A formatted query string.
334
335 Raises:
336 google.cloud.bigquery.dbapi.ProgrammingError:
337 if a parameter used in the operation is not found in the
338 ``parameters`` argument.
339 """
340 formatted_params = {}
341 for name in parameters:
342 escaped_name = name.replace("`", r"\`")
343 formatted_params[name] = "@`{}`".format(escaped_name)
344
345 try:
346 return operation % formatted_params
347 except KeyError as exc:
348 raise exceptions.ProgrammingError(exc)
349
350
351 def _format_operation(operation, parameters=None):
352 """Formats parameters in operation in way BigQuery expects.
353
354 Args:
355 operation (str): A Google BigQuery query string.
356
357 parameters (Union[Mapping[str, Any], Sequence[Any]]):
358 Optional parameter values.
359
360 Returns:
361 str: A formatted query string.
362
363 Raises:
364 google.cloud.bigquery.dbapi.ProgrammingError:
365 if a parameter used in the operation is not found in the
366 ``parameters`` argument.
367 """
368 if parameters is None:
369 return operation
370
371 if isinstance(parameters, collections_abc.Mapping):
372 return _format_operation_dict(operation, parameters)
373
374 return _format_operation_list(operation, parameters)
375
[end of bigquery/google/cloud/bigquery/dbapi/cursor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
googleapis/google-cloud-python
|
8bb406849475d753fbcacca13573377ee94ed1ab
|
TablesClient: predict does not work with Python list or dictionary
The problematic line is at: https://github.com/googleapis/google-cloud-python/blob/bd5318aa5340fff19e38156b77d64d7db0e0e1c3/automl/google/cloud/automl_v1beta1/tables/tables_client.py#L402-L405
The expected type of the array/struct is google.protobuf.ListValue/google.protobuf.StructValue, not Python list/dictionary.
The failure message is:
```
*** TypeError: Parameter to MergeFrom() must be instance of same class: expected google.protobuf.ListValue got list.
```
Ideally we would support only Python list/dictionary, but for backwards compatibility we should accept list/dictionary as well as google.protobuf.ListValue/google.protobuf.StructValue.
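A minimal reproduction might look like the following sketch (project, region, model, and column names are hypothetical, assuming the model has one ARRAY and one STRUCT column):
```Python
from google.cloud import automl_v1beta1

# Hypothetical project/region/model names, for illustration only.
client = automl_v1beta1.TablesClient(project="my-project", region="us-central1")

# Passing plain Python containers for the ARRAY/STRUCT columns currently fails with:
#   TypeError: Parameter to MergeFrom() must be instance of same class:
#   expected google.protobuf.ListValue got list.
client.predict(
    {"array_feature": [1, 2, 3], "struct_feature": {"field": 1}},
    model_display_name="my_model",
)
```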
|
Until this bug is resolved, the user has to convert Python lists/structs to google.protobuf.Value manually. E.g.:
```Python
predict({"array_feature": to_list_value([1,2,3]), "struct_feature": to_struct_value({"field": 1})})
```
Here is the helper function:
```Python
from google.protobuf import struct_pb2
import six


def to_list_value(value):
    value, _ = to_proto_value(value)
    return value.list_value


def to_struct_value(value):
    value, _ = to_proto_value(value)
    return value.struct_value


def to_proto_value(value):
    """Translates a Python value to a google.protobuf.Value.

    Args:
        value: The Python value to be translated.

    Returns:
        The translated google.protobuf.Value.
    """
    # possible Python types (this is a Python3 module):
    # https://simplejson.readthedocs.io/en/latest/#encoders-and-decoders
    # JSON            Python 2     Python 3
    # object          dict         dict
    # array           list         list
    # string          unicode      str
    # number (int)    int, long    int
    # number (real)   float        float
    # true            True         True
    # false           False        False
    # null            None         None
    if value is None:
        # translate null to an empty value.
        return struct_pb2.Value(), None
    elif isinstance(value, bool):
        # This check needs to happen before isinstance(value, int),
        # isinstance(value, int) returns True when value is bool.
        return struct_pb2.Value(bool_value=value), None
    elif isinstance(value, six.integer_types) or isinstance(value, float):
        return struct_pb2.Value(number_value=value), None
    elif isinstance(value, six.string_types) or isinstance(value, six.text_type):
        return struct_pb2.Value(string_value=value), None
    elif isinstance(value, dict):
        struct_value = struct_pb2.Struct()
        for key, v in value.items():
            field_value, err = to_proto_value(v)
            if err is not None:
                return None, err
            struct_value.fields[key].CopyFrom(field_value)
        return struct_pb2.Value(struct_value=struct_value), None
    elif isinstance(value, list):
        list_value = []
        for v in value:
            proto_value, err = to_proto_value(v)
            if err is not None:
                return None, err
            list_value.append(proto_value)
        return struct_pb2.Value(
            list_value=struct_pb2.ListValue(values=list_value)), None
    else:
        return None, "unsupported data type: {}".format(type(value))
```
|
2019-12-17T23:36:59Z
|
<patch>
diff --git a/automl/google/cloud/automl_v1beta1/tables/tables_client.py b/automl/google/cloud/automl_v1beta1/tables/tables_client.py
--- a/automl/google/cloud/automl_v1beta1/tables/tables_client.py
+++ b/automl/google/cloud/automl_v1beta1/tables/tables_client.py
@@ -22,8 +22,10 @@
from google.api_core.gapic_v1 import client_info
from google.api_core import exceptions
from google.cloud.automl_v1beta1 import gapic
-from google.cloud.automl_v1beta1.proto import data_types_pb2
+from google.cloud.automl_v1beta1.proto import data_types_pb2, data_items_pb2
from google.cloud.automl_v1beta1.tables import gcs_client
+from google.protobuf import struct_pb2
+
_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution("google-cloud-automl").version
_LOGGER = logging.getLogger(__name__)
@@ -390,21 +392,39 @@ def __column_spec_name_from_args(
return column_spec_name
- def __type_code_to_value_type(self, type_code, value):
+ def __data_type_to_proto_value(self, data_type, value):
+ type_code = data_type.type_code
if value is None:
- return {"null_value": 0}
+ return struct_pb2.Value(null_value=struct_pb2.NullValue.NULL_VALUE)
elif type_code == data_types_pb2.FLOAT64:
- return {"number_value": value}
- elif type_code == data_types_pb2.TIMESTAMP:
- return {"string_value": value}
- elif type_code == data_types_pb2.STRING:
- return {"string_value": value}
+ return struct_pb2.Value(number_value=value)
+ elif (
+ type_code == data_types_pb2.TIMESTAMP
+ or type_code == data_types_pb2.STRING
+ or type_code == data_types_pb2.CATEGORY
+ ):
+ return struct_pb2.Value(string_value=value)
elif type_code == data_types_pb2.ARRAY:
- return {"list_value": value}
+ if isinstance(value, struct_pb2.ListValue):
+ # in case the user passed in a ListValue.
+ return struct_pb2.Value(list_value=value)
+ array = []
+ for item in value:
+ array.append(
+ self.__data_type_to_proto_value(data_type.list_element_type, item)
+ )
+ return struct_pb2.Value(list_value=struct_pb2.ListValue(values=array))
elif type_code == data_types_pb2.STRUCT:
- return {"struct_value": value}
- elif type_code == data_types_pb2.CATEGORY:
- return {"string_value": value}
+ if isinstance(value, struct_pb2.Struct):
+ # in case the user passed in a Struct.
+ return struct_pb2.Value(struct_value=value)
+ struct_value = struct_pb2.Struct()
+ for k, v in value.items():
+ field_value = self.__data_type_to_proto_value(
+ data_type.struct_type.fields[k], v
+ )
+ struct_value.fields[k].CopyFrom(field_value)
+ return struct_pb2.Value(struct_value=struct_value)
else:
raise ValueError("Unknown type_code: {}".format(type_code))
@@ -2682,16 +2702,17 @@ def predict(
values = []
for i, c in zip(inputs, column_specs):
- value_type = self.__type_code_to_value_type(c.data_type.type_code, i)
+ value_type = self.__data_type_to_proto_value(c.data_type, i)
values.append(value_type)
- request = {"row": {"values": values}}
+ row = data_items_pb2.Row(values=values)
+ payload = data_items_pb2.ExamplePayload(row=row)
params = None
if feature_importance:
params = {"feature_importance": "true"}
- return self.prediction_client.predict(model.name, request, params, **kwargs)
+ return self.prediction_client.predict(model.name, payload, params, **kwargs)
def batch_predict(
self,
</patch>
|
[]
|
[]
| |||
google__jax-818
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Easy api for custom primitives and vjps
JAX supports custom primitives and vjps, just like Autograd did. Improvements:
1) add this to documentation
2) add a minimal example of this in the examples section
3) add a wrapper function if appropriate?
</issue>
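One possible shape for the requested minimal example, sketched here with the `jax.custom_vjp` decorator from later JAX releases (the code base snapshot that follows likely predates it; `jax.custom_transforms`/`jax.defvjp` played a similar role at the time):
```python
import jax
import jax.numpy as np

# Sketch only: jax.custom_vjp is the newer public API for attaching a
# hand-written VJP rule to a function.
@jax.custom_vjp
def log1pexp(x):
    return np.log(1. + np.exp(x))

def log1pexp_fwd(x):
    # Forward pass: return the primal output plus residuals for the backward pass.
    return log1pexp(x), x

def log1pexp_bwd(x, g):
    # Custom VJP rule: a numerically stable gradient of log(1 + exp(x)).
    return (g * (1. - 1. / (1. + np.exp(x))),)

log1pexp.defvjp(log1pexp_fwd, log1pexp_bwd)

print(jax.grad(log1pexp)(100.))  # 1.0, where the naive autodiff rule gives nan
```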
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://raw.githubusercontent.com/google/jax/master/images/jax_logo_250px.png" alt="logo"></img>
3 </div>
4
5 # JAX: Autograd and XLA [](https://travis-ci.org/google/jax)
6
7 [**Reference docs**](https://jax.readthedocs.io/en/latest/)
8 | [**Install guide**](#installation)
9 | [**Quickstart**](#quickstart-colab-in-the-cloud)
10
11 JAX is [Autograd](https://github.com/hips/autograd) and
12 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md),
13 brought together for high-performance machine learning research.
14
15 With its updated version of [Autograd](https://github.com/hips/autograd),
16 JAX can automatically differentiate native
17 Python and NumPy functions. It can differentiate through loops, branches,
18 recursion, and closures, and it can take derivatives of derivatives of
19 derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)
20 via [`grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation,
21 and the two can be composed arbitrarily to any order.
22
23 What’s new is that JAX uses
24 [XLA](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md)
25 to compile and run your NumPy programs on GPUs and TPUs. Compilation happens
26 under the hood by default, with library calls getting just-in-time compiled and
27 executed. But JAX also lets you just-in-time compile your own Python functions
28 into XLA-optimized kernels using a one-function API,
29 [`jit`](#compilation-with-jit). Compilation and automatic differentiation can be
30 composed arbitrarily, so you can express sophisticated algorithms and get
31 maximal performance without leaving Python.
32
33 Dig a little deeper, and you'll see that JAX is really an extensible system for
34 [composable function transformations](#transformations). Both
35 [`grad`](#automatic-differentiation-with-grad) and [`jit`](#compilation-with-jit)
36 are instances of such transformations. Another is [`vmap`](#auto-vectorization-with-vmap)
37 for automatic vectorization, with more to come.
38
39 This is a research project, not an official Google product. Expect bugs and
40 [sharp edges](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb).
41 Please help by trying it out, [reporting
42 bugs](https://github.com/google/jax/issues), and letting us know what you
43 think!
44
45 ```python
46 import jax.numpy as np
47 from jax import grad, jit, vmap
48 from functools import partial
49
50 def predict(params, inputs):
51 for W, b in params:
52 outputs = np.dot(inputs, W) + b
53 inputs = np.tanh(outputs)
54 return outputs
55
56 def logprob_fun(params, inputs, targets):
57 preds = predict(params, inputs)
58 return np.sum((preds - targets)**2)
59
60 grad_fun = jit(grad(logprob_fun)) # compiled gradient evaluation function
61 perex_grads = jit(vmap(grad_fun, in_axes=(None, 0, 0))) # fast per-example grads
62 ```
63
64 JAX started as a research project by [Matt Johnson](https://github.com/mattjj),
65 [Roy Frostig](https://github.com/froystig), [Dougal
66 Maclaurin](https://github.com/dougalm), and [Chris
67 Leary](https://github.com/learyg), and is now developed [in the
68 open](https://github.com/google/jax) by a growing number of
69 [contributors](#contributors).
70
71 ### Contents
72 * [Quickstart: Colab in the Cloud](#quickstart-colab-in-the-cloud)
73 * [Installation](#installation)
74 * [Running the tests](#running-the-tests)
75 * [Reference documentation](#reference-documentation)
76 * [A brief tour](#a-brief-tour)
77 * [What's supported](#whats-supported)
78 * [Transformations](#transformations)
79 * [Random numbers are different](#random-numbers-are-different)
80 * [Mini-libraries](#mini-libraries)
81 * [How it works](#how-it-works)
82 * [What we're working on](#what-were-working-on)
83 * [Current gotchas](#current-gotchas)
84
85 ## Quickstart: Colab in the Cloud
86 Jump right in using a notebook in your browser, connected to a Google Cloud GPU. Here are some starter notebooks:
87 - [The basics: NumPy on accelerators, `grad` for differentiation, `jit` for compilation, and `vmap` for vectorization](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb)
88 - [Training a Simple Neural Network, with PyTorch Data Loading](https://colab.research.google.com/github/google/jax/blob/master/notebooks/neural_network_and_data_loading.ipynb)
89 - [Training a Simple Neural Network, with TensorFlow Dataset Data Loading](https://colab.research.google.com/github/google/jax/blob/master/notebooks/neural_network_with_tfds_data.ipynb)
90
91 And for a deeper dive into JAX:
92 - [Common gotchas and sharp edges](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb)
93 - [The Autodiff Cookbook, Part 1: easy and powerful automatic differentiation in JAX](https://colab.research.google.com/github/google/jax/blob/master/notebooks/autodiff_cookbook.ipynb)
94 - [Directly using XLA in Python](https://colab.research.google.com/github/google/jax/blob/master/notebooks/XLA_in_Python.ipynb)
95 - [MAML Tutorial with JAX](https://colab.research.google.com/github/google/jax/blob/master/notebooks/maml.ipynb).
96
97 ## Installation
98 JAX is written in pure Python, but it depends on XLA, which needs to be compiled
99 and installed as the `jaxlib` package. Use the following instructions to build
100 JAX from source or install a binary package with pip.
101
102 We support installing or building `jaxlib` on Linux and macOS platforms, but not
103 Windows. We're not currently working on Windows support, but contributions are
104 welcome (see [#438](https://github.com/google/jax/issues/438)).
105
106 ### Building JAX from source
107 First, obtain the JAX source code, and make sure `scipy` is installed.
108
109 ```bash
110 git clone https://github.com/google/jax
111 cd jax
112 pip install scipy
113 ```
114
115 If you are building on a Mac, make sure XCode and the XCode command line tools
116 are installed.
117
118 To build XLA with CUDA support, you can run
119
120 ```bash
121 python build/build.py --enable_cuda
122 pip install -e build # install jaxlib (includes XLA)
123 pip install -e . # install jax (pure Python)
124 ```
125
126 See `python build/build.py --help` for configuration options, including ways to
127 specify the paths to CUDA and CUDNN, which you must have installed. The build
128 also depends on NumPy, and a compiler toolchain corresponding to that of
129 Ubuntu 16.04 or newer.
130
131 To build XLA without CUDA GPU support (CPU only), drop the `--enable_cuda`:
132
133 ```bash
134 python build/build.py
135 pip install -e build # install jaxlib (includes XLA)
136 pip install -e . # install jax
137 ```
138
139 To upgrade to the latest version from GitHub, just run `git pull` from the JAX
140 repository root, and rebuild by running `build.py` if necessary. You shouldn't have
141 to reinstall because `pip install -e` sets up symbolic links from site-packages
142 into the repository.
143
144 ### pip installation
145
146 Installing XLA with prebuilt binaries via `pip` is still experimental,
147 especially with GPU support. Let us know on [the issue
148 tracker](https://github.com/google/jax/issues) if you run into any errors.
149
150 To install a CPU-only version, which might be useful for doing local
151 development on a laptop, you can run
152
153 ```bash
154 pip install --upgrade jax jaxlib # CPU-only version
155 ```
156
157 If you want to install JAX with both CPU and GPU support, using existing CUDA
158 and CUDNN7 installations on your machine (for example, preinstalled on your
159 cloud VM), you can run
160
161 ```bash
162 # install jaxlib
163 PYTHON_VERSION=cp27 # alternatives: cp27, cp35, cp36, cp37
164 CUDA_VERSION=cuda92 # alternatives: cuda90, cuda92, cuda100
165 PLATFORM=linux_x86_64 # alternatives: linux_x86_64
166 BASE_URL='https://storage.googleapis.com/jax-wheels'
167 pip install --upgrade $BASE_URL/$CUDA_VERSION/jaxlib-latest-$PYTHON_VERSION-none-$PLATFORM.whl
168
169 pip install --upgrade jax # install jax
170 ```
171
172 The library package name must correspond to the version of the existing CUDA
173 installation you want to use, with `cuda100` for CUDA 10.0, `cuda92` for CUDA
174 9.2, and `cuda90` for CUDA 9.0. To find your CUDA and CUDNN versions, you can
175 run commands like these, depending on your CUDNN install path:
176
177 ```bash
178 nvcc --version
179 grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h # might need different path
180 ```
181
182 The Python version must match your Python interpreter. There are prebuilt wheels
183 for Python 2.7, 3.6, and 3.7; for anything else, you must build from source.
184
185
186 ## Running the tests
187
188 To run all the JAX tests, we recommend using `pytest-xdist`, which can run tests in
189 parallel. First, install `pytest-xdist` by running `pip install pytest-xdist`.
190 Then, from the repository root directory run
191
192 ```bash
193 pytest -n auto tests
194 ```
195
196 JAX generates test cases combinatorially, and you can control the number of
197 cases that are generated and checked for each test (default 10):
198
199 ```bash
200 JAX_NUM_GENERATED_CASES=100 pytest -n auto tests
201 ```
202
203 You can run a more specific set of tests using
204 [`pytest`](https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests)'s
205 built-in selection mechanisms, or alternatively you can run a specific test
206 file directly to see more detailed information about the cases being run:
207
208 ```bash
209 python tests/lax_numpy_test.py --num_generated_cases=5
210 ```
211
212 ## Reference documentation
213
214 For details about the JAX API, see the
215 [reference documentation](https://jax.readthedocs.io/).
216
217 ## A brief tour
218
219 ```python
220 In [1]: import jax.numpy as np
221
222 In [2]: from jax import random
223
224 In [3]: key = random.PRNGKey(0)
225
226 In [4]: x = random.normal(key, (5000, 5000))
227
228 In [5]: print(np.dot(x, x.T) / 2) # fast!
229 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
230
231 In [6]: print(np.dot(x, x.T) / 2) # even faster!
232 # JIT-compiled code is cached and reused in the 2nd call
233 [[ 2.52727051e+03 8.15895557e+00 -8.53276134e-01 ..., # ...
234 ```
235
236 What’s happening behind-the-scenes is that JAX is using XLA to just-in-time
237 (JIT) compile and execute these individual operations on the GPU. First the
238 `random.normal` call is compiled and the array referred to by `x` is generated
239 on the GPU. Next, each function called on `x` (namely `transpose`, `dot`, and
240 `divide`) is individually JIT-compiled and executed, each keeping its results on
241 the device.
242 It’s only when a value needs to be printed, plotted, saved, or passed into a raw
243 NumPy function that a read-only copy of the value is brought back to the host as
244 an ndarray and cached. The second call to `dot` is faster because the
245 JIT-compiled code is cached and reused, saving the compilation time.
246
247 The fun really starts when you use `grad` for automatic differentiation and
248 `jit` to compile your own functions end-to-end. Here’s a more complete toy
249 example:
250
251 ```python
252 from jax import grad, jit
253 import jax.numpy as np
254
255 def sigmoid(x):
256 return 0.5 * (np.tanh(x / 2.) + 1)
257
258 # Outputs probability of a label being true according to logistic model.
259 def logistic_predictions(weights, inputs):
260 return sigmoid(np.dot(inputs, weights))
261
262 # Training loss is the negative log-likelihood of the training labels.
263 def loss(weights, inputs, targets):
264 preds = logistic_predictions(weights, inputs)
265 label_logprobs = np.log(preds) * targets + np.log(1 - preds) * (1 - targets)
266 return -np.sum(label_logprobs)
267
268 # Build a toy dataset.
269 inputs = np.array([[0.52, 1.12, 0.77],
270 [0.88, -1.08, 0.15],
271 [0.52, 0.06, -1.30],
272 [0.74, -2.49, 1.39]])
273 targets = np.array([True, True, False, True])
274
275 # Define a compiled function that returns gradients of the training loss
276 training_gradient_fun = jit(grad(loss))
277
278 # Optimize weights using gradient descent.
279 weights = np.array([0.0, 0.0, 0.0])
280 print("Initial loss: {:0.2f}".format(loss(weights, inputs, targets)))
281 for i in range(100):
282 weights -= 0.1 * training_gradient_fun(weights, inputs, targets)
283
284 print("Trained loss: {:0.2f}".format(loss(weights, inputs, targets)))
285 ```
286
287 To see more, check out the [quickstart
288 notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/quickstart.ipynb),
289 a [simple MNIST classifier
290 example](https://github.com/google/jax/blob/master/examples/mnist_classifier.py)
291 and the rest of the [JAX
292 examples](https://github.com/google/jax/blob/master/examples/).
293
294 ## What's supported
295
296 If you’re using JAX just as an accelerator-backed NumPy, without using `grad` or
297 `jit` in your code, then in principle there are no constraints, though some
298 NumPy functions haven’t been implemented yet. A list of supported functions can
299 be found in the [reference documentation](https://jax.readthedocs.io/).
300
301 Generally using `np.dot(A, B)` is
302 better than `A.dot(B)` because the former gives us more opportunities to run the
303 computation on the device. NumPy also does a lot of work to cast any array-like
304 function arguments to arrays, as in `np.sum([x, y])`, while `jax.numpy`
305 typically requires explicit casting of array arguments, like
306 `np.sum(np.array([x, y]))`.
307
308 For automatic differentiation with `grad`, JAX has the same restrictions
309 as [Autograd](https://github.com/hips/autograd). Specifically, differentiation
310 works with indexing (`x = A[i, j, :]`) but not indexed assignment (`A[i, j] =
311 x`) or indexed in-place updating (`A[i] += b`). You can use lists, tuples, and
312 dicts freely: JAX doesn't even see them. Using `np.dot(A, B)` rather than
313 `A.dot(B)` is required for automatic differentiation when `A` is a raw ndarray.
314
315 For compiling your own functions with `jit` there are a few more requirements.
316 Because `jit` aims to specialize Python functions only on shapes and dtypes
317 during tracing, rather than on concrete values, Python control flow that depends
318 on concrete values won’t be able to execute and will instead raise an error. If
319 you want compiled control flow, use structured control flow primitives like
320 lax.cond and lax.while_loop. Some indexing features, like slice-based indexing
321 `A[i:i+5]` for argument-dependent `i`, or boolean-based indexing `A[bool_ind]`
322 for argument-dependent `bool_ind`, produce abstract values of unknown shape and
323 are thus unsupported in `jit` functions.
324
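As a small sketch (assuming only the `lax.while_loop(cond_fun, body_fun, init_val)` signature), a value-dependent loop can be written so it stays `jit`-compatible:

```python
from jax import jit, lax

@jit
def count_down(x):
    # A Python `while x > 0: x = x - 1` would fail under jit because the
    # condition depends on a traced value; lax.while_loop keeps it compiled.
    return lax.while_loop(lambda v: v > 0, lambda v: v - 1, x)

print(count_down(5))  # 0
```
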
325 In general, JAX is intended to be used with a functional style of Python
326 programming. Functions passed to transformations like `grad` and `jit` are
327 expected to be free of side-effects. You can write print statements for
328 debugging but they may only be executed once if they're under a `jit` decorator.
329
330 > TLDR **Do use**
331 >
332 > * Functional programming
333 > * [Many](https://jax.readthedocs.io/en/latest/jax.numpy.html) of NumPy’s
334 > functions (help us add more!)
335 > * [Some](https://jax.readthedocs.io/en/latest/jax.scipy.html) SciPy functions
336 > * Indexing and slicing of arrays like `x = A[[5, 1, 7], :, 2:4]`
337 > * Explicit array creation from lists like `A = np.array([x, y])`
338 >
339 > **Don’t use**
340 >
341 > * Assignment into arrays like `A[0, 0] = x`
342 > * Implicit casting to arrays like `np.sum([x, y])` (use `np.sum(np.array([x,
343 > y]))` instead)
344 > * `A.dot(B)` method syntax for functions of more than one argument (use
345 > `np.dot(A, B)` instead)
346 > * Side-effects like mutation of arguments or mutation of global variables
347 > * The `out` argument of NumPy functions
348 > * Dtype casting like `np.float64(x)` (use `x.astype('float64')` or
349 > `x.astype(np.float64)` instead).
350 >
351 > **For jit functions, also don’t use**
352 >
353 > * Control flow based on dynamic values `if x > 0: ...`. Control flow based
354 > on shapes is fine: `if x.shape[0] > 2: ...` and `for subarr in array`.
355 > * Slicing `A[i:i+5]` for dynamic index `i` (use `lax.dynamic_slice` instead)
356 > or boolean indexing `A[bool_ind]` for traced values `bool_ind`.
357
358 You should get loud errors if your code violates any of these.
359
360 ## Transformations
361
362 At its core, JAX is an extensible system for transforming numerical functions.
363 We currently expose three important transformations: `grad`, `jit`, and `vmap`.
364
365 ### Automatic differentiation with grad
366
367 JAX has roughly the same API as [Autograd](https://github.com/hips/autograd).
368 The most popular function is `grad` for reverse-mode gradients:
369
370 ```python
371 from jax import grad
372 import jax.numpy as np
373
374 def tanh(x): # Define a function
375 y = np.exp(-2.0 * x)
376 return (1.0 - y) / (1.0 + y)
377
378 grad_tanh = grad(tanh) # Obtain its gradient function
379 print(grad_tanh(1.0)) # Evaluate it at x = 1.0
380 # prints 0.41997434161402603
381 ```
382
383 You can differentiate to any order with `grad`.
384
385 For more advanced autodiff, you can use `jax.vjp` for reverse-mode
386 vector-Jacobian products and `jax.jvp` for forward-mode Jacobian-vector
387 products. The two can be composed arbitrarily with one another, and with other
388 JAX transformations. Here's one way to compose
389 those to make a function that efficiently computes full Hessian matrices:
390
391 ```python
392 from jax import jit, jacfwd, jacrev
393 def hessian(fun):
394 return jit(jacfwd(jacrev(fun)))
395 ```
396
397 As with Autograd, you're free to use differentiation with Python control
398 structures:
399
400 ```python
401 def abs_val(x):
402 if x > 0:
403 return x
404 else:
405 return -x
406
407 abs_val_grad = grad(abs_val)
408 print(abs_val_grad(1.0)) # prints 1.0
409 print(abs_val_grad(-1.0)) # prints -1.0 (abs_val is re-evaluated)
410 ```
411
412 ### Compilation with jit
413
414 You can use XLA to compile your functions end-to-end with `jit`, used either as
415 an `@jit` decorator or as a higher-order function.
416
417 ```python
418 import jax.numpy as np
419 from jax import jit
420
421 def slow_f(x):
422 # Element-wise ops see a large benefit from fusion
423 return x * x + x * 2.0
424
425 x = np.ones((5000, 5000))
426 fast_f = jit(slow_f)
427 %timeit -n10 -r3 fast_f(x) # ~ 4.5 ms / loop on Titan X
428 %timeit -n10 -r3 slow_f(x) # ~ 14.5 ms / loop (also on GPU via JAX)
429 ```
430
431 You can mix `jit` and `grad` and any other JAX transformation however you like.
432
433 ### Auto-vectorization with vmap
434
435 `vmap` is the vectorizing map.
436 It has the familiar semantics of mapping a function along array axes, but
437 instead of keeping the loop on the outside, it pushes the loop down into a
438 function’s primitive operations for better performance.
439
440 Using `vmap` can save you from having to carry around batch dimensions in your
441 code. For example, consider this simple *unbatched* neural network prediction
442 function:
443
444 ```python
445 def predict(params, input_vec):
446 assert input_vec.ndim == 1
447 for W, b in params:
448 output_vec = np.dot(W, input_vec) + b # `input_vec` on the right-hand side!
449 input_vec = np.tanh(output_vec)
450 return output_vec
451 ```
452
453 We often instead write `np.dot(inputs, W)` to allow for a batch dimension on the
454 left side of `inputs`, but we’ve written this particular prediction function to
455 apply only to single input vectors. If we wanted to apply this function to a
456 batch of inputs at once, semantically we could just write
457
458 ```python
459 from functools import partial
460 predictions = np.stack(list(map(partial(predict, params), input_batch)))
461 ```
462
463 But pushing one example through the network at a time would be slow! It’s better
464 to vectorize the computation, so that at every layer we’re doing matrix-matrix
465 multiplies rather than matrix-vector multiplies.
466
467 The `vmap` function does that transformation for us. That is, if we write
468
469 ```python
470 from jax import vmap
471 predictions = vmap(partial(predict, params))(input_batch)
472 # or, alternatively
473 predictions = vmap(predict, in_axes=(None, 0))(params, input_batch)
474 ```
475
476 then the `vmap` function will push the outer loop inside the function, and our
477 machine will end up executing matrix-matrix multiplications exactly as if we’d
478 done the batching by hand.
479
480 It’s easy enough to manually batch a simple neural network without `vmap`, but
481 in other cases manual vectorization can be impractical or impossible. Take the
482 problem of efficiently computing per-example gradients: that is, for a fixed set
483 of parameters, we want to compute the gradient of our loss function evaluated
484 separately at each example in a batch. With `vmap`, it’s easy:
485
486 ```python
487 per_example_gradients = vmap(partial(grad(loss), params))(inputs, targets)
488 ```
489
490 Of course, `vmap` can be arbitrarily composed with `jit`, `grad`, and any other
491 JAX transformation! We use `vmap` with both forward- and reverse-mode automatic
492 differentiation for fast Jacobian and Hessian matrix calculations in
493 `jax.jacfwd`, `jax.jacrev`, and `jax.hessian`.
494
495
496 ## Random numbers are different
497
498 JAX needs a [functional pseudo-random number generator (PRNG) system](design_notes/prng.md) to provide
499 reproducible results invariant to compilation boundaries and backends, while
500 also maximizing performance by enabling vectorized generation and
501 parallelization across random calls. The `numpy.random` library doesn’t have
502 those properties. The `jax.random` library meets those needs: it’s functionally
503 pure, but it doesn’t require you to pass stateful random objects back out of
504 every function.
505
506 The `jax.random` library uses
507 [count-based PRNGs](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf)
508 and a functional array-oriented
509 [splitting model](http://publications.lib.chalmers.se/records/fulltext/183348/local_183348.pdf).
510 To generate random values, you call a function like `jax.random.normal` and give
511 it a PRNG key:
512
513 ```python
514 import jax.random as random
515
516 key = random.PRNGKey(0)
517 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
518 ```
519
520 If we make the same call again with the same key, we get the same values:
521
522 ```python
523 print(random.normal(key, shape=(3,))) # [ 1.81608593 -0.48262325 0.33988902]
524 ```
525
526 The key never gets updated. So how do we get fresh random values? We use
527 `jax.random.split` to create new keys from existing ones. A common pattern is to
528 split off a new key for every function call that needs random values:
529
530 ```python
531 key = random.PRNGKey(0)
532
533 key, subkey = random.split(key)
534 print(random.normal(subkey, shape=(3,))) # [ 1.1378783 -1.22095478 -0.59153646]
535
536 key, subkey = random.split(key)
537 print(random.normal(subkey, shape=(3,))) # [-0.06607265 0.16676566 1.17800343]
538 ```
539
540 By splitting the PRNG key, not only do we avoid having to thread random states
541 back out of every function call, but also we can generate multiple random arrays
542 in parallel because we can avoid unnecessary sequential dependencies.
543
544 There's a gotcha here, which is that it's easy to unintentionally reuse a key
545 without splitting. We intend to add a check for this (a sort of dynamic linear
546 typing) but for now it's something to be careful about.
547
548 For more detailed information on the design and the reasoning behind it, see the
549 [PRNG design doc](design_notes/prng.md).
550
551
552 ## Mini-libraries
553
554 JAX provides some small, experimental libraries for machine learning. These
555 libraries are in part about providing tools and in part about serving as
556 examples for how to build such libraries using JAX. Each one is only a few
557 hundred lines of code, so take a look inside and adapt them as you need!
558
559 ### Neural-net building with Stax
560
561 **Stax** is a functional neural network building library. The basic idea is that
562 a single layer or an entire network can be modeled as an `(init_fun, apply_fun)`
563 pair. The `init_fun` is used to initialize network parameters and the
564 `apply_fun` takes parameters and inputs to produce outputs. There are
565 constructor functions for common basic pairs, like `Conv` and `Relu`, and these
566 pairs can be composed in series using `stax.serial` or in parallel using
567 `stax.parallel`.
568
569 Here’s an example:
570
571 ```python
572 import jax.numpy as np
573 from jax import random
574 from jax.experimental import stax
575 from jax.experimental.stax import Conv, Dense, MaxPool, Relu, Flatten, LogSoftmax
576
577 # Use stax to set up network initialization and evaluation functions
578 net_init, net_apply = stax.serial(
579 Conv(32, (3, 3), padding='SAME'), Relu,
580 Conv(64, (3, 3), padding='SAME'), Relu,
581 MaxPool((2, 2)), Flatten,
582 Dense(128), Relu,
583 Dense(10), LogSoftmax,
584 )
585
586 # Initialize parameters, not committing to a batch shape
587 rng = random.PRNGKey(0)
588 in_shape = (-1, 28, 28, 1)
589 out_shape, net_params = net_init(rng, in_shape)
590
591 # Apply network to dummy inputs
592 inputs = np.zeros((128, 28, 28, 1))
593 predictions = net_apply(net_params, inputs)
594 ```
595
596 ### First-order optimization
597
598 JAX has a minimal optimization library focused on stochastic first-order
599 optimizers. Every optimizer is modeled as an `(init_fun, update_fun,
600 get_params)` triple of functions. The `init_fun` is used to initialize the
601 optimizer state, which could include things like momentum variables, and the
602 `update_fun` accepts a gradient and an optimizer state to produce a new
603 optimizer state. The `get_params` function extracts the current iterate (i.e.
604 the current parameters) from the optimizer state. The parameters being optimized
605 can be ndarrays or arbitrarily-nested list/tuple/dict structures, so you can
606 store your parameters however you’d like.
607
608 Here’s an example, using `jit` to compile the whole update end-to-end:
609
610 ```python
611 from jax.experimental import optimizers
612 from jax import jit, grad
613
614 # Define a simple squared-error loss
615 def loss(params, batch):
616 inputs, targets = batch
617 predictions = net_apply(params, inputs)
618 return np.sum((predictions - targets)**2)
619
620 # Use optimizers to set optimizer initialization and update functions
621 opt_init, opt_update, get_params = optimizers.momentum(step_size=1e-3, mass=0.9)
622
623 # Define a compiled update step
624 @jit
625 def step(i, opt_state, batch):
626 params = get_params(opt_state)
627 g = grad(loss)(params, batch)
628 return opt_update(i, g, opt_state)
629
630 # Dummy input data stream
631 data_generator = ((np.zeros((128, 28, 28, 1)), np.zeros((128, 10)))
632 for _ in range(10))
633
634 # Optimize parameters in a loop
635 opt_state = opt_init(net_params)
636 for i in range(10):
637 opt_state = step(i, opt_state, next(data_generator))
638 net_params = get_params(opt_state)
639 ```
640
641 ## How it works
642
643 Programming in machine learning is about expressing and transforming functions.
644 Transformations include automatic differentiation, compilation for accelerators,
645 and automatic batching. High-level languages like Python are great for
646 expressing functions, but usually all we can do with them is apply them. We lose
647 access to their internal structure which would let us perform transformations.
648
649 JAX is a tool for specializing and translating high-level Python+NumPy functions
650 into a representation that can be transformed and then lifted back into a Python
651 function.
652
653 
654
655 JAX specializes Python functions by tracing. Tracing a function means monitoring
656 all the basic operations that are applied to its input to produce its output,
657 and recording these operations and the data-flow between them in a directed
658 acyclic graph (DAG). To perform tracing, JAX wraps primitive operations, like
659 basic numerical kernels, so that when they’re called they add themselves to a
660 list of operations performed along with their inputs and outputs. To keep track
661 of how data flows between these primitives, values being tracked are wrapped in
662 instances of the `Tracer` class.
663
664 When a Python function is provided to `grad` or `jit`, it’s wrapped for tracing
665 and returned. When the wrapped function is called, we abstract the concrete
666 arguments provided into instances of the `AbstractValue` class, box them for
667 tracing in instances of the `Tracer` class, and call the function on them.
668 Abstract arguments represent sets of possible values rather than specific
669 values: for example, `jit` abstracts ndarray arguments to abstract values that
670 represent all ndarrays with the same shape and dtype. In contrast, `grad`
671 abstracts ndarray arguments to represent an infinitesimal neighborhood of the
672 underlying
673 value. By tracing the Python function on these abstract values, we ensure that
674 it’s specialized enough so that it’s tractable to transform, and that it’s still
675 general enough so that the transformed result is useful, and possibly reusable.
676 These transformed functions are then lifted back into Python callables in a way
677 that allows them to be traced and transformed again as needed.
678
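To make the tracing story concrete, here is a small sketch (assuming `jax.make_jaxpr` is available in this version) that prints the primitives recorded for a tiny function:

```python
import jax.numpy as np
from jax import make_jaxpr

def f(x):
    return np.sin(x) * 2.0

# Prints the jaxpr traced from f: the sin and mul primitives, specialized to
# the shape and dtype of the example argument.
print(make_jaxpr(f)(3.0))
```
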
679 The primitive functions that JAX traces are mostly in 1:1 correspondence with
680 [XLA HLO](https://www.tensorflow.org/xla/operation_semantics) and are defined
681 in [lax.py](https://github.com/google/jax/blob/master/jax/lax.py). This 1:1
682 correspondence makes most of the translations to XLA essentially trivial, and
683 ensures we only have a small set of primitives to cover for other
684 transformations like automatic differentiation. The [`jax.numpy`
685 layer](https://github.com/google/jax/blob/master/jax/numpy/) is written in pure
686 Python simply by expressing NumPy functions in terms of the LAX functions (and
687 other NumPy functions we’ve already written). That makes `jax.numpy` easy to
688 extend.
689
690 When you use `jax.numpy`, the underlying LAX primitives are `jit`-compiled
691 behind the scenes, allowing you to write unrestricted Python+Numpy code while
692 still executing each primitive operation on an accelerator.
693
694 But JAX can do more: instead of just compiling and dispatching to a fixed set of
695 individual primitives, you can use `jit` on larger and larger functions to be
696 end-to-end compiled and optimized. For example, instead of just compiling and
697 dispatching a convolution op, you can compile a whole network, or a whole
698 gradient evaluation and optimizer update step.
699
700 The tradeoff is that `jit` functions have to satisfy some additional
701 specialization requirements: since we want to compile traces that are
702 specialized on shapes and dtypes, but not specialized all the way to concrete
703 values, the Python code under a `jit` decorator must be applicable to abstract
704 values. If we try to evaluate `x > 0` on an abstract `x`, the result is an
705 abstract value representing the set `{True, False}`, and so a Python branch like
706 `if x > 0` will raise an error: it doesn’t know which way to go!
707 See [What’s supported](#whats-supported) for more
708 information about `jit` requirements.
709
710 The good news about this tradeoff is that `jit` is opt-in: JAX libraries use
711 `jit` on individual operations and functions behind the scenes, allowing you to
712 write unrestricted Python+Numpy and still make use of a hardware accelerator.
713 But when you want to maximize performance, you can often use `jit` in your own
714 code to compile and end-to-end optimize much bigger functions.
715
716 ## What we're working on
717 1. Documentation!
718 2. Cloud TPU support
719 3. Multi-GPU and multi-TPU support
720 4. Full NumPy coverage and some SciPy coverage
721 5. Full coverage for vmap
722 6. Make everything faster
723 * Lowering the XLA function dispatch overhead
724 * Linear algebra routines (MKL on CPU, MAGMA on GPU)
725 7. `cond` and `while` primitives with efficient automatic differentiation
726
727 ## Current gotchas
728
729 For a survey of current gotchas, with examples and explanations, we highly
730 recommend reading the [Gotchas Notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb).
731
732 Some stand-out gotchas that might surprise NumPy users:
733 1. JAX enforces single-precision (32-bit, e.g. `float32`) values by default, and
734 to enable double-precision (64-bit, e.g. `float64`) one needs to set the
735 `jax_enable_x64` variable **at startup** (or set the environment variable
736 `JAX_ENABLE_x64=True`, see [the Gotchas Notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb#scrollTo=YTktlwTTMgFl))
737 2. Some of NumPy's dtype promotion semantics involving a mix of Python scalars
738 and NumPy types aren't preserved, namely `np.add(1, np.array([2],
739 np.float32)).dtype` is `float64` rather than `float32`.
740 3. In-place mutation of arrays isn't supported, though [there is an
741 alternative](https://jax.readthedocs.io/en/latest/jax.ops.html). Generally
742 JAX requires functional code.
743 4. PRNGs are different and can be awkward, though for [good
744 reasons](https://github.com/google/jax/blob/master/design_notes/prng.md), and
745 non-reuse (linearity) is not yet checked.
746 5. NumPy's nan semantics aren't preserved on some backends
747
748 See [the notebook](https://colab.research.google.com/github/google/jax/blob/master/notebooks/Common_Gotchas_in_JAX.ipynb) for much more information.
749
750 ## Contributors
751
752 So far, JAX includes lots of help and [contributions](https://github.com/google/jax/graphs/contributors). In addition to the code contributions reflected on GitHub, JAX has benefitted substantially from the advice of
753 [Jamie Townsend](https://github.com/j-towns),
754 [Peter Hawkins](https://github.com/hawkinsp),
755 [Jonathan Ragan-Kelley](https://people.eecs.berkeley.edu/~jrk/),
756 [Alex Wiltschko](http://github.com/alexbw),
757 George Dahl,
758 [Stephan Hoyer](http://stephanhoyer.com/),
759 Sam Schoenholz,
760 [Eli Bendersky](https://github.com/eliben),
761 Zak Stone,
762 [Alexey Radul](https://github.com/axch),
763 Michael Isard,
764 Skye Wanderman-Milne,
765 and many others.
766
[end of README.md]
[start of docs/conf.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # -*- coding: utf-8 -*-
16 #
17 # Configuration file for the Sphinx documentation builder.
18 #
19 # This file does only contain a selection of the most common options. For a
20 # full list see the documentation:
21 # http://www.sphinx-doc.org/en/master/config
22
23 # -- Path setup --------------------------------------------------------------
24
25 # If extensions (or modules to document with autodoc) are in another directory,
26 # add these directories to sys.path here. If the directory is relative to the
27 # documentation root, use os.path.abspath to make it absolute, like shown here.
28 #
29 import os
30 import sys
31 sys.path.insert(0, os.path.abspath('..'))
32
33
34 # -- Project information -----------------------------------------------------
35
36 project = 'JAX'
37 copyright = '2019, Google LLC. NumPy and SciPy documentation are copyright the respective authors.'
38 author = 'The JAX authors'
39
40 # The short X.Y version
41 version = ''
42 # The full version, including alpha/beta/rc tags
43 release = ''
44
45
46 # -- General configuration ---------------------------------------------------
47
48 # If your documentation needs a minimal Sphinx version, state it here.
49 #
50 # needs_sphinx = '1.0'
51
52 # Add any Sphinx extension module names here, as strings. They can be
53 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
54 # ones.
55 extensions = [
56 'sphinx.ext.autodoc',
57 'sphinx.ext.autosummary',
58 'sphinx.ext.intersphinx',
59 'sphinx.ext.mathjax',
60 'sphinx.ext.napoleon',
61 'sphinx.ext.viewcode',
62 ]
63
64 intersphinx_mapping = {
65 'python': ('https://docs.python.org/3/', None),
66 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
67 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
68 }
69
70 # Add any paths that contain templates here, relative to this directory.
71 templates_path = ['_templates']
72
73 # The suffix(es) of source filenames.
74 # You can specify multiple suffix as a list of string:
75 #
76 # source_suffix = ['.rst', '.md']
77 source_suffix = '.rst'
78
79 # The master toctree document.
80 master_doc = 'index'
81
82 # The language for content autogenerated by Sphinx. Refer to documentation
83 # for a list of supported languages.
84 #
85 # This is also used if you do content translation via gettext catalogs.
86 # Usually you set "language" from the command line for these cases.
87 language = None
88
89 # List of patterns, relative to source directory, that match files and
90 # directories to ignore when looking for source files.
91 # This pattern also affects html_static_path and html_extra_path.
92 exclude_patterns = []
93
94 # The name of the Pygments (syntax highlighting) style to use.
95 pygments_style = None
96
97
98 autosummary_generate = True
99 napoleon_use_rtype = False
100
101 # -- Options for HTML output -------------------------------------------------
102
103 # The theme to use for HTML and HTML Help pages. See the documentation for
104 # a list of builtin themes.
105 #
106 html_theme = 'sphinx_rtd_theme'
107
108 # Theme options are theme-specific and customize the look and feel of a theme
109 # further. For a list of options available for each theme, see the
110 # documentation.
111 #
112 # html_theme_options = {}
113
114 # Add any paths that contain custom static files (such as style sheets) here,
115 # relative to this directory. They are copied after the builtin static files,
116 # so a file named "default.css" will overwrite the builtin "default.css".
117 html_static_path = ['_static']
118
119 # Custom sidebar templates, must be a dictionary that maps document names
120 # to template names.
121 #
122 # The default sidebars (for documents that don't match any pattern) are
123 # defined by theme itself. Builtin themes are using these templates by
124 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
125 # 'searchbox.html']``.
126 #
127 # html_sidebars = {}
128
129
130 # -- Options for HTMLHelp output ---------------------------------------------
131
132 # Output file base name for HTML help builder.
133 htmlhelp_basename = 'JAXdoc'
134
135
136 # -- Options for LaTeX output ------------------------------------------------
137
138 latex_elements = {
139 # The paper size ('letterpaper' or 'a4paper').
140 #
141 # 'papersize': 'letterpaper',
142
143 # The font size ('10pt', '11pt' or '12pt').
144 #
145 # 'pointsize': '10pt',
146
147 # Additional stuff for the LaTeX preamble.
148 #
149 # 'preamble': '',
150
151 # Latex figure (float) alignment
152 #
153 # 'figure_align': 'htbp',
154 }
155
156 # Grouping the document tree into LaTeX files. List of tuples
157 # (source start file, target name, title,
158 # author, documentclass [howto, manual, or own class]).
159 latex_documents = [
160 (master_doc, 'JAX.tex', 'JAX Documentation',
161 'The JAX authors', 'manual'),
162 ]
163
164
165 # -- Options for manual page output ------------------------------------------
166
167 # One entry per manual page. List of tuples
168 # (source start file, name, description, authors, manual section).
169 man_pages = [
170 (master_doc, 'jax', 'JAX Documentation',
171 [author], 1)
172 ]
173
174
175 # -- Options for Texinfo output ----------------------------------------------
176
177 # Grouping the document tree into Texinfo files. List of tuples
178 # (source start file, target name, title, author,
179 # dir menu entry, description, category)
180 texinfo_documents = [
181 (master_doc, 'JAX', 'JAX Documentation',
182 author, 'JAX', 'One line description of project.',
183 'Miscellaneous'),
184 ]
185
186
187 # -- Options for Epub output -------------------------------------------------
188
189 # Bibliographic Dublin Core info.
190 epub_title = project
191
192 # The unique identifier of the text. This can be a ISBN number
193 # or the project homepage.
194 #
195 # epub_identifier = ''
196
197 # A unique identification for the text.
198 #
199 # epub_uid = ''
200
201 # A list of files that should not be packed into the epub file.
202 epub_exclude_files = ['search.html']
203
204
205 # -- Extension configuration -------------------------------------------------
206
[end of docs/conf.py]
[start of examples/onnx2xla.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """An ONNX to XLA compiler by JAX-tracing a Numpy-backed ONNX interpreter."""
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 from cStringIO import StringIO
21 from functools import partial
22 import hashlib
23 import sys
24
25 import onnx
26 from onnx import numpy_helper
27 from onnx import onnx_pb2
28 from six.moves.urllib.request import urlopen
29
30 import jax.numpy as np
31 from jax import jit, grad
32 from jax import lax
33
34
35 def _asarray(proto):
36 return numpy_helper.to_array(proto).reshape(tuple(proto.dims))
37
38
39 attr_types = dict(onnx_pb2.AttributeProto.AttributeType.items())
40 attribute_handlers = {
41 attr_types['FLOAT']: lambda a: a.f,
42 attr_types['INT']: lambda a: a.i,
43 attr_types['STRING']: lambda a: a.s,
44 attr_types['TENSOR']: lambda a: _asarray(a.t),
45 attr_types['FLOATS']: lambda a: a.floats,
46 attr_types['INTS']: lambda a: a.ints,
47 attr_types['STRINGS']: lambda a: a.strings,
48 attr_types['TENSORS']: lambda a: [_asarray(x) for x in a.tensors],
49 }
50
51
52 def onnx_maxpool(x, kernel_shape, pads=None, strides=None):
53 """Numpy-backed implementation of ONNX MaxPool op."""
54 prefix = (1,) * (x.ndim - len(kernel_shape))
55 dims = prefix + tuple(kernel_shape)
56 pads = tuple(pads) if pads else [0] * len(kernel_shape)
57 strides = (prefix + tuple(strides)) if strides else [1] * len(kernel_shape)
58 return [lax.reduce_window(x, -np.inf, lax.max, dims, strides, 'VALID')]
59
60
61 def onnx_conv(x, w, b=0, group=1, kernel_shape=None, pads=None, strides=None,
62 dilations=None, auto_pad=None):
63 """Numpy-backed implementation of ONNX Conv op."""
64 assert group == 1
65 kernel_shape = kernel_shape or w.shape
66 strides = strides or [1] * (w.ndim - 2)
67 if auto_pad:
68 auto_pad = 'SAME' if auto_pad.startswith('SAME') else 'VALID'
69 pads = lax.padtype_to_pads(x.shape[2:], w.shape[2:], strides, auto_pad)
70 else:
71 pads = pads or [0] * (w.ndim - 2)
72 lhs_dilation = [1] * (w.ndim - 2)
73 rhs_dilation = dilations or [1] * (w.ndim - 2)
74 return [lax.conv_with_general_padding(x, w, strides, pads,
75 lhs_dilation, rhs_dilation) + b]
76
77
78 def onnx_add(a, b, axis=None, broadcast=True):
79 """Numpy-backed implementation of ONNX Add op."""
80 if broadcast:
81     axis = (a.ndim - b.ndim) if axis is None else axis % a.ndim
82 assert a.shape[axis:][:b.ndim] == b.shape
83 b_shape = np.ones(a.ndim, dtype='int64').copy()
84 b_shape[axis:axis + b.ndim] = b.shape
85 b = np.reshape(b, b_shape)
86 return [a + b]
87
88
89 onnx_ops = {
90 'Add': onnx_add,
91 'Constant': lambda value: [value],
92 'Conv': onnx_conv,
93 'MatMul': lambda x, y: [np.matmul(x, y)],
94 'MaxPool': onnx_maxpool,
95 'Relu': lambda x: [np.maximum(x, 0)],
96 'Reshape': lambda x, shape: [np.reshape(x, shape)],
97 }
98
99
100 def interpret_onnx(graph, *args):
101 vals = dict({n.name: a for n, a in zip(graph.input, args)},
102 **{n.name: _asarray(n) for n in graph.initializer})
103 for node in graph.node:
104 args = (vals[name] for name in node.input)
105 attrs = {a.name: attribute_handlers[a.type](a) for a in node.attribute}
106 outputs = onnx_ops[node.op_type](*args, **attrs)
107 for name, output in zip(node.output, outputs):
108 vals[name] = output
109 return [vals[n.name] for n in graph.output]
110
111
112 if __name__ == "__main__":
113 # It seems that there are several ONNX proto versions (you had one job!) but
114 # this implementation works with at least this one mnist example file.
115 url = ('https://github.com/onnx/models/blob/'
116 '81c4779096d1205edd0b809e191a924c58c38fef/'
117 'mnist/model.onnx?raw=true')
118 download = urlopen(url).read()
119 if hashlib.md5(download).hexdigest() != 'bc8ad9bd19c5a058055dc18d0f089dad':
120 print("onnx file checksum mismatch")
121 sys.exit(1)
122 model = onnx.load(StringIO(download))
123
124 predict = lambda inputs: interpret_onnx(model.graph, inputs)[0]
125
126 # Run inference in Numpy-backed interpreter
127 print("interpreted:")
128 print(predict(np.ones((1, 1, 28, 28))))
129
130 # JIT compile to XLA device, run inference on device
131 compiled_predict = jit(predict)
132 print("compiled:")
133 print(compiled_predict(np.ones((1, 1, 28, 28))))
134
135 # The interpreter is differentiable too! Even the compiled one:
136 fun = lambda inputs: np.sum(compiled_predict(inputs))
137 print("a derivative with respect to inputs:")
138 print(grad(fun)(np.ones((1, 1, 28, 28)))[..., :3, :3])
139
140
[end of examples/onnx2xla.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
google/jax
|
1feefd10ac42189ff9333a51138c67e67457d0bd
|
Easy api for custom primitives and vjps
JAX supports custom primitives and vjps, just like Autograd did. Improvements:
1) add this to documentation
2) add a minimal example of this in the examples section
3) add a wrapper function if appropriate?
|
Thanks.
I am most concerned about adding wrappers for external libraries that have already been implemented on other computing backends. We use a set of operators for particle mesh simulation and FFTs.
When we talk about custom primitives, we also need to tell the auto-differentiator how to pick the tensor operators that act on the operands. How is this currently done in JAX?
I believe in autograd this was done with a vector space object that knows how to serialize any operand into a numpy array, after which numpy functions are used for inner products etc. This may not always be desirable -- e.g. if the data has to be partitioned across several MPI ranks, then serializing it onto a single rank is not even going to fit into memory. We weren't able to use autograd because of this.
Another thing to worry about is whether these customized primitives support higher-order differentiation. If the vjp function itself needs to be an external routine (not a set of primitives), then higher-order differentiation and automatic jvp are probably both broken? Is this a supported case?
Not sure if this is the right place: how can I define custom vmap primitives; in my case I am calling an external function that already supports batches and I want to vmap the code surrounding the call of this external primitive.
@jonasrauber that question is from a while ago, but the short answer is that `custom_transforms` (as in `from jax import custom_transforms`) is for doing this. To be improved and documented...
~~Just sketched out a custom VJPs API last night: https://gist.github.com/mattjj/2ba580930472e8e04c1759737268af92~~
~~The example there is trivial, and there's a bit more bookkeeping to be done to handle general code. But our initial thinking is that we can have a `defvjp` and a `defvjp_all`, to be used with `@custom_transforms`, where the former lets you specify a vjp function for each positional argument and the latter lets you specify a vjp for all arguments at once. (Maybe we can also provide a `defvjp_all_staged` if you want to compute some reduced residual information on the forward pass, rather than saving all the argument values.)~~
~~The funny bookkeeping in that gist is due to the fact that in JAX we usually don't specify VJPs (reverse-mode rules) directly, and instead only specify forward-mode rules; JAX generates reverse-mode autodiff through a composition of forward-mode, partial evaluation, and transposition transformations. But if you want to specify a VJP rule directly, that gist shows a trick to do it.~~
That was all a work-in-progress. We've got something better now!
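As a concrete reference point, here is a minimal usage sketch of the `custom_transforms` / `defvjp` / `defvjp_all` API being discussed; it simply mirrors the doctest examples in the patch below, so treat the exact signatures as coming from that patch rather than from a released JAX version:
```python
import jax
import jax.numpy as np  # the examples in this repo use `np` for jax.numpy

# defvjp_all: return the primal value plus a VJP closure over all arguments.
@jax.custom_transforms
def f(x):
    return np.sin(x ** 2)

jax.defvjp_all(f, lambda x: (np.sin(x ** 2), lambda g: (g * x,)))
print(jax.grad(f)(3.))  # 3.0, following the custom rule instead of the true derivative

# defvjp: one rule per positional argument, each with signature (g, ans, *primals).
@jax.custom_transforms
def h(x, y):
    return x * y

jax.defvjp(h, lambda g, ans, x, y: g * y, lambda g, ans, x, y: g * x)
print(jax.grad(h, argnums=(0, 1))(3., 4.))  # (4.0, 3.0)
```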
|
2019-06-05T19:02:34Z
|
<patch>
diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -26,6 +26,7 @@
from __future__ import division
from __future__ import print_function
+import collections
import itertools
import operator as op
import os
@@ -38,6 +39,7 @@
from . import core
from . import linear_util as lu
+from . import ad_util
from .core import pack, eval_jaxpr
from .api_util import (pytree_fun_to_jaxtupletree_fun, pytree_to_jaxtupletree,
pytree_fun_to_flatjaxtuple_fun, apply_jaxtree_fun, wraps,
@@ -71,8 +73,8 @@ def jit(fun, static_argnums=()):
Args:
fun: Function to be jitted. Should be a pure function, as side-effects may
- only be executed once. Its positional arguments and return value should be
- arrays, scalars, or standard Python containers (tuple/list/dict) thereof.
+ only be executed once. Its arguments and return value should be arrays,
+ scalars, or (nested) standard Python containers (tuple/list/dict) thereof.
Positional arguments indicated by `static_argnums` can be anything at all,
provided they are hashable and have an equality operation defined. Static
@@ -952,25 +954,457 @@ def _valid_jaxtype(arg):
return True
+class CustomTransformsFunction(object):
+ def __init__(self, fun, prim):
+ self.fun = fun
+ self.prim = prim
+ wraps(fun)(self)
+
+ def __repr__(self):
+ return '<jax.custom_transforms function {fun}>'.format(fun=self.__name__)
+
+ def __call__(self, *args, **kwargs):
+ jax_args, in_trees = unzip2(map(pytree_to_jaxtupletree, args))
+ jax_kwargs, kwargs_tree = pytree_to_jaxtupletree(kwargs)
+ out_tree = lu.Store()
+ ans = self.prim.bind(jax_kwargs, *jax_args, kwargs_tree=kwargs_tree,
+ in_trees=in_trees, out_tree=out_tree)
+ return build_tree(out_tree.val, ans)
+
def custom_transforms(fun):
- name = getattr(fun, '__name__', '<unnamed user primitive>')
+ """Wraps a function so that its transformation behavior can be controlled.
+
+ A primary use case of ``custom_transforms`` is defining custom VJP rules (aka
+ custom gradients) for a Python function, while still supporting other
+ transformations like ``jax.jit`` and ``jax.vmap``. Custom differentiation
+ rules can be supplied using the ``jax.defjvp`` and ``jax.defvjp`` functions.
+
+ The ``custom_transforms`` decorator wraps ``fun`` so that its transformation
+ behavior can be overridden, but not all transformation rules need to be
+ specified manually. The default behavior is retained for any non-overridden
+ rules.
+
+ Args:
+ fun: a Python callable. Must be functionally pure. Its arguments and return
+ value should be arrays, scalars, or (nested) standard Python containers
+ (tuple/list/dict) thereof.
+
+ Returns:
+ A Python callable with the same input/output and transformation behavior as
+ ``fun``, but for which custom transformation rules can be supplied, e.g.
+ using ``jax.defvjp``.
+
+ For example:
+
+ >>> @jax.custom_transforms
+ ... def f(x):
+ ... return np.sin(x ** 2)
+ ...
+ >>> print(f(3.))
+ 0.4121185
+ >>> print(jax.grad(f)(3.))
+ -5.4667816
+ >>> jax.defvjp(f, lambda g, x: g * x)
+ >>> print(jax.grad(f)(3.))
+ 3.0
+ """
+ name = getattr(fun, '__name__', '<unnamed custom_transforms primitive>')
fun_p = core.Primitive(name)
- fun_p.def_impl(fun)
- # generic transformation implementations that rely on traceability of `fun`
- fun_p.def_abstract_eval(partial(pe.abstract_eval_fun, fun))
- xla.translations[fun_p] = partial(xla.lower_fun, fun)
- ad.primitive_jvps[fun_p] = partial(jvp, fun)
- # TODO(mattjj): batching
+ def fun_impl(jax_kwargs, *jax_args, **params):
+ args = map(build_tree, params.pop('in_trees'), jax_args)
+ kwargs = build_tree(params.pop('kwargs_tree'), jax_kwargs)
+ pytree_out = fun(*args, **kwargs)
+ out, out_tree = pytree_to_jaxtupletree(pytree_out)
+ params.pop('out_tree').store(out_tree) # linear_util style side effect
+ assert not params
+ return out
+ fun_p.def_impl(fun_impl)
+
+ def fun_jvp(primals, tangents, **params):
+ return ad.jvp(lu.wrap_init(fun_impl, params)).call_wrapped(primals, tangents)
+ ad.primitive_jvps[fun_p] = fun_jvp
+
+ def fun_batch(batched_args, batch_dims, **params):
+ out = batching.batch(lu.wrap_init(fun_impl, params), batched_args, batch_dims, 0)
+ return out, 0
+ batching.primitive_batchers[fun_p] = fun_batch
+
+ staged_fun_p = core.Primitive('staged_' + name)
+ def fun_partial_eval(trace, *tracers, **params):
+ tracers = tuple(map(trace.instantiate_const, tracers))
+ avals = [t.aval for t in tracers]
+ pvals_in = [pe.PartialVal((a, core.unit)) for a in avals]
+ jaxpr, pval_out, consts = pe.trace_to_jaxpr(lu.wrap_init(fun_impl, params),
+ pvals_in, instantiate=True)
+ consts = trace.new_instantiated_const(core.pack(consts))
+ eqn = pe.JaxprEqn((consts,) + tracers, None, staged_fun_p, (), False, False,
+ dict(params, jaxpr=jaxpr))
+ return pe.JaxprTracer(trace, pval_out, eqn)
+ pe.custom_partial_eval_rules[fun_p] = fun_partial_eval
+
+ def staged_fun_translation(c, xla_consts, *xla_args, **params):
+ consts_shapes = tuple(c.GetShape(xla_consts).tuple_shapes())
+ xla_consts = tuple(xla.xla_destructure(c, xla_consts))
+ arg_shapes = map(c.GetShape, xla_args)
+ built_c = xla.jaxpr_computation(params['jaxpr'], (), consts_shapes, *arg_shapes)
+ return c.Call(built_c, xla_consts + xla_args)
+ xla.translations[staged_fun_p] = staged_fun_translation
+
+ return CustomTransformsFunction(fun, fun_p)
+
+def _check_custom_transforms_type(name, fun):
+ if type(fun) is not CustomTransformsFunction:
+ msg = ("{} requires a custom_transforms function as its first argument, "
+ "but got type {}.")
+ raise TypeError(msg.format(name, type(fun)))
+
+def defjvp_all(fun, custom_jvp):
+ """Define a custom JVP rule for a ``custom_transforms`` function.
+
+ If ``fun`` represents a function with signature ``a -> b``, then
+ ``custom_jvp`` represents a function with signature ``a -> T a -> (b, T b)``,
+ where we use ``T x`` to represent a tangent type for the type ``x``.
+
+ In more detail, ``custom_jvp`` must take two arguments, both tuples of length
+ equal to the number of positional arguments to ``fun``. The first argument to
+ ``custom_jvp`` represents the input primal values, and the second represents
+ the input tangent values. ``custom_jvp`` must return a pair where the first
+ element represents the output primal value and the second element represents
+ the output tangent value.
+
+ Defining a custom JVP rule also affects the default VJP rule, which is derived
+ from the JVP rule automatically via transposition.
- @wraps(fun)
- def traceable(*args, **kwargs):
- # TODO(mattjj): pytrees to jaxtupletrees
- return fun_p.bind(*args, **kwargs)
- traceable.primitive = fun_p
+ Args:
+ fun: a custom_transforms function.
+ custom_jvp: a Python callable specifying the JVP rule, taking two tuples as
+ arguments specifying the input primal values and tangent values,
+ respectively. The tuple elements can be arrays, scalars, or (nested)
+ standard Python containers (tuple/list/dict) thereof. The output must be a
+ pair representing the primal output and tangent output, which can be
+ arrays, scalars, or (nested) standard Python containers. Must be
+ functionally pure.
- return traceable
+ Returns:
+ None. A side-effect is that ``fun`` is associated with the JVP rule
+ specified by ``custom_jvp``.
+ For example:
+
+ >>> @jax.custom_transforms
+ ... def f(x):
+ ... return np.sin(x ** 2)
+ ...
+ >>> print(f(3.))
+ 0.4121185
+ >>> out_primal, out_tangent = jax.jvp(f, (3.,), (2.,))
+ >>> print(out_primal)
+ 0.4121185
+ >>> print(out_tangent)
+ -10.933563
+ >>> jax.defjvp_all(f, lambda ps, ts: (np.sin(ps[0] ** 2), 8. * ts[0]))
+ >>> out_primal, out_tangent = jax.jvp(f, (3.,), (2.,))
+ >>> print(out_primal)
+ 0.4121185
+ >>> print(out_tangent)
+ 16.0
+ """
+ _check_custom_transforms_type("defjvp_all", fun)
+ def custom_transforms_jvp(primals, tangents, **params):
+ jax_kwargs, jax_args = primals[0], primals[1:]
+ _, jax_args_dot = tangents[0], tangents[1:]
+ if jax_kwargs:
+ msg = ("defjvp_all requires the corresponding custom_transforms function "
+ "not to be called with keyword arguments.")
+ raise ValueError(msg)
+ in_trees = params['in_trees']
+ args = tuple(map(build_tree, in_trees, jax_args))
+ args_dot = tuple(map(build_tree, in_trees, jax_args_dot))
+ pytree_out, pytree_out_dot = custom_jvp(args, args_dot)
+ out, out_tree = pytree_to_jaxtupletree(pytree_out)
+ out_dot, out_tree2 = pytree_to_jaxtupletree(pytree_out_dot)
+ if out_tree != out_tree2:
+ msg = ("custom jvp rule returned different tree structures for primals "
+ "and tangents, but they must be equal: {} vs {}.")
+ raise TypeError(msg.format(out_tree, out_tree2))
+ params['out_tree'].store(out_tree) # linear_util style side effect
+ return out, out_dot
+ ad.primitive_jvps[fun.prim] = custom_transforms_jvp
+
+def defjvp(fun, *jvprules):
+ """Definine JVP rules for each argument separately.
+
+ This function is a convenience wrapper around ``jax.defjvp_all`` for
+ separately defining JVP rules for each of the function's arguments. This
+ convenience wrapper does not provide a mechanism for depending on anything
+ other than the function arguments and its primal output value, though
+ depending on intermediate results is possible using ``jax.defjvp_all``.
+
+ The signature of each component JVP rule is ``lambda g, ans, *primals: ...``
+ where ``g`` represents the tangent of the corresponding positional argument,
+ ``ans`` represents the output primal, and ``*primals`` represents all the
+ primal positional arguments.
+
+ Defining a custom JVP rule also affects the default VJP rule, which is derived
+ from the JVP rule automatically via transposition.
+
+ Args:
+ fun: a custom_transforms function.
+ *jvprules: a sequence of functions or Nones specifying the JVP rule for each
+ corresponding positional argument. When an element is None, it indicates
+ that the Jacobian from the corresponding input to the output is zero.
+
+ Returns:
+ None. A side-effect is that ``fun`` is associated with the JVP rule
+ specified by ``*jvprules``.
+
+ For example:
+
+ >>> @jax.custom_transforms
+ ... def f(x):
+ ... return np.sin(x ** 2)
+ ...
+ >>> print(f(3.))
+ 0.4121185
+ >>> out_primal, out_tangent = jax.jvp(f, (3.,), (2.,))
+ >>> print(out_primal)
+ 0.4121185
+ >>> print(out_tangent)
+ -10.933563
+ >>> jax.defjvp(f, lambda g, ans, x: 8. * g + ans)
+ >>> out_primal, out_tangent = jax.jvp(f, (3.,), (2.,))
+ >>> print(out_primal)
+ 0.4121185
+ >>> print(out_tangent)
+ 16.412119
+ """
+ _check_custom_transforms_type("defjvp", fun)
+ def custom_jvp(primals, tangents):
+ ans = fun(*primals)
+ tangents_out = [rule(t, ans, *primals) for rule, t in zip(jvprules, tangents)
+ if rule is not None and t is not ad_util.zero]
+ return ans, reduce(ad.add_tangents, tangents_out, ad_util.zero)
+ defjvp_all(fun, custom_jvp)
+
+def defvjp_all(fun, custom_vjp):
+ """Define a custom VJP rule for a ``custom_transforms`` function.
+
+ If ``fun`` represents a function with signature ``a -> b``, then
+ ``custom_vjp`` represents a function with signature ``a -> (b, CT b -> CT a)``
+ where we use ``CT x`` to represent a cotangent type for the type ``x``. That
+ is, ``custom_vjp`` should take the same arguments as ``fun`` and return a pair
+ where the first element represents the primal value of ``fun`` applied to the
+ arguments, and the second element is a VJP function that maps from output
+ cotangents to input cotangents, returning a tuple with length equal to the
+ number of positional arguments supplied to ``fun``.
+
+ The VJP function returned as the second element of the output of
+ ``custom_vjp`` can close over intermediate values computed when evaluating the
+ primal value of ``fun``. That is, use lexical closure to share work between
+ the forward pass and the backward pass of reverse-mode automatic
+ differentiation.
+
+ See also ``jax.custom_gradient``.
+
+ Args:
+ fun: a custom_transforms function.
+ custom_vjp: a Python callable specifying the VJP rule, taking the same
+      arguments as ``fun`` and returning a pair where the first element is the
+ value of ``fun`` applied to the arguments and the second element is a
+ Python callable representing the VJP map from output cotangents to input
+ cotangents. The returned VJP function must accept a value with the same
+ shape as the value of ``fun`` applied to the arguments and must return a
+ tuple with length equal to the number of positional arguments to ``fun``.
+ Arguments can be arrays, scalars, or (nested) standard Python containers
+ (tuple/list/dict) thereof. Must be functionally pure.
+
+ Returns:
+ None. A side-effect is that ``fun`` is associated with the VJP rule
+ specified by ``custom_vjp``.
+
+ For example:
+
+ >>> @jax.custom_transforms
+ ... def f(x):
+ ... return np.sin(x ** 2)
+ ...
+ >>> print(f(3.))
+ 0.4121185
+ >>> print(jax.grad(f)(3.))
+ -5.4667816
+ >>> jax.defvjp_all(f, lambda x: (np.sin(x ** 2), lambda g: (g * x,)))
+ >>> print(f(3.))
+ 0.4121185
+ >>> print(jax.grad(f)(3.))
+ 3.0
+
+ An example with a function on two arguments, so that the VJP function must
+ return a tuple of length two:
+
+ >>> @jax.custom_transforms
+ ... def f(x, y):
+ ... return x * y
+ ...
+ >>> jax.defvjp_all(f, lambda x, y: (x * y, lambda g: (y, x)))
+ >>> print(f(3., 4.))
+ 12.0
+ >>> print(jax.grad(f, argnums=(0, 1))(3., 4.))
+ (4.0, 3.0)
+ """
+ _check_custom_transforms_type("defvjp_all", fun)
+ def custom_transforms_vjp(jax_kwargs, *jax_args, **params):
+ if jax_kwargs:
+ msg = ("defvjp_all requires the corresponding custom_transforms function "
+ "not to be called with keyword arguments.")
+ raise ValueError(msg)
+ args = map(build_tree, params['in_trees'], jax_args)
+ pytree_out, vjp_pytree = custom_vjp(*args)
+ out, out_tree = pytree_to_jaxtupletree(pytree_out)
+ params['out_tree'].store(out_tree) # linear_util style side effect
+ def vjp_pytree_(ct):
+ args_cts = tuple(vjp_pytree(ct))
+ if len(args_cts) != len(params['in_trees']):
+ msg = ("custom VJP function must return a tuple of length equal to the "
+ "number of positional arguments to the function being "
+ "differentiated: expected {}, got {}")
+ raise TypeError(msg.format(len(params['in_trees']), len(args_cts)))
+ return ({},) + args_cts
+ vjp, _ = pytree_fun_to_jaxtupletree_fun(lu.wrap_init(vjp_pytree_), (out_tree,))
+ return out, vjp.call_wrapped
+ ad.defvjp_all(fun.prim, custom_transforms_vjp)
+
+def defvjp(fun, *vjprules):
+ """Define VJP rules for each argument separately.
+
+ This function is a convenience wrapper around ``jax.defvjp_all`` for
+ separately defining VJP rules for each of the function's arguments. This
+ convenience wrapper does not provide a mechanism for depending on anything
+ other than the function arguments and its primal output value, though
+ depending on intermediate results is possible using ``jax.defvjp_all``.
+
+ The signature of each component VJP rule is ``lambda g, ans, *primals: ...``
+ where ``g`` represents the output cotangent, ``ans`` represents the output
+ primal, and ``*primals`` represents all the primal positional arguments.
+
+ Args:
+ fun: a custom_transforms function.
+ *vjprules: a sequence of functions or Nones specifying the VJP rule for each
+ corresponding positional argument. When an element is None, it indicates
+ that the Jacobian from the corresponding input to the output is zero.
+
+ Returns:
+ None. A side-effect is that ``fun`` is associated with the VJP rule
+ specified by ``*vjprules``.
+
+ For example:
+
+ >>> @jax.custom_transforms
+ ... def f(x, y):
+ ... return np.sin(x ** 2 + y)
+ ...
+ >>> print(f(3., 4.))
+ 0.42016703
+ >>> print(jax.grad(f)(3., 4.))
+ 5.4446807
+ >>> print(jax.grad(f, 1)(3., 4.))
+ 0.9074468
+ >>> jax.defvjp(f, None, lambda g, ans, x, y: g + x + y + ans)
+ >>> print(jax.grad(f)(3., 4.))
+ 0.0
+ >>> print(jax.grad(f, 1)(3., 4.))
+ 8.420167
+ """
+ _check_custom_transforms_type("defvjp", fun)
+ def custom_vjp(*primals):
+ ans = fun(*primals)
+ # TODO(mattjj): avoid instantiating zeros?
+ vjpfun = lambda ct: [vjp(ct, ans, *primals) if vjp else ad_util.zeros_like_jaxval(x)
+ for x, vjp in zip(primals, vjprules)]
+ return ans, vjpfun
+ defvjp_all(fun, custom_vjp)
+
+def custom_gradient(fun):
+ """Convenience function for defining custom VJP rules (aka custom gradients).
+
+ While the canonical way to define custom VJP rules is via ``jax.defvjp_all``
+ and its convenience wrappers, the ``custom_gradient`` convenience wrapper
+ follows TensorFlow's ``tf.custom_gradient`` API. The difference here is that
+ ``custom_gradient`` can be used as a decorator on one function that returns
+ both the primal value (representing the output of the mathematical function to
+ be differentiated) and the VJP (gradient) function.
+
+ See https://www.tensorflow.org/api_docs/python/tf/custom_gradient.
+
+ If the mathematical function to be differentiated has type signature
+ ``a -> b``, then the Python callable ``fun`` should have signature
+ ``a -> (b, CT b -> CT a)`` where we use ``CT x`` to denote a cotangent type
+ for ``x``. See the example below. That is, ``fun`` should return a pair where
+ the first element represents the value of the mathematical function to be
+ differentiated and the second element is a function that represents the custom
+ VJP rule.
+
+ The custom VJP function returned as the second element of the output of ``fun``
+ can close over intermediate values computed when evaluating the function to be
+ differentiated. That is, use lexical closure to share work between the forward
+ pass and the backward pass of reverse-mode automatic differentiation.
+
+ Args:
+ fun: a Python callable specifying both the mathematical function to be
+ differentiated and its reverse-mode differentiation rule. It should return
+ a pair consisting of an output value and a Python callable that represents
+ the custom gradient function.
+
+ Returns:
+ A Python callable with signature ``a -> b``, i.e. that returns the output
+ value specified by the first element of ``fun``'s output pair. A side effect
+ is that under-the-hood ``jax.defvjp_all`` is called to set up the returned
+ Python callable with the custom VJP rule specified by the second element
+ of ``fun``'s output pair.
+
+ For example:
+
+ >>> @jax.custom_gradient
+ ... def f(x):
+ ... return x ** 2, lambda g: (g * x,)
+ ...
+ >>> print(f(3.))
+ 9.0
+ >>> print(jax.grad(f)(3.))
+ 3.0
+
+ An example with a function on two arguments, so that the VJP function must
+ return a tuple of length two:
+
+ >>> @jax.custom_gradient
+ ... def f(x, y):
+ ... return x * y, lambda g: (y, x)
+ ...
+ >>> print(f(3., 4.))
+ 12.0
+ >>> print(jax.grad(f, argnums=(0, 1))(3., 4.))
+ (4.0, 3.0)
+ """
+ def primal_fun(*args, **kwargs):
+ ans, _ = fun(*args, **kwargs)
+ return ans
+ primal_fun = custom_transforms(primal_fun)
+ defvjp_all(primal_fun, fun)
+ return primal_fun
+
+
+def jarrett(fun):
+ new_fun = custom_transforms(fun)
+
+ def elementwise_jvp(primals, tangents):
+ pushfwd = partial(jvp, fun, primals)
+ y, jacs = vmap(pushfwd, out_axes=(None, 0))(_elementwise_std_basis(tangents))
+ flat_tangents, _ = tree_flatten(tangents)
+ out_tangent = sum([t * jac for t, jac in zip(flat_tangents, jacs)])
+ return y, out_tangent
+ defjvp_all(new_fun, elementwise_jvp)
+
+ return new_fun
def _elementwise_std_basis(pytree):
leaves, _ = tree_flatten(pytree)
@@ -987,19 +1421,6 @@ def _elementwise_std_basis(pytree):
for j in range(arity)]) for i in range(arity)])
return _unravel_array_into_pytree(pytree, 1, basis_array)
-def jarrett(fun):
- new_fun = custom_transforms(fun)
-
- def elementwise_jvp(primals, tangents):
- pushfwd = partial(jvp, fun, primals)
- y, jacs = vmap(pushfwd, out_axes=(None, 0))(_elementwise_std_basis(tangents))
- flat_tangents, _ = tree_flatten(tangents)
- out_tangent = sum([t * jac for t, jac in zip(flat_tangents, jacs)])
- return y, out_tangent
- ad.primitive_jvps[new_fun.primitive] = elementwise_jvp
-
- return new_fun
-
# This function mostly exists for making slides about JAX.
def _make_graphviz(fun):
diff --git a/jax/interpreters/ad.py b/jax/interpreters/ad.py
--- a/jax/interpreters/ad.py
+++ b/jax/interpreters/ad.py
@@ -397,18 +397,19 @@ def add_tangents(x, y):
def defvjp_all(prim, custom_vjp):
+ # see https://github.com/google/jax/pull/636
name = prim.name
- def fun_jvp(xs, ts):
+ def fun_jvp(xs, ts, **params):
ts = map(instantiate_zeros, xs, ts) # TODO(mattjj): avoid instantiation?
- primal_out, tangent_out = fun_jvp_p.bind(pack(xs), pack(ts))
+ primal_out, tangent_out = fun_jvp_p.bind(pack(xs), pack(ts), **params)
return primal_out, tangent_out
primitive_jvps[prim] = fun_jvp
fun_jvp_p = core.Primitive('{name}_jvp'.format(name=name))
- def fun_jvp_partial_eval(trace, *tracers):
+ def fun_jvp_partial_eval(trace, *tracers, **params):
primals_tracer, tangents_tracer = tracers
- primal_out, vjp_py = custom_vjp(*primals_tracer)
+ primal_out, vjp_py = custom_vjp(*primals_tracer, **params)
in_aval = raise_to_shaped(get_aval(primal_out))
ct_pval = pe.PartialVal((in_aval, core.unit))
diff --git a/jax/interpreters/partial_eval.py b/jax/interpreters/partial_eval.py
--- a/jax/interpreters/partial_eval.py
+++ b/jax/interpreters/partial_eval.py
@@ -211,9 +211,10 @@ def partial_eval_wrapper(avals, *consts):
def abstract_eval_fun(fun, *avals, **params):
- pvs_in = [PartialVal((a, unit)) for a in avals]
- _, pvout, _ = trace_to_jaxpr(lu.wrap_init(fun, params), pvs_in, instantiate=True)
- aval_out, _ = pvout
+ pvals_in = [PartialVal((a, unit)) for a in avals]
+ _, pval_out, _ = trace_to_jaxpr(lu.wrap_init(fun, params), pvals_in,
+ instantiate=True)
+ aval_out, _ = pval_out
assert isinstance(aval_out, AbstractValue) # instantiate=True
return aval_out
diff --git a/jax/scipy/special.py b/jax/scipy/special.py
--- a/jax/scipy/special.py
+++ b/jax/scipy/special.py
@@ -20,8 +20,7 @@
import scipy.special as osp_special
from .. import lax
-from ..api import custom_transforms
-from ..interpreters import ad, batching
+from ..api import custom_transforms, defjvp2
from ..numpy import lax_numpy as np
from ..numpy.lax_numpy import (_wraps, asarray, _reduction_dims, _constant_like,
_promote_args_like)
@@ -29,32 +28,32 @@
@_wraps(osp_special.gammaln)
def gammaln(x):
- x, = _promote_args_like(osp_special.gammaln, x)
- return lax.lgamma(x)
+ x, = _promote_args_like(osp_special.gammaln, x)
+ return lax.lgamma(x)
@_wraps(osp_special.digamma)
def digamma(x):
- x, = _promote_args_like(osp_special.digamma, x)
- return lax.digamma(x)
+ x, = _promote_args_like(osp_special.digamma, x)
+ return lax.digamma(x)
@_wraps(osp_special.erf)
def erf(x):
- x, = _promote_args_like(osp_special.erf, x)
- return lax.erf(x)
+ x, = _promote_args_like(osp_special.erf, x)
+ return lax.erf(x)
@_wraps(osp_special.erfc)
def erfc(x):
- x, = _promote_args_like(osp_special.erfc, x)
- return lax.erfc(x)
+ x, = _promote_args_like(osp_special.erfc, x)
+ return lax.erfc(x)
@_wraps(osp_special.erfinv)
def erfinv(x):
- x, = _promote_args_like(osp_special.erfinv, x)
- return lax.erf_inv(x)
+ x, = _promote_args_like(osp_special.erfinv, x)
+ return lax.erf_inv(x)
@_wraps(osp_special.logit)
@@ -62,8 +61,7 @@ def erfinv(x):
def logit(x):
x = asarray(x)
return lax.log(lax.div(x, lax.sub(lax._const(x, 1), x)))
-ad.defjvp2(logit.primitive, lambda g, ans, x: g / (x * (1 - x)))
-batching.defvectorized(logit.primitive)
+defjvp2(logit, lambda g, ans, x: g / (x * (1 - x)))
@_wraps(osp_special.expit)
@@ -72,8 +70,7 @@ def expit(x):
x = asarray(x)
one = lax._const(x, 1)
return lax.div(one, lax.add(one, lax.exp(lax.neg(x))))
-ad.defjvp2(expit.primitive, lambda g, ans, x: g * ans * (lax._const(ans, 1) - ans))
-batching.defvectorized(expit.primitive)
+defjvp2(expit, lambda g, ans, x: g * ans * (lax._const(ans, 1) - ans))
@_wraps(osp_special.logsumexp)
</patch>
|
[]
|
[]
| |||
docker__compose-4721
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support unicode characters in -f paths
On Ubuntu 16.04:
```
$ docker-compose -f 就吃饭/docker-compose.yml config
/home/joffrey/work/compose/compose/config/config.py:234: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if filenames == ['-']:
Traceback (most recent call last):
File "/home/joffrey/.envs/compose/bin/docker-compose", line 9, in <module>
load_entry_point('docker-compose==1.11.0.dev0', 'console_scripts', 'docker-compose')()
File "/home/joffrey/work/compose/compose/cli/main.py", line 64, in main
command()
File "/home/joffrey/work/compose/compose/cli/main.py", line 110, in perform_command
handler(command, options, command_options)
File "/home/joffrey/work/compose/compose/cli/main.py", line 305, in config
compose_config = get_config_from_options(self.project_dir, config_options)
File "/home/joffrey/work/compose/compose/cli/command.py", line 46, in get_config_from_options
config.find(base_dir, config_path, environment)
File "/home/joffrey/work/compose/compose/config/config.py", line 242, in find
filenames = [os.path.join(base_dir, f) for f in filenames]
File "/home/joffrey/.envs/compose/lib/python2.7/posixpath.py", line 73, in join
path += '/' + b
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 1: ordinal not in range(128)
```
On Windows:
```
docker-compose -f "C:\Users\husun\documents\visual studio 2017\Projects\测试中文\docker-compose.yml" up -d --build
ERROR: compose.cli.main.main: .IOError: [Errno 22] invalid mode ('r') or filename: 'C:\\Users\\husun\\documents\\visual studio 2017\\Projects\\????\\docker-compose.yml'
```
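For context, the Ubuntu traceback boils down to Python 2's `os.path.join` mixing a unicode `base_dir` (the Compose modules import `unicode_literals`) with the raw UTF-8 bytes that arrive via `sys.argv`. A standalone sketch of that failure (illustrative only; the variable names are not taken from Compose):
```python
# Python 2 semantics: bytes from sys.argv meet a unicode base_dir.
import os

base_dir = u'.'  # unicode, because the Compose code uses unicode_literals
config_file = '\xe5\xb0\xb1\xe5\x90\x83\xe9\xa5\xad/docker-compose.yml'  # "就吃饭/..." as raw UTF-8 bytes

os.path.join(base_dir, config_file)
# -> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 1,
#    the same error raised inside posixpath.join in the traceback above.
```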
</issue>
<code>
[start of README.md]
1 Docker Compose
2 ==============
3 
4
5 Compose is a tool for defining and running multi-container Docker applications.
6 With Compose, you use a Compose file to configure your application's services.
7 Then, using a single command, you create and start all the services
8 from your configuration. To learn more about all the features of Compose
9 see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
10
11 Compose is great for development, testing, and staging environments, as well as
12 CI workflows. You can learn more about each case in
13 [Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
14
15 Using Compose is basically a three-step process.
16
17 1. Define your app's environment with a `Dockerfile` so it can be
18 reproduced anywhere.
19 2. Define the services that make up your app in `docker-compose.yml` so
20    they can be run together in an isolated environment.
21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
22
23 A `docker-compose.yml` looks like this:
24
25 version: '2'
26
27 services:
28 web:
29 build: .
30 ports:
31 - "5000:5000"
32 volumes:
33 - .:/code
34 redis:
35 image: redis
36
37 For more information about the Compose file, see the
38 [Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md)
39
40 Compose has commands for managing the whole lifecycle of your application:
41
42 * Start, stop and rebuild services
43 * View the status of running services
44 * Stream the log output of running services
45 * Run a one-off command on a service
46
47 Installation and documentation
48 ------------------------------
49
50 - Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
51 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
52 - Code repository for Compose is on [Github](https://github.com/docker/compose)
53 - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new)
54
55 Contributing
56 ------------
57
58 [](https://jenkins.dockerproject.org/job/docker/job/compose/job/master/)
59
60 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
61
62 Releasing
63 ---------
64
65 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md).
66
[end of README.md]
[start of compose/cli/main.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 import contextlib
6 import functools
7 import json
8 import logging
9 import pipes
10 import re
11 import subprocess
12 import sys
13 from distutils.spawn import find_executable
14 from inspect import getdoc
15 from operator import attrgetter
16
17 from . import errors
18 from . import signals
19 from .. import __version__
20 from ..bundle import get_image_digests
21 from ..bundle import MissingDigests
22 from ..bundle import serialize_bundle
23 from ..config import ConfigurationError
24 from ..config import parse_environment
25 from ..config import resolve_build_args
26 from ..config.environment import Environment
27 from ..config.serialize import serialize_config
28 from ..config.types import VolumeSpec
29 from ..const import IS_WINDOWS_PLATFORM
30 from ..errors import StreamParseError
31 from ..progress_stream import StreamOutputError
32 from ..project import NoSuchService
33 from ..project import OneOffFilter
34 from ..project import ProjectError
35 from ..service import BuildAction
36 from ..service import BuildError
37 from ..service import ConvergenceStrategy
38 from ..service import ImageType
39 from ..service import NeedsBuildError
40 from ..service import OperationFailedError
41 from .command import get_config_from_options
42 from .command import project_from_options
43 from .docopt_command import DocoptDispatcher
44 from .docopt_command import get_handler
45 from .docopt_command import NoSuchCommand
46 from .errors import UserError
47 from .formatter import ConsoleWarningFormatter
48 from .formatter import Formatter
49 from .log_printer import build_log_presenters
50 from .log_printer import LogPrinter
51 from .utils import get_version_info
52 from .utils import human_readable_file_size
53 from .utils import yesno
54
55
56 if not IS_WINDOWS_PLATFORM:
57 from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation
58
59 log = logging.getLogger(__name__)
60 console_handler = logging.StreamHandler(sys.stderr)
61
62
63 def main():
64 signals.ignore_sigpipe()
65 try:
66 command = dispatch()
67 command()
68 except (KeyboardInterrupt, signals.ShutdownException):
69 log.error("Aborting.")
70 sys.exit(1)
71 except (UserError, NoSuchService, ConfigurationError,
72 ProjectError, OperationFailedError) as e:
73 log.error(e.msg)
74 sys.exit(1)
75 except BuildError as e:
76 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
77 sys.exit(1)
78 except StreamOutputError as e:
79 log.error(e)
80 sys.exit(1)
81 except NeedsBuildError as e:
82 log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name)
83 sys.exit(1)
84 except NoSuchCommand as e:
85 commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
86 log.error("No such command: %s\n\n%s", e.command, commands)
87 sys.exit(1)
88 except (errors.ConnectionError, StreamParseError):
89 sys.exit(1)
90
91
92 def dispatch():
93 setup_logging()
94 dispatcher = DocoptDispatcher(
95 TopLevelCommand,
96 {'options_first': True, 'version': get_version_info('compose')})
97
98 options, handler, command_options = dispatcher.parse(sys.argv[1:])
99 setup_console_handler(console_handler, options.get('--verbose'))
100 return functools.partial(perform_command, options, handler, command_options)
101
102
103 def perform_command(options, handler, command_options):
104 if options['COMMAND'] in ('help', 'version'):
105 # Skip looking up the compose file.
106 handler(command_options)
107 return
108
109 if options['COMMAND'] in ('config', 'bundle'):
110 command = TopLevelCommand(None)
111 handler(command, options, command_options)
112 return
113
114 project = project_from_options('.', options)
115 command = TopLevelCommand(project)
116 with errors.handle_connection_errors(project.client):
117 handler(command, command_options)
118
119
120 def setup_logging():
121 root_logger = logging.getLogger()
122 root_logger.addHandler(console_handler)
123 root_logger.setLevel(logging.DEBUG)
124
125 # Disable requests logging
126 logging.getLogger("requests").propagate = False
127
128
129 def setup_console_handler(handler, verbose):
130 if handler.stream.isatty():
131 format_class = ConsoleWarningFormatter
132 else:
133 format_class = logging.Formatter
134
135 if verbose:
136 handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s'))
137 handler.setLevel(logging.DEBUG)
138 else:
139 handler.setFormatter(format_class())
140 handler.setLevel(logging.INFO)
141
142
143 # stolen from docopt master
144 def parse_doc_section(name, source):
145 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
146 re.IGNORECASE | re.MULTILINE)
147 return [s.strip() for s in pattern.findall(source)]
148
149
150 class TopLevelCommand(object):
151 """Define and run multi-container applications with Docker.
152
153 Usage:
154 docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
155 docker-compose -h|--help
156
157 Options:
158 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
159 -p, --project-name NAME Specify an alternate project name (default: directory name)
160 --verbose Show more output
161 -v, --version Print version and exit
162 -H, --host HOST Daemon socket to connect to
163
164 --tls Use TLS; implied by --tlsverify
165 --tlscacert CA_PATH Trust certs signed only by this CA
166 --tlscert CLIENT_CERT_PATH Path to TLS certificate file
167 --tlskey TLS_KEY_PATH Path to TLS key file
168 --tlsverify Use TLS and verify the remote
169 --skip-hostname-check Don't check the daemon's hostname against the name specified
170 in the client certificate (for example if your docker host
171 is an IP address)
172 --project-directory PATH Specify an alternate working directory
173 (default: the path of the compose file)
174
175 Commands:
176 build Build or rebuild services
177 bundle Generate a Docker bundle from the Compose file
178 config Validate and view the compose file
179 create Create services
180 down Stop and remove containers, networks, images, and volumes
181 events Receive real time events from containers
182 exec Execute a command in a running container
183 help Get help on a command
184 images List images
185 kill Kill containers
186 logs View output from containers
187 pause Pause services
188 port Print the public port for a port binding
189 ps List containers
190 pull Pull service images
191 push Push service images
192 restart Restart services
193 rm Remove stopped containers
194 run Run a one-off command
195 scale Set number of containers for a service
196 start Start services
197 stop Stop services
198 top Display the running processes
199 unpause Unpause services
200 up Create and start containers
201 version Show the Docker-Compose version information
202 """
203
204 def __init__(self, project, project_dir='.'):
205 self.project = project
206 self.project_dir = '.'
207
208 def build(self, options):
209 """
210 Build or rebuild services.
211
212 Services are built once and then tagged as `project_service`,
213 e.g. `composetest_db`. If you change a service's `Dockerfile` or the
214 contents of its build directory, you can run `docker-compose build` to rebuild it.
215
216 Usage: build [options] [--build-arg key=val...] [SERVICE...]
217
218 Options:
219 --force-rm Always remove intermediate containers.
220 --no-cache Do not use cache when building the image.
221 --pull Always attempt to pull a newer version of the image.
222 --build-arg key=val Set build-time variables for one service.
223 """
224 service_names = options['SERVICE']
225 build_args = options.get('--build-arg', None)
226 if build_args:
227 environment = Environment.from_env_file(self.project_dir)
228 build_args = resolve_build_args(build_args, environment)
229
230 if not service_names and build_args:
231 raise UserError("Need service name for --build-arg option")
232
233 self.project.build(
234 service_names=service_names,
235 no_cache=bool(options.get('--no-cache', False)),
236 pull=bool(options.get('--pull', False)),
237 force_rm=bool(options.get('--force-rm', False)),
238 build_args=build_args)
239
240 def bundle(self, config_options, options):
241 """
242 Generate a Distributed Application Bundle (DAB) from the Compose file.
243
244 Images must have digests stored, which requires interaction with a
245 Docker registry. If digests aren't stored for all images, you can fetch
246 them with `docker-compose pull` or `docker-compose push`. To push images
247 automatically when bundling, pass `--push-images`. Only services with
248 a `build` option specified will have their images pushed.
249
250 Usage: bundle [options]
251
252 Options:
253 --push-images Automatically push images for any services
254 which have a `build` option specified.
255
256 -o, --output PATH Path to write the bundle file to.
257 Defaults to "<project name>.dab".
258 """
259 self.project = project_from_options('.', config_options)
260 compose_config = get_config_from_options(self.project_dir, config_options)
261
262 output = options["--output"]
263 if not output:
264 output = "{}.dab".format(self.project.name)
265
266 image_digests = image_digests_for_project(self.project, options['--push-images'])
267
268 with open(output, 'w') as f:
269 f.write(serialize_bundle(compose_config, image_digests))
270
271 log.info("Wrote bundle to {}".format(output))
272
273 def config(self, config_options, options):
274 """
275 Validate and view the compose file.
276
277 Usage: config [options]
278
279 Options:
280 --resolve-image-digests Pin image tags to digests.
281 -q, --quiet Only validate the configuration, don't print
282 anything.
283 --services Print the service names, one per line.
284 --volumes Print the volume names, one per line.
285
286 """
287
288 compose_config = get_config_from_options(self.project_dir, config_options)
289 image_digests = None
290
291 if options['--resolve-image-digests']:
292 self.project = project_from_options('.', config_options)
293 image_digests = image_digests_for_project(self.project)
294
295 if options['--quiet']:
296 return
297
298 if options['--services']:
299 print('\n'.join(service['name'] for service in compose_config.services))
300 return
301
302 if options['--volumes']:
303 print('\n'.join(volume for volume in compose_config.volumes))
304 return
305
306 print(serialize_config(compose_config, image_digests))
307
308 def create(self, options):
309 """
310 Creates containers for a service.
311
312 Usage: create [options] [SERVICE...]
313
314 Options:
315 --force-recreate Recreate containers even if their configuration and
316 image haven't changed. Incompatible with --no-recreate.
317 --no-recreate If containers already exist, don't recreate them.
318 Incompatible with --force-recreate.
319 --no-build Don't build an image, even if it's missing.
320 --build Build images before creating containers.
321 """
322 service_names = options['SERVICE']
323
324 self.project.create(
325 service_names=service_names,
326 strategy=convergence_strategy_from_opts(options),
327 do_build=build_action_from_opts(options),
328 )
329
330 def down(self, options):
331 """
332 Stops containers and removes containers, networks, volumes, and images
333 created by `up`.
334
335 By default, the only things removed are:
336
337 - Containers for services defined in the Compose file
338 - Networks defined in the `networks` section of the Compose file
339 - The default network, if one is used
340
341 Networks and volumes defined as `external` are never removed.
342
343 Usage: down [options]
344
345 Options:
346 --rmi type Remove images. Type must be one of:
347 'all': Remove all images used by any service.
348 'local': Remove only images that don't have a custom tag
349 set by the `image` field.
350 -v, --volumes Remove named volumes declared in the `volumes` section
351 of the Compose file and anonymous volumes
352 attached to containers.
353 --remove-orphans Remove containers for services not defined in the
354 Compose file
355 """
356 image_type = image_type_from_opt('--rmi', options['--rmi'])
357 self.project.down(image_type, options['--volumes'], options['--remove-orphans'])
358
359 def events(self, options):
360 """
361 Receive real time events from containers.
362
363 Usage: events [options] [SERVICE...]
364
365 Options:
366 --json Output events as a stream of json objects
367 """
368 def format_event(event):
369 attributes = ["%s=%s" % item for item in event['attributes'].items()]
370 return ("{time} {type} {action} {id} ({attrs})").format(
371 attrs=", ".join(sorted(attributes)),
372 **event)
373
374 def json_format_event(event):
375 event['time'] = event['time'].isoformat()
376 event.pop('container')
377 return json.dumps(event)
378
379 for event in self.project.events():
380 formatter = json_format_event if options['--json'] else format_event
381 print(formatter(event))
382 sys.stdout.flush()
383
384 def exec_command(self, options):
385 """
386 Execute a command in a running container
387
388 Usage: exec [options] SERVICE COMMAND [ARGS...]
389
390 Options:
391 -d Detached mode: Run command in the background.
392 --privileged Give extended privileges to the process.
393 --user USER Run the command as this user.
394 -T Disable pseudo-tty allocation. By default `docker-compose exec`
395 allocates a TTY.
396 --index=index index of the container if there are multiple
397 instances of a service [default: 1]
398 """
399 index = int(options.get('--index'))
400 service = self.project.get_service(options['SERVICE'])
401 detach = options['-d']
402
403 try:
404 container = service.get_container(number=index)
405 except ValueError as e:
406 raise UserError(str(e))
407 command = [options['COMMAND']] + options['ARGS']
408 tty = not options["-T"]
409
410 if IS_WINDOWS_PLATFORM and not detach:
411 args = ["exec"]
412
413 if options["-d"]:
414 args += ["--detach"]
415 else:
416 args += ["--interactive"]
417
418 if not options["-T"]:
419 args += ["--tty"]
420
421 if options["--privileged"]:
422 args += ["--privileged"]
423
424 if options["--user"]:
425 args += ["--user", options["--user"]]
426
427 args += [container.id]
428 args += command
429
430 sys.exit(call_docker(args))
431
432 create_exec_options = {
433 "privileged": options["--privileged"],
434 "user": options["--user"],
435 "tty": tty,
436 "stdin": tty,
437 }
438
439 exec_id = container.create_exec(command, **create_exec_options)
440
441 if detach:
442 container.start_exec(exec_id, tty=tty)
443 return
444
445 signals.set_signal_handler_to_shutdown()
446 try:
447 operation = ExecOperation(
448 self.project.client,
449 exec_id,
450 interactive=tty,
451 )
452 pty = PseudoTerminal(self.project.client, operation)
453 pty.start()
454 except signals.ShutdownException:
455 log.info("received shutdown exception: closing")
456 exit_code = self.project.client.exec_inspect(exec_id).get("ExitCode")
457 sys.exit(exit_code)
458
459 @classmethod
460 def help(cls, options):
461 """
462 Get help on a command.
463
464 Usage: help [COMMAND]
465 """
466 if options['COMMAND']:
467 subject = get_handler(cls, options['COMMAND'])
468 else:
469 subject = cls
470
471 print(getdoc(subject))
472
473 def images(self, options):
474 """
475 List images used by the created containers.
476 Usage: images [options] [SERVICE...]
477
478 Options:
479 -q Only display IDs
480 """
481 containers = sorted(
482 self.project.containers(service_names=options['SERVICE'], stopped=True) +
483 self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
484 key=attrgetter('name'))
485
486 if options['-q']:
487 for image in set(c.image for c in containers):
488 print(image.split(':')[1])
489 else:
490 headers = [
491 'Container',
492 'Repository',
493 'Tag',
494 'Image Id',
495 'Size'
496 ]
497 rows = []
498 for container in containers:
499 image_config = container.image_config
500 repo_tags = image_config['RepoTags'][0].split(':')
501 image_id = image_config['Id'].split(':')[1][:12]
502 size = human_readable_file_size(image_config['Size'])
503 rows.append([
504 container.name,
505 repo_tags[0],
506 repo_tags[1],
507 image_id,
508 size
509 ])
510 print(Formatter().table(headers, rows))
511
512 def kill(self, options):
513 """
514 Force stop service containers.
515
516 Usage: kill [options] [SERVICE...]
517
518 Options:
519 -s SIGNAL SIGNAL to send to the container.
520 Default signal is SIGKILL.
521 """
522 signal = options.get('-s', 'SIGKILL')
523
524 self.project.kill(service_names=options['SERVICE'], signal=signal)
525
526 def logs(self, options):
527 """
528 View output from containers.
529
530 Usage: logs [options] [SERVICE...]
531
532 Options:
533 --no-color Produce monochrome output.
534 -f, --follow Follow log output.
535 -t, --timestamps Show timestamps.
536 --tail="all" Number of lines to show from the end of the logs
537 for each container.
538 """
539 containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
540
541 tail = options['--tail']
542 if tail is not None:
543 if tail.isdigit():
544 tail = int(tail)
545 elif tail != 'all':
546 raise UserError("tail flag must be all or a number")
547 log_args = {
548 'follow': options['--follow'],
549 'tail': tail,
550 'timestamps': options['--timestamps']
551 }
552 print("Attaching to", list_containers(containers))
553 log_printer_from_project(
554 self.project,
555 containers,
556 options['--no-color'],
557 log_args,
558 event_stream=self.project.events(service_names=options['SERVICE'])).run()
559
560 def pause(self, options):
561 """
562 Pause services.
563
564 Usage: pause [SERVICE...]
565 """
566 containers = self.project.pause(service_names=options['SERVICE'])
567 exit_if(not containers, 'No containers to pause', 1)
568
569 def port(self, options):
570 """
571 Print the public port for a port binding.
572
573 Usage: port [options] SERVICE PRIVATE_PORT
574
575 Options:
576 --protocol=proto tcp or udp [default: tcp]
577 --index=index index of the container if there are multiple
578 instances of a service [default: 1]
579 """
580 index = int(options.get('--index'))
581 service = self.project.get_service(options['SERVICE'])
582 try:
583 container = service.get_container(number=index)
584 except ValueError as e:
585 raise UserError(str(e))
586 print(container.get_local_port(
587 options['PRIVATE_PORT'],
588 protocol=options.get('--protocol') or 'tcp') or '')
589
590 def ps(self, options):
591 """
592 List containers.
593
594 Usage: ps [options] [SERVICE...]
595
596 Options:
597 -q Only display IDs
598 """
599 containers = sorted(
600 self.project.containers(service_names=options['SERVICE'], stopped=True) +
601 self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
602 key=attrgetter('name'))
603
604 if options['-q']:
605 for container in containers:
606 print(container.id)
607 else:
608 headers = [
609 'Name',
610 'Command',
611 'State',
612 'Ports',
613 ]
614 rows = []
615 for container in containers:
616 command = container.human_readable_command
617 if len(command) > 30:
618 command = '%s ...' % command[:26]
619 rows.append([
620 container.name,
621 command,
622 container.human_readable_state,
623 container.human_readable_ports,
624 ])
625 print(Formatter().table(headers, rows))
626
627 def pull(self, options):
628 """
629 Pulls images for services.
630
631 Usage: pull [options] [SERVICE...]
632
633 Options:
634 --ignore-pull-failures Pull what it can and ignores images with pull failures.
635 --parallel Pull multiple images in parallel.
636 """
637 self.project.pull(
638 service_names=options['SERVICE'],
639 ignore_pull_failures=options.get('--ignore-pull-failures'),
640 parallel_pull=options.get('--parallel')
641 )
642
643 def push(self, options):
644 """
645 Pushes images for services.
646
647 Usage: push [options] [SERVICE...]
648
649 Options:
650 --ignore-push-failures Push what it can and ignores images with push failures.
651 """
652 self.project.push(
653 service_names=options['SERVICE'],
654 ignore_push_failures=options.get('--ignore-push-failures')
655 )
656
657 def rm(self, options):
658 """
659 Removes stopped service containers.
660
661 By default, anonymous volumes attached to containers will not be removed. You
662 can override this with `-v`. To list all volumes, use `docker volume ls`.
663
664 Any data which is not in a volume will be lost.
665
666 Usage: rm [options] [SERVICE...]
667
668 Options:
669 -f, --force Don't ask to confirm removal
670 -s, --stop Stop the containers, if required, before removing
671 -v Remove any anonymous volumes attached to containers
672 -a, --all Deprecated - no effect.
673 """
674 if options.get('--all'):
675 log.warn(
676 '--all flag is obsolete. This is now the default behavior '
677 'of `docker-compose rm`'
678 )
679 one_off = OneOffFilter.include
680
681 if options.get('--stop'):
682 running_containers = self.project.containers(
683 service_names=options['SERVICE'], stopped=False, one_off=one_off
684 )
685 self.project.stop(
686 service_names=running_containers,
687 one_off=one_off
688 )
689
690 all_containers = self.project.containers(
691 service_names=options['SERVICE'], stopped=True, one_off=one_off
692 )
693 stopped_containers = [c for c in all_containers if not c.is_running]
694
695 if len(stopped_containers) > 0:
696 print("Going to remove", list_containers(stopped_containers))
697 if options.get('--force') \
698 or yesno("Are you sure? [yN] ", default=False):
699 self.project.remove_stopped(
700 service_names=options['SERVICE'],
701 v=options.get('-v', False),
702 one_off=one_off
703 )
704 else:
705 print("No stopped containers")
706
707 def run(self, options):
708 """
709 Run a one-off command on a service.
710
711 For example:
712
713 $ docker-compose run web python manage.py shell
714
715 By default, linked services will be started, unless they are already
716 running. If you do not want to start linked services, use
717 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
718
719 Usage: run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
720
721 Options:
722 -d Detached mode: Run container in the background, print
723 new container name.
724 --name NAME Assign a name to the container
725 --entrypoint CMD Override the entrypoint of the image.
726 -e KEY=VAL Set an environment variable (can be used multiple times)
727 -u, --user="" Run as specified username or uid
728 --no-deps Don't start linked services.
729 --rm Remove container after run. Ignored in detached mode.
730 -p, --publish=[] Publish a container's port(s) to the host
731 --service-ports Run command with the service's ports enabled and mapped
732 to the host.
733 -v, --volume=[] Bind mount a volume (default [])
734 -T Disable pseudo-tty allocation. By default `docker-compose run`
735 allocates a TTY.
736 -w, --workdir="" Working directory inside the container
737 """
738 service = self.project.get_service(options['SERVICE'])
739 detach = options['-d']
740
741 if options['--publish'] and options['--service-ports']:
742 raise UserError(
743 'Service port mapping and manual port mapping '
744 'can not be used together'
745 )
746
747 if options['COMMAND'] is not None:
748 command = [options['COMMAND']] + options['ARGS']
749 elif options['--entrypoint'] is not None:
750 command = []
751 else:
752 command = service.options.get('command')
753
754 container_options = build_container_options(options, detach, command)
755 run_one_off_container(container_options, self.project, service, options)
756
757 def scale(self, options):
758 """
759 Set number of containers to run for a service.
760
761 Numbers are specified in the form `service=num` as arguments.
762 For example:
763
764 $ docker-compose scale web=2 worker=3
765
766 Usage: scale [options] [SERVICE=NUM...]
767
768 Options:
769 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
770 (default: 10)
771 """
772 timeout = timeout_from_opts(options)
773
774 for s in options['SERVICE=NUM']:
775 if '=' not in s:
776 raise UserError('Arguments to scale should be in the form service=num')
777 service_name, num = s.split('=', 1)
778 try:
779 num = int(num)
780 except ValueError:
781 raise UserError('Number of containers for service "%s" is not a '
782 'number' % service_name)
783 self.project.get_service(service_name).scale(num, timeout=timeout)
784
785 def start(self, options):
786 """
787 Start existing containers.
788
789 Usage: start [SERVICE...]
790 """
791 containers = self.project.start(service_names=options['SERVICE'])
792 exit_if(not containers, 'No containers to start', 1)
793
794 def stop(self, options):
795 """
796 Stop running containers without removing them.
797
798 They can be started again with `docker-compose start`.
799
800 Usage: stop [options] [SERVICE...]
801
802 Options:
803 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
804 (default: 10)
805 """
806 timeout = timeout_from_opts(options)
807 self.project.stop(service_names=options['SERVICE'], timeout=timeout)
808
809 def restart(self, options):
810 """
811 Restart running containers.
812
813 Usage: restart [options] [SERVICE...]
814
815 Options:
816 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
817 (default: 10)
818 """
819 timeout = timeout_from_opts(options)
820 containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout)
821 exit_if(not containers, 'No containers to restart', 1)
822
823 def top(self, options):
824 """
825 Display the running processes
826
827 Usage: top [SERVICE...]
828
829 """
830 containers = sorted(
831 self.project.containers(service_names=options['SERVICE'], stopped=False) +
832 self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
833 key=attrgetter('name')
834 )
835
836 for idx, container in enumerate(containers):
837 if idx > 0:
838 print()
839
840 top_data = self.project.client.top(container.name)
841 headers = top_data.get("Titles")
842 rows = []
843
844 for process in top_data.get("Processes", []):
845 rows.append(process)
846
847 print(container.name)
848 print(Formatter().table(headers, rows))
849
850 def unpause(self, options):
851 """
852 Unpause services.
853
854 Usage: unpause [SERVICE...]
855 """
856 containers = self.project.unpause(service_names=options['SERVICE'])
857 exit_if(not containers, 'No containers to unpause', 1)
858
859 def up(self, options):
860 """
861 Builds, (re)creates, starts, and attaches to containers for a service.
862
863 Unless they are already running, this command also starts any linked services.
864
865 The `docker-compose up` command aggregates the output of each container. When
866 the command exits, all containers are stopped. Running `docker-compose up -d`
867 starts the containers in the background and leaves them running.
868
869 If there are existing containers for a service, and the service's configuration
870 or image was changed after the container's creation, `docker-compose up` picks
871 up the changes by stopping and recreating the containers (preserving mounted
872 volumes). To prevent Compose from picking up changes, use the `--no-recreate`
873 flag.
874
875 If you want to force Compose to stop and recreate all containers, use the
876 `--force-recreate` flag.
877
878 Usage: up [options] [SERVICE...]
879
880 Options:
881 -d Detached mode: Run containers in the background,
882 print new container names.
883 Incompatible with --abort-on-container-exit.
884 --no-color Produce monochrome output.
885 --no-deps Don't start linked services.
886 --force-recreate Recreate containers even if their configuration
887 and image haven't changed.
888 Incompatible with --no-recreate.
889 --no-recreate If containers already exist, don't recreate them.
890 Incompatible with --force-recreate.
891 --no-build Don't build an image, even if it's missing.
892 --build Build images before starting containers.
893 --abort-on-container-exit Stops all containers if any container was stopped.
894 Incompatible with -d.
895 -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown
896 when attached or when containers are already
897 running. (default: 10)
898 --remove-orphans Remove containers for services not
899 defined in the Compose file
900 --exit-code-from SERVICE Return the exit code of the selected service container.
901 Requires --abort-on-container-exit.
902 """
903 start_deps = not options['--no-deps']
904 exit_value_from = exitval_from_opts(options, self.project)
905 cascade_stop = options['--abort-on-container-exit']
906 service_names = options['SERVICE']
907 timeout = timeout_from_opts(options)
908 remove_orphans = options['--remove-orphans']
909 detached = options.get('-d')
910
911 if detached and cascade_stop:
912 raise UserError("--abort-on-container-exit and -d cannot be combined.")
913
914 with up_shutdown_context(self.project, service_names, timeout, detached):
915 to_attach = self.project.up(
916 service_names=service_names,
917 start_deps=start_deps,
918 strategy=convergence_strategy_from_opts(options),
919 do_build=build_action_from_opts(options),
920 timeout=timeout,
921 detached=detached,
922 remove_orphans=remove_orphans)
923
924 if detached:
925 return
926
927 attached_containers = filter_containers_to_service_names(to_attach, service_names)
928
929 log_printer = log_printer_from_project(
930 self.project,
931 attached_containers,
932 options['--no-color'],
933 {'follow': True},
934 cascade_stop,
935 event_stream=self.project.events(service_names=service_names))
936 print("Attaching to", list_containers(log_printer.containers))
937 cascade_starter = log_printer.run()
938
939 if cascade_stop:
940 print("Aborting on container exit...")
941
942 exit_code = 0
943 if exit_value_from:
944 candidates = filter(
945 lambda c: c.service == exit_value_from,
946 attached_containers)
947 if not candidates:
948 log.error(
949 'No containers matching the spec "{0}" '
950 'were run.'.format(exit_value_from)
951 )
952 exit_code = 2
953 elif len(candidates) > 1:
954 exit_values = filter(
955 lambda e: e != 0,
956 [c.inspect()['State']['ExitCode'] for c in candidates]
957 )
958
959 exit_code = exit_values[0]
960 else:
961 exit_code = candidates[0].inspect()['State']['ExitCode']
962 else:
963 for e in self.project.containers(service_names=options['SERVICE'], stopped=True):
964 if (not e.is_running and cascade_starter == e.name):
965 if not e.exit_code == 0:
966 exit_code = e.exit_code
967 break
968
969 self.project.stop(service_names=service_names, timeout=timeout)
970 sys.exit(exit_code)
971
972 @classmethod
973 def version(cls, options):
974 """
975 Show version information
976
977 Usage: version [--short]
978
979 Options:
980 --short Shows only Compose's version number.
981 """
982 if options['--short']:
983 print(__version__)
984 else:
985 print(get_version_info('full'))
986
987
988 def convergence_strategy_from_opts(options):
989 no_recreate = options['--no-recreate']
990 force_recreate = options['--force-recreate']
991 if force_recreate and no_recreate:
992 raise UserError("--force-recreate and --no-recreate cannot be combined.")
993
994 if force_recreate:
995 return ConvergenceStrategy.always
996
997 if no_recreate:
998 return ConvergenceStrategy.never
999
1000 return ConvergenceStrategy.changed
1001
1002
1003 def timeout_from_opts(options):
1004 timeout = options.get('--timeout')
1005 return None if timeout is None else int(timeout)
1006
1007
1008 def image_digests_for_project(project, allow_push=False):
1009 with errors.handle_connection_errors(project.client):
1010 try:
1011 return get_image_digests(
1012 project,
1013 allow_push=allow_push
1014 )
1015 except MissingDigests as e:
1016 def list_images(images):
1017 return "\n".join(" {}".format(name) for name in sorted(images))
1018
1019 paras = ["Some images are missing digests."]
1020
1021 if e.needs_push:
1022 command_hint = (
1023 "Use `docker-compose push {}` to push them. "
1024 .format(" ".join(sorted(e.needs_push)))
1025 )
1026 paras += [
1027 "The following images can be pushed:",
1028 list_images(e.needs_push),
1029 command_hint,
1030 ]
1031
1032 if e.needs_pull:
1033 command_hint = (
1034 "Use `docker-compose pull {}` to pull them. "
1035 .format(" ".join(sorted(e.needs_pull)))
1036 )
1037
1038 paras += [
1039 "The following images need to be pulled:",
1040 list_images(e.needs_pull),
1041 command_hint,
1042 ]
1043
1044 raise UserError("\n\n".join(paras))
1045
1046
1047 def exitval_from_opts(options, project):
1048 exit_value_from = options.get('--exit-code-from')
1049 if exit_value_from:
1050 if not options.get('--abort-on-container-exit'):
1051 log.warn('using --exit-code-from implies --abort-on-container-exit')
1052 options['--abort-on-container-exit'] = True
1053 if exit_value_from not in [s.name for s in project.get_services()]:
1054 log.error('No service named "%s" was found in your compose file.',
1055 exit_value_from)
1056 sys.exit(2)
1057 return exit_value_from
1058
1059
1060 def image_type_from_opt(flag, value):
1061 if not value:
1062 return ImageType.none
1063 try:
1064 return ImageType[value]
1065 except KeyError:
1066 raise UserError("%s flag must be one of: all, local" % flag)
1067
1068
1069 def build_action_from_opts(options):
1070 if options['--build'] and options['--no-build']:
1071 raise UserError("--build and --no-build can not be combined.")
1072
1073 if options['--build']:
1074 return BuildAction.force
1075
1076 if options['--no-build']:
1077 return BuildAction.skip
1078
1079 return BuildAction.none
1080
1081
1082 def build_container_options(options, detach, command):
1083 container_options = {
1084 'command': command,
1085 'tty': not (detach or options['-T'] or not sys.stdin.isatty()),
1086 'stdin_open': not detach,
1087 'detach': detach,
1088 }
1089
1090 if options['-e']:
1091 container_options['environment'] = Environment.from_command_line(
1092 parse_environment(options['-e'])
1093 )
1094
1095 if options['--entrypoint']:
1096 container_options['entrypoint'] = options.get('--entrypoint')
1097
1098 if options['--rm']:
1099 container_options['restart'] = None
1100
1101 if options['--user']:
1102 container_options['user'] = options.get('--user')
1103
1104 if not options['--service-ports']:
1105 container_options['ports'] = []
1106
1107 if options['--publish']:
1108 container_options['ports'] = options.get('--publish')
1109
1110 if options['--name']:
1111 container_options['name'] = options['--name']
1112
1113 if options['--workdir']:
1114 container_options['working_dir'] = options['--workdir']
1115
1116 if options['--volume']:
1117 volumes = [VolumeSpec.parse(i) for i in options['--volume']]
1118 container_options['volumes'] = volumes
1119
1120 return container_options
1121
1122
1123 def run_one_off_container(container_options, project, service, options):
1124 if not options['--no-deps']:
1125 deps = service.get_dependency_names()
1126 if deps:
1127 project.up(
1128 service_names=deps,
1129 start_deps=True,
1130 strategy=ConvergenceStrategy.never)
1131
1132 project.initialize()
1133
1134 container = service.create_container(
1135 quiet=True,
1136 one_off=True,
1137 **container_options)
1138
1139 if options['-d']:
1140 service.start_container(container)
1141 print(container.name)
1142 return
1143
1144 def remove_container(force=False):
1145 if options['--rm']:
1146 project.client.remove_container(container.id, force=True, v=True)
1147
1148 signals.set_signal_handler_to_shutdown()
1149 try:
1150 try:
1151 if IS_WINDOWS_PLATFORM:
1152 service.connect_container_to_networks(container)
1153 exit_code = call_docker(["start", "--attach", "--interactive", container.id])
1154 else:
1155 operation = RunOperation(
1156 project.client,
1157 container.id,
1158 interactive=not options['-T'],
1159 logs=False,
1160 )
1161 pty = PseudoTerminal(project.client, operation)
1162 sockets = pty.sockets()
1163 service.start_container(container)
1164 pty.start(sockets)
1165 exit_code = container.wait()
1166 except signals.ShutdownException:
1167 project.client.stop(container.id)
1168 exit_code = 1
1169 except signals.ShutdownException:
1170 project.client.kill(container.id)
1171 remove_container(force=True)
1172 sys.exit(2)
1173
1174 remove_container()
1175 sys.exit(exit_code)
1176
1177
1178 def log_printer_from_project(
1179 project,
1180 containers,
1181 monochrome,
1182 log_args,
1183 cascade_stop=False,
1184 event_stream=None,
1185 ):
1186 return LogPrinter(
1187 containers,
1188 build_log_presenters(project.service_names, monochrome),
1189 event_stream or project.events(),
1190 cascade_stop=cascade_stop,
1191 log_args=log_args)
1192
1193
1194 def filter_containers_to_service_names(containers, service_names):
1195 if not service_names:
1196 return containers
1197
1198 return [
1199 container
1200 for container in containers if container.service in service_names
1201 ]
1202
1203
1204 @contextlib.contextmanager
1205 def up_shutdown_context(project, service_names, timeout, detached):
1206 if detached:
1207 yield
1208 return
1209
1210 signals.set_signal_handler_to_shutdown()
1211 try:
1212 try:
1213 yield
1214 except signals.ShutdownException:
1215 print("Gracefully stopping... (press Ctrl+C again to force)")
1216 project.stop(service_names=service_names, timeout=timeout)
1217 except signals.ShutdownException:
1218 project.kill(service_names=service_names)
1219 sys.exit(2)
1220
1221
1222 def list_containers(containers):
1223 return ", ".join(c.name for c in containers)
1224
1225
1226 def exit_if(condition, message, exit_code):
1227 if condition:
1228 log.error(message)
1229 raise SystemExit(exit_code)
1230
1231
1232 def call_docker(args):
1233 executable_path = find_executable('docker')
1234 if not executable_path:
1235 raise UserError(errors.docker_not_found_msg("Couldn't find `docker` binary."))
1236
1237 args = [executable_path] + args
1238 log.debug(" ".join(map(pipes.quote, args)))
1239
1240 return subprocess.call(args)
1241
[end of compose/cli/main.py]
[start of compose/config/environment.py]
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import codecs
5 import contextlib
6 import logging
7 import os
8
9 import six
10
11 from ..const import IS_WINDOWS_PLATFORM
12 from .errors import ConfigurationError
13
14 log = logging.getLogger(__name__)
15
16
17 def split_env(env):
18 if isinstance(env, six.binary_type):
19 env = env.decode('utf-8', 'replace')
20 if '=' in env:
21 return env.split('=', 1)
22 else:
23 return env, None
24
25
26 def env_vars_from_file(filename):
27 """
28 Read in a line delimited file of environment variables.
29 """
30 if not os.path.exists(filename):
31 raise ConfigurationError("Couldn't find env file: %s" % filename)
32 elif not os.path.isfile(filename):
33 raise ConfigurationError("%s is not a file." % (filename))
34 env = {}
35 with contextlib.closing(codecs.open(filename, 'r', 'utf-8')) as fileobj:
36 for line in fileobj:
37 line = line.strip()
38 if line and not line.startswith('#'):
39 k, v = split_env(line)
40 env[k] = v
41 return env
42
43
44 class Environment(dict):
45 def __init__(self, *args, **kwargs):
46 super(Environment, self).__init__(*args, **kwargs)
47 self.missing_keys = []
48
49 @classmethod
50 def from_env_file(cls, base_dir):
51 def _initialize():
52 result = cls()
53 if base_dir is None:
54 return result
55 env_file_path = os.path.join(base_dir, '.env')
56 try:
57 return cls(env_vars_from_file(env_file_path))
58 except ConfigurationError:
59 pass
60 return result
61 instance = _initialize()
62 instance.update(os.environ)
63 return instance
64
65 @classmethod
66 def from_command_line(cls, parsed_env_opts):
67 result = cls()
68 for k, v in parsed_env_opts.items():
69 # Values from the command line take priority, unless they're unset
70 # in which case they take the value from the system's environment
71 if v is None and k in os.environ:
72 result[k] = os.environ[k]
73 else:
74 result[k] = v
75 return result
76
77 def __getitem__(self, key):
78 try:
79 return super(Environment, self).__getitem__(key)
80 except KeyError:
81 if IS_WINDOWS_PLATFORM:
82 try:
83 return super(Environment, self).__getitem__(key.upper())
84 except KeyError:
85 pass
86 if key not in self.missing_keys:
87 log.warn(
88 "The {} variable is not set. Defaulting to a blank string."
89 .format(key)
90 )
91 self.missing_keys.append(key)
92
93 return ""
94
95 def __contains__(self, key):
96 result = super(Environment, self).__contains__(key)
97 if IS_WINDOWS_PLATFORM:
98 return (
99 result or super(Environment, self).__contains__(key.upper())
100 )
101 return result
102
103 def get(self, key, *args, **kwargs):
104 if IS_WINDOWS_PLATFORM:
105 return super(Environment, self).get(
106 key,
107 super(Environment, self).get(key.upper(), *args, **kwargs)
108 )
109 return super(Environment, self).get(key, *args, **kwargs)
110
111 def get_boolean(self, key):
112 # Convert a value to a boolean using "common sense" rules.
113 # Unset, empty, "0" and "false" (i-case) yield False.
114 # All other values yield True.
115 value = self.get(key)
116 if not value:
117 return False
118 if value.lower() in ['0', 'false']:
119 return False
120 return True
121
[end of compose/config/environment.py]
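A short usage sketch of the helpers defined in the file above may make the "common sense" boolean rules easier to follow. This is illustrative only — the keys and values are invented, not taken from Compose itself; only the import path and function behavior come from the listing above.

```python
# Hedged usage sketch -- not part of the repository.
from compose.config.environment import Environment, split_env

split_env('FOO=bar')        # ['FOO', 'bar']  (split on the first '=')
split_env('FOO')            # ('FOO', None)   (no value given)

env = Environment({'VERBOSE': 'false', 'USE_TLS': '1', 'EMPTY': ''})
env.get_boolean('VERBOSE')  # False: "false" (case-insensitive) and "0" are falsy
env.get_boolean('USE_TLS')  # True: any other non-empty value is truthy
env.get_boolean('EMPTY')    # False: empty values are falsy
env.get_boolean('MISSING')  # False: unset keys fall back to None via dict.get
```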
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>

repo: docker/compose
base_commit: 94defc159a2dcdc548f2c5295f29acfad6122684

Support unicode characters in -f paths
On Ubuntu 16.04:
```
$ docker-compose -f 就吃饭/docker-compose.yml config
/home/joffrey/work/compose/compose/config/config.py:234: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if filenames == ['-']:
Traceback (most recent call last):
File "/home/joffrey/.envs/compose/bin/docker-compose", line 9, in <module>
load_entry_point('docker-compose==1.11.0.dev0', 'console_scripts', 'docker-compose')()
File "/home/joffrey/work/compose/compose/cli/main.py", line 64, in main
command()
File "/home/joffrey/work/compose/compose/cli/main.py", line 110, in perform_command
handler(command, options, command_options)
File "/home/joffrey/work/compose/compose/cli/main.py", line 305, in config
compose_config = get_config_from_options(self.project_dir, config_options)
File "/home/joffrey/work/compose/compose/cli/command.py", line 46, in get_config_from_options
config.find(base_dir, config_path, environment)
File "/home/joffrey/work/compose/compose/config/config.py", line 242, in find
filenames = [os.path.join(base_dir, f) for f in filenames]
File "/home/joffrey/.envs/compose/lib/python2.7/posixpath.py", line 73, in join
path += '/' + b
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 1: ordinal not in range(128)
```
On Windows:
```
docker-compose -f "C:\Users\husun\documents\visual studio 2017\Projects\测试中文\docker-compose.yml" up -d --build
ERROR: compose.cli.main.main: .IOError: [Errno 22] invalid mode ('r') or filename: 'C:\\Users\\husun\\documents\\visual studio 2017\\Projects\\????\\docker-compose.yml'
```

created_at: 2017-04-11T00:40:02Z

<patch>
diff --git a/compose/cli/command.py b/compose/cli/command.py
--- a/compose/cli/command.py
+++ b/compose/cli/command.py
@@ -49,14 +49,17 @@ def get_config_from_options(base_dir, options):
def get_config_path_from_options(base_dir, options, environment):
+ def unicode_paths(paths):
+ return [p.decode('utf-8') if isinstance(p, six.binary_type) else p for p in paths]
+
file_option = options.get('--file')
if file_option:
- return file_option
+ return unicode_paths(file_option)
config_files = environment.get('COMPOSE_FILE')
if config_files:
pathsep = environment.get('COMPOSE_PATH_SEPARATOR', os.pathsep)
- return config_files.split(pathsep)
+ return unicode_paths(config_files.split(pathsep))
return None
</patch>
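
For illustration, here is a standalone, hedged sketch of what the `unicode_paths` helper introduced by the patch does. The helper body is copied from the patch above, the byte string is the UTF-8 encoding of the path used in the issue, and `six` is already imported elsewhere in Compose; everything else is made up for the example.

```python
# Standalone sketch of the decoding step introduced by the patch (not repository code).
import os
import six


def unicode_paths(paths):
    # Decode byte-string paths (Python 2 argv values) to unicode before joining.
    return [p.decode('utf-8') if isinstance(p, six.binary_type) else p for p in paths]


raw = [b'\xe5\xb0\xb1\xe5\x90\x83\xe9\xa5\xad/docker-compose.yml']  # UTF-8 bytes of 就吃饭/docker-compose.yml
decoded = unicode_paths(raw)             # [u'就吃饭/docker-compose.yml']
os.path.join(u'/home/user', decoded[0])  # joins cleanly; on Python 2 the raw bytes would raise the UnicodeDecodeError shown in the traceback
```

On Python 3 the `--file` values already arrive as `str`, so the `isinstance(p, six.binary_type)` guard leaves them untouched.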

instance_id: pandas-dev__pandas-11294

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PERF: checking is_monotonic_increasing/decreasing before sorting on an index
We don't keep the sortedness state in an index per-se, but it is rather cheap to check
- `is_monotonic_increasing` or `is_monotonic_decreasing` on a reg-index
- MultiIndex should check `is_lexsorted` (this might be done already)
```
In [8]: df = DataFrame(np.random.randn(1000000,2),columns=list('AB'))
In [9]: %timeit df.sort_index()
10 loops, best of 3: 37.1 ms per loop
In [10]: %timeit -n 1 -r 1 df.index.is_monotonic_increasing
1 loops, best of 1: 2.01 ms per loop
In [11]: %timeit -n 1 -r 1 df.index.is_monotonic_increasin^C
KeyboardInterrupt
In [11]: %timeit df.set_index('A').sort_index()
10 loops, best of 3: 175 ms per loop
In [12]: %timeit -n 1 -r 1 df.set_index('A').index.is_monotonic_increasing
1 loops, best of 1: 9.54 ms per loop
```
</issue>
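
A minimal sketch of the fast path the issue is asking for (illustrative only: the wrapper name below is made up, and the real change would live inside `DataFrame.sort_index` rather than in user code):

```python
# Hedged sketch: skip the O(n log n) sort when the index is already monotonic
# in the requested direction; the monotonicity check itself is only O(n).
import numpy as np
from pandas import DataFrame


def sort_index_fast(df, ascending=True):
    already_sorted = (df.index.is_monotonic_increasing if ascending
                      else df.index.is_monotonic_decreasing)
    if already_sorted:
        return df.copy()  # keep sort_index's copy semantics without sorting
    return df.sort_index(ascending=ascending)


df = DataFrame(np.random.randn(1000000, 2), columns=list('AB'))
sort_index_fast(df)                 # fast path: the default integer index is monotonic
sort_index_fast(df.set_index('A'))  # falls back to the full sort
```

For a MultiIndex the analogous cheap check would be `is_lexsorted()`, as the issue notes.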
<code>
[start of README.md]
1 # pandas: powerful Python data analysis toolkit
2
3 <table>
4 <tr>
5 <td>Latest Release</td>
6 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td>
7 </tr>
8 <td></td>
9 <td><img src="https://anaconda.org/pandas/pandas/badges/version.svg" alt="latest release" /></td>
10 </tr>
11 <tr>
12 <td>Package Status</td>
13 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
14 </tr>
15 <tr>
16 <td>License</td>
17 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td>
18 </tr>
19 <tr>
20 <td>Build Status</td>
21 <td>
22 <a href="https://travis-ci.org/pydata/pandas">
23 <img src="https://travis-ci.org/pydata/pandas.svg?branch=master" alt="build status" />
24 </a>
25 </td>
26 </tr>
27 <tr>
28 <td>Conda</td>
29 <td>
30 <a href="http://pandas.pydata.org">
31 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda downloads" />
32 </a>
33 </td>
34 </tr>
35 <tr>
36 <td>PyPI</td>
37 <td>
38 <a href="https://pypi.python.org/pypi/pandas/">
39 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" />
40 </a>
41 </td>
42 </tr>
43 </table>
44
45 [](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
46
47 ## What is it
48
49 **pandas** is a Python package providing fast, flexible, and expressive data
50 structures designed to make working with "relational" or "labeled" data both
51 easy and intuitive. It aims to be the fundamental high-level building block for
52 doing practical, **real world** data analysis in Python. Additionally, it has
53 the broader goal of becoming **the most powerful and flexible open source data
54 analysis / manipulation tool available in any language**. It is already well on
55 its way toward this goal.
56
57 ## Main Features
58 Here are just a few of the things that pandas does well:
59
60 - Easy handling of [**missing data**][missing-data] (represented as
61 `NaN`) in floating point as well as non-floating point data
62 - Size mutability: columns can be [**inserted and
63 deleted**][insertion-deletion] from DataFrame and higher dimensional
64 objects
65 - Automatic and explicit [**data alignment**][alignment]: objects can
66 be explicitly aligned to a set of labels, or the user can simply
67 ignore the labels and let `Series`, `DataFrame`, etc. automatically
68 align the data for you in computations
69 - Powerful, flexible [**group by**][groupby] functionality to perform
70 split-apply-combine operations on data sets, for both aggregating
71 and transforming data
72 - Make it [**easy to convert**][conversion] ragged,
73 differently-indexed data in other Python and NumPy data structures
74 into DataFrame objects
75 - Intelligent label-based [**slicing**][slicing], [**fancy
76 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
77 large data sets
78 - Intuitive [**merging**][merging] and [**joining**][joining] data
79 sets
80 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
81 data sets
82 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
83 labels per tick)
84 - Robust IO tools for loading data from [**flat files**][flat-files]
85 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
86 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
87 - [**Time series**][timeseries]-specific functionality: date range
88 generation and frequency conversion, moving window statistics,
89 moving window linear regressions, date shifting and lagging, etc.
90
91
92 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
93 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
94 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
95 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
96 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
97 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
98 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
99 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
100 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
101 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
102 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
103 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
104 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
105 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
106 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
107 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
108 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
109 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
110
111 ## Where to get it
112 The source code is currently hosted on GitHub at:
113 http://github.com/pydata/pandas
114
115 Binary installers for the latest released version are available at the Python
116 package index
117
118 http://pypi.python.org/pypi/pandas/
119
120 And via `easy_install`:
121
122 ```sh
123 easy_install pandas
124 ```
125
126 or `pip`:
127
128 ```sh
129 pip install pandas
130 ```
131
132 or `conda`:
133
134 ```sh
135 conda install pandas
136 ```
137
138 ## Dependencies
139 - [NumPy](http://www.numpy.org): 1.7.0 or higher
140 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher
141 - [pytz](http://pytz.sourceforge.net)
142 - Needed for time zone support with ``pandas.date_range``
143
144 ### Highly Recommended Dependencies
145 - [numexpr](https://github.com/pydata/numexpr)
146 - Needed to accelerate some expression evaluation operations
147 - Required by PyTables
148 - [bottleneck](http://berkeleyanalytics.com/bottleneck)
149 - Needed to accelerate certain numerical operations
150
151 ### Optional dependencies
152 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher.
153 - [SciPy](http://www.scipy.org): miscellaneous statistical functions
154 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage
155 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended.
156 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting
157 - [statsmodels](http://statsmodels.sourceforge.net/)
158 - Needed for parts of `pandas.stats`
159 - For Excel I/O:
160 - [xlrd/xlwt](http://www.python-excel.org/)
161 - Excel reading (xlrd) and writing (xlwt)
162 - [openpyxl](http://packages.python.org/openpyxl/)
163 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for
164 writing .xlsx files
165 - xlrd >= 0.9.0
166 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter)
167 - Alternative Excel writer.
168 - [Google bq Command Line Tool](https://cloud.google.com/bigquery/bq-command-line-tool)
169 - Needed for `pandas.io.gbq`
170 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access.
171 - One of the following combinations of libraries is needed to use the
172 top-level [`pandas.read_html`][read-html-docs] function:
173 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any
174 recent version of [html5lib][html5lib] is okay.)
175 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml]
176 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml]
177 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas]
178 for reasons as to why you should probably **not** take this approach.
179
180 #### Notes about HTML parsing libraries
181 - If you install [BeautifulSoup4][BeautifulSoup4] you must install
182 either [lxml][lxml] or [html5lib][html5lib] or both.
183 `pandas.read_html` will **not** work with *only* `BeautifulSoup4`
184 installed.
185 - You are strongly encouraged to read [HTML reading
186 gotchas][html-gotchas]. It explains issues surrounding the
187 installation and usage of the above three libraries.
188 - You may need to install an older version of
189 [BeautifulSoup4][BeautifulSoup4]:
190 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
191 32-bit Ubuntu/Debian
192 - Additionally, if you're using [Anaconda][Anaconda] you should
193 definitely read [the gotchas about HTML parsing][html-gotchas]
194 libraries
195 - If you're on a system with `apt-get` you can do
196
197 ```sh
198 sudo apt-get build-dep python-lxml
199 ```
200
201 to get the necessary dependencies for installation of [lxml][lxml].
202 This will prevent further headaches down the line.
203
204 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib"
205 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4"
206 [lxml]: http://lxml.de
207 [Anaconda]: https://store.continuum.io/cshop/anaconda
208 [NumPy]: http://numpy.scipy.org/
209 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing
210 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html
211
212 ## Installation from sources
213 To install pandas from source you need Cython in addition to the normal
214 dependencies above. Cython can be installed from pypi:
215
216 ```sh
217 pip install cython
218 ```
219
220 In the `pandas` directory (same one where you found this file after
221 cloning the git repo), execute:
222
223 ```sh
224 python setup.py install
225 ```
226
227 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
228
229 ```sh
230 python setup.py develop
231 ```
232
233 Alternatively, you can use `pip` if you want all the dependencies pulled
234 in automatically (the `-e` option is for installing it in [development
235 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
236
237 ```sh
238 pip install -e .
239 ```
240
241 On Windows, you will need to install MinGW and execute:
242
243 ```sh
244 python setup.py build --compiler=mingw32
245 python setup.py install
246 ```
247
248 See http://pandas.pydata.org/ for more information.
249
250 ## License
251 BSD
252
253 ## Documentation
254 The official documentation is hosted on PyData.org: http://pandas.pydata.org/
255
256 The Sphinx documentation should provide a good starting point for learning how
257 to use the library. Expect the docs to continue to expand as time goes on.
258
259 ## Background
260 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
261 has been under active development since then.
262
263 ## Discussion and Development
264 Since pandas development is related to a number of other scientific
265 Python projects, questions are welcome on the scipy-user mailing
266 list. Specialized discussions or design issues should take place on
267 the PyData mailing list / Google group:
268
269 https://groups.google.com/forum/#!forum/pydata
270
[end of README.md]
[start of bench/bench_unique.py]
1 from __future__ import print_function
2 from pandas import *
3 from pandas.util.testing import rands
4 from pandas.compat import range, zip
5 import pandas._tseries as lib
6 import numpy as np
7 import matplotlib.pyplot as plt
8
9 N = 50000
10 K = 10000
11
12 groups = np.array([rands(10) for _ in range(K)], dtype='O')
13 groups2 = np.array([rands(10) for _ in range(K)], dtype='O')
14
15 labels = np.tile(groups, N // K)
16 labels2 = np.tile(groups2, N // K)
17 data = np.random.randn(N)
18
19
20 def timeit(f, niter):
21 import gc
22 import time
23 gc.disable()
24 start = time.time()
25 for _ in range(niter):
26 f()
27 elapsed = (time.time() - start) / niter
28 gc.enable()
29 return elapsed
30
31
32 def algo1():
33 unique_labels = np.unique(labels)
34 result = np.empty(len(unique_labels))
35 for i, label in enumerate(unique_labels):
36 result[i] = data[labels == label].sum()
37
38
39 def algo2():
40 unique_labels = np.unique(labels)
41 indices = lib.groupby_indices(labels)
42 result = np.empty(len(unique_labels))
43
44 for i, label in enumerate(unique_labels):
45 result[i] = data.take(indices[label]).sum()
46
47
48 def algo3_nosort():
49 rizer = lib.DictFactorizer()
50 labs, counts = rizer.factorize(labels, sort=False)
51 k = len(rizer.uniques)
52 out = np.empty(k)
53 lib.group_add(out, counts, data, labs)
54
55
56 def algo3_sort():
57 rizer = lib.DictFactorizer()
58 labs, counts = rizer.factorize(labels, sort=True)
59 k = len(rizer.uniques)
60 out = np.empty(k)
61 lib.group_add(out, counts, data, labs)
62
63 import numpy as np
64 import random
65
66
67 # dict to hold results
68 counts = {}
69
70 # a hack to generate random key, value pairs.
71 # 5k keys, 100k values
72 x = np.tile(np.arange(5000, dtype='O'), 20)
73 random.shuffle(x)
74 xarr = x
75 x = [int(y) for y in x]
76 data = np.random.uniform(0, 1, 100000)
77
78
79 def f():
80 # groupby sum
81 for k, v in zip(x, data):
82 try:
83 counts[k] += v
84 except KeyError:
85 counts[k] = v
86
87
88 def f2():
89 rizer = lib.DictFactorizer()
90 labs, counts = rizer.factorize(xarr, sort=False)
91 k = len(rizer.uniques)
92 out = np.empty(k)
93 lib.group_add(out, counts, data, labs)
94
95
96 def algo4():
97 rizer = lib.DictFactorizer()
98 labs1, _ = rizer.factorize(labels, sort=False)
99 k1 = len(rizer.uniques)
100
101 rizer = lib.DictFactorizer()
102 labs2, _ = rizer.factorize(labels2, sort=False)
103 k2 = len(rizer.uniques)
104
105 group_id = labs1 * k2 + labs2
106 max_group = k1 * k2
107
108 if max_group > 1e6:
109 rizer = lib.Int64Factorizer(len(group_id))
110 group_id, _ = rizer.factorize(group_id.astype('i8'), sort=True)
111 max_group = len(rizer.uniques)
112
113 out = np.empty(max_group)
114 counts = np.zeros(max_group, dtype='i4')
115 lib.group_add(out, counts, data, group_id)
116
117 # cumtime percall filename:lineno(function)
118 # 0.592 0.592 <string>:1(<module>)
119 # 0.584 0.006 groupby_ex.py:37(algo3_nosort)
120 # 0.535 0.005 {method 'factorize' of 'DictFactorizer' objects}
121 # 0.047 0.000 {pandas._tseries.group_add}
122 # 0.002 0.000 numeric.py:65(zeros_like)
123 # 0.001 0.000 {method 'fill' of 'numpy.ndarray' objects}
124 # 0.000 0.000 {numpy.core.multiarray.empty_like}
125 # 0.000 0.000 {numpy.core.multiarray.empty}
126
127 # UNIQUE timings
128
129 # N = 10000000
130 # K = 500000
131
132 # groups = np.array([rands(10) for _ in range(K)], dtype='O')
133
134 # labels = np.tile(groups, N // K)
135 data = np.random.randn(N)
136
137 data = np.random.randn(N)
138
139 Ks = [100, 1000, 5000, 10000, 25000, 50000, 100000]
140
141 # Ks = [500000, 1000000, 2500000, 5000000, 10000000]
142
143 import psutil
144 import os
145 import gc
146
147 pid = os.getpid()
148 proc = psutil.Process(pid)
149
150
151 def dict_unique(values, expected_K, sort=False, memory=False):
152 if memory:
153 gc.collect()
154 before_mem = proc.get_memory_info().rss
155
156 rizer = lib.DictFactorizer()
157 result = rizer.unique_int64(values)
158
159 if memory:
160 result = proc.get_memory_info().rss - before_mem
161 return result
162
163 if sort:
164 result.sort()
165 assert(len(result) == expected_K)
166 return result
167
168
169 def khash_unique(values, expected_K, size_hint=False, sort=False,
170 memory=False):
171 if memory:
172 gc.collect()
173 before_mem = proc.get_memory_info().rss
174
175 if size_hint:
176 rizer = lib.Factorizer(len(values))
177 else:
178 rizer = lib.Factorizer(100)
179
180 result = []
181 result = rizer.unique(values)
182
183 if memory:
184 result = proc.get_memory_info().rss - before_mem
185 return result
186
187 if sort:
188 result.sort()
189 assert(len(result) == expected_K)
190
191
192 def khash_unique_str(values, expected_K, size_hint=False, sort=False,
193 memory=False):
194 if memory:
195 gc.collect()
196 before_mem = proc.get_memory_info().rss
197
198 if size_hint:
199 rizer = lib.StringHashTable(len(values))
200 else:
201 rizer = lib.StringHashTable(100)
202
203 result = []
204 result = rizer.unique(values)
205
206 if memory:
207 result = proc.get_memory_info().rss - before_mem
208 return result
209
210 if sort:
211 result.sort()
212 assert(len(result) == expected_K)
213
214
215 def khash_unique_int64(values, expected_K, size_hint=False, sort=False):
216 if size_hint:
217 rizer = lib.Int64HashTable(len(values))
218 else:
219 rizer = lib.Int64HashTable(100)
220
221 result = []
222 result = rizer.unique(values)
223
224 if sort:
225 result.sort()
226 assert(len(result) == expected_K)
227
228
229 def hash_bench():
230 numpy = []
231 dict_based = []
232 dict_based_sort = []
233 khash_hint = []
234 khash_nohint = []
235 for K in Ks:
236 print(K)
237 # groups = np.array([rands(10) for _ in range(K)])
238 # labels = np.tile(groups, N // K).astype('O')
239
240 groups = np.random.randint(0, long(100000000000), size=K)
241 labels = np.tile(groups, N // K)
242 dict_based.append(timeit(lambda: dict_unique(labels, K), 20))
243 khash_nohint.append(timeit(lambda: khash_unique_int64(labels, K), 20))
244 khash_hint.append(timeit(lambda: khash_unique_int64(labels, K,
245 size_hint=True), 20))
246
247 # memory, hard to get
248 # dict_based.append(np.mean([dict_unique(labels, K, memory=True)
249 # for _ in range(10)]))
250 # khash_nohint.append(np.mean([khash_unique(labels, K, memory=True)
251 # for _ in range(10)]))
252 # khash_hint.append(np.mean([khash_unique(labels, K, size_hint=True, memory=True)
253 # for _ in range(10)]))
254
255 # dict_based_sort.append(timeit(lambda: dict_unique(labels, K,
256 # sort=True), 10))
257 # numpy.append(timeit(lambda: np.unique(labels), 10))
258
259 # unique_timings = DataFrame({'numpy.unique' : numpy,
260 # 'dict, no sort' : dict_based,
261 # 'dict, sort' : dict_based_sort},
262 # columns=['dict, no sort',
263 # 'dict, sort', 'numpy.unique'],
264 # index=Ks)
265
266 unique_timings = DataFrame({'dict': dict_based,
267 'khash, preallocate': khash_hint,
268 'khash': khash_nohint},
269 columns=['khash, preallocate', 'khash', 'dict'],
270 index=Ks)
271
272 unique_timings.plot(kind='bar', legend=False)
273 plt.legend(loc='best')
274 plt.title('Unique on 100,000 values, int64')
275 plt.xlabel('Number of unique labels')
276 plt.ylabel('Mean execution time')
277
278 plt.show()
279
[end of bench/bench_unique.py]
[start of bench/bench_with_subset.py]
1 #!/usr/bin/env python
2
3 """
4 Microbenchmarks for comparison with R's "with" and "subset" functions
5 """
6
7 from __future__ import print_function
8 import numpy as np
9 from numpy import array
10 from timeit import repeat as timeit
11 from pandas.compat import range, zip
12 from pandas import DataFrame
13
14
15 setup_common = """from pandas import DataFrame
16 from numpy.random import randn
17 df = DataFrame(randn(%d, 3), columns=list('abc'))
18 %s"""
19
20
21 setup_with = "s = 'a + b * (c ** 2 + b ** 2 - a) / (a * c) ** 3'"
22
23
24 def bench_with(n, times=10, repeat=3, engine='numexpr'):
25 return np.array(timeit('df.eval(s, engine=%r)' % engine,
26 setup=setup_common % (n, setup_with),
27 repeat=repeat, number=times)) / times
28
29
30 setup_subset = "s = 'a <= b <= c ** 2 + b ** 2 - a and b > c'"
31
32
33 def bench_subset(n, times=10, repeat=3, engine='numexpr'):
34 return np.array(timeit('df.query(s, engine=%r)' % engine,
35 setup=setup_common % (n, setup_subset),
36 repeat=repeat, number=times)) / times
37
38
39 def bench(mn=1, mx=7, num=100, engines=('python', 'numexpr'), verbose=False):
40 r = np.logspace(mn, mx, num=num).round().astype(int)
41
42 ev = DataFrame(np.empty((num, len(engines))), columns=engines)
43 qu = ev.copy(deep=True)
44
45 ev['size'] = qu['size'] = r
46
47 for engine in engines:
48 for i, n in enumerate(r):
49 if verbose:
50 print('engine: %r, i == %d' % (engine, i))
51 ev.loc[i, engine] = bench_with(n, times=1, repeat=1, engine=engine)
52 qu.loc[i, engine] = bench_subset(n, times=1, repeat=1,
53 engine=engine)
54
55 return ev, qu
56
57
58 def plot_perf(df, engines, title, filename=None):
59 from matplotlib.pyplot import figure, rc
60
61 try:
62 from mpltools import style
63 except ImportError:
64 pass
65 else:
66 style.use('ggplot')
67
68 rc('text', usetex=True)
69
70 fig = figure(figsize=(4, 3), dpi=100)
71 ax = fig.add_subplot(111)
72
73 for engine in engines:
74 ax.plot(df.size, df[engine], label=engine, lw=2)
75
76 ax.set_xlabel('Number of Rows')
77 ax.set_ylabel('Time (s)')
78 ax.set_title(title)
79 ax.legend(loc='best')
80 ax.tick_params(top=False, right=False)
81
82 fig.tight_layout()
83
84 if filename is not None:
85 fig.savefig(filename)
86
87
88 if __name__ == '__main__':
89 import os
90 import pandas as pd
91
92 pandas_dir = os.path.dirname(os.path.abspath(os.path.dirname(__file__)))
93 static_path = os.path.join(pandas_dir, 'doc', 'source', '_static')
94
95 join = lambda p: os.path.join(static_path, p)
96
97 fn = join('eval-query-perf-data.h5')
98
99 engines = 'python', 'numexpr'
100
101 if not os.path.exists(fn):
102 ev, qu = bench(verbose=True)
103 ev.to_hdf(fn, 'eval')
104 qu.to_hdf(fn, 'query')
105 else:
106 ev = pd.read_hdf(fn, 'eval')
107 qu = pd.read_hdf(fn, 'query')
108
109 plot_perf(ev, engines, 'DataFrame.eval()', filename=join('eval-perf.png'))
110 plot_perf(qu, engines, 'DataFrame.query()',
111 filename=join('query-perf.png'))
112
113 plot_perf(ev[ev.size <= 50000], engines, 'DataFrame.eval()',
114 filename=join('eval-perf-small.png'))
115 plot_perf(qu[qu.size <= 500000], engines, 'DataFrame.query()',
116 filename=join('query-perf-small.png'))
117
[end of bench/bench_with_subset.py]
[start of pandas/io/wb.py]
1 # -*- coding: utf-8 -*-
2
3 from __future__ import print_function
4
5 from pandas.compat import map, reduce, range, lrange
6 from pandas.io.common import urlopen
7 from pandas.io import json
8 import pandas
9 import numpy as np
10 import warnings
11
12 warnings.warn("\n"
13 "The pandas.io.wb module is moved to a separate package "
14 "(pandas-datareader) and will be removed from pandas in a "
15 "future version.\nAfter installing the pandas-datareader package "
16 "(https://github.com/pydata/pandas-datareader), you can change "
17 "the import ``from pandas.io import data, wb`` to "
18 "``from pandas_datareader import data, wb``.",
19 FutureWarning)
20
21
22 # This list of country codes was pulled from wikipedia during October 2014.
23 # While some exceptions do exist, it is the best proxy for countries supported
24 # by World Bank. It is an aggregation of the 2-digit ISO 3166-1 alpha-2, and
25 # 3-digit ISO 3166-1 alpha-3, codes, with 'all', 'ALL', and 'All' appended to
26 # the end.
27
28 country_codes = ['AD', 'AE', 'AF', 'AG', 'AI', 'AL', 'AM', 'AO', 'AQ', 'AR', \
29 'AS', 'AT', 'AU', 'AW', 'AX', 'AZ', 'BA', 'BB', 'BD', 'BE', \
30 'BF', 'BG', 'BH', 'BI', 'BJ', 'BL', 'BM', 'BN', 'BO', 'BQ', \
31 'BR', 'BS', 'BT', 'BV', 'BW', 'BY', 'BZ', 'CA', 'CC', 'CD', \
32 'CF', 'CG', 'CH', 'CI', 'CK', 'CL', 'CM', 'CN', 'CO', 'CR', \
33 'CU', 'CV', 'CW', 'CX', 'CY', 'CZ', 'DE', 'DJ', 'DK', 'DM', \
34 'DO', 'DZ', 'EC', 'EE', 'EG', 'EH', 'ER', 'ES', 'ET', 'FI', \
35 'FJ', 'FK', 'FM', 'FO', 'FR', 'GA', 'GB', 'GD', 'GE', 'GF', \
36 'GG', 'GH', 'GI', 'GL', 'GM', 'GN', 'GP', 'GQ', 'GR', 'GS', \
37 'GT', 'GU', 'GW', 'GY', 'HK', 'HM', 'HN', 'HR', 'HT', 'HU', \
38 'ID', 'IE', 'IL', 'IM', 'IN', 'IO', 'IQ', 'IR', 'IS', 'IT', \
39 'JE', 'JM', 'JO', 'JP', 'KE', 'KG', 'KH', 'KI', 'KM', 'KN', \
40 'KP', 'KR', 'KW', 'KY', 'KZ', 'LA', 'LB', 'LC', 'LI', 'LK', \
41 'LR', 'LS', 'LT', 'LU', 'LV', 'LY', 'MA', 'MC', 'MD', 'ME', \
42 'MF', 'MG', 'MH', 'MK', 'ML', 'MM', 'MN', 'MO', 'MP', 'MQ', \
43 'MR', 'MS', 'MT', 'MU', 'MV', 'MW', 'MX', 'MY', 'MZ', 'NA', \
44 'NC', 'NE', 'NF', 'NG', 'NI', 'NL', 'NO', 'NP', 'NR', 'NU', \
45 'NZ', 'OM', 'PA', 'PE', 'PF', 'PG', 'PH', 'PK', 'PL', 'PM', \
46 'PN', 'PR', 'PS', 'PT', 'PW', 'PY', 'QA', 'RE', 'RO', 'RS', \
47 'RU', 'RW', 'SA', 'SB', 'SC', 'SD', 'SE', 'SG', 'SH', 'SI', \
48 'SJ', 'SK', 'SL', 'SM', 'SN', 'SO', 'SR', 'SS', 'ST', 'SV', \
49 'SX', 'SY', 'SZ', 'TC', 'TD', 'TF', 'TG', 'TH', 'TJ', 'TK', \
50 'TL', 'TM', 'TN', 'TO', 'TR', 'TT', 'TV', 'TW', 'TZ', 'UA', \
51 'UG', 'UM', 'US', 'UY', 'UZ', 'VA', 'VC', 'VE', 'VG', 'VI', \
52 'VN', 'VU', 'WF', 'WS', 'YE', 'YT', 'ZA', 'ZM', 'ZW', \
53 'ABW', 'AFG', 'AGO', 'AIA', 'ALA', 'ALB', 'AND', 'ARE', \
54 'ARG', 'ARM', 'ASM', 'ATA', 'ATF', 'ATG', 'AUS', 'AUT', \
55 'AZE', 'BDI', 'BEL', 'BEN', 'BES', 'BFA', 'BGD', 'BGR', \
56 'BHR', 'BHS', 'BIH', 'BLM', 'BLR', 'BLZ', 'BMU', 'BOL', \
57 'BRA', 'BRB', 'BRN', 'BTN', 'BVT', 'BWA', 'CAF', 'CAN', \
58 'CCK', 'CHE', 'CHL', 'CHN', 'CIV', 'CMR', 'COD', 'COG', \
59 'COK', 'COL', 'COM', 'CPV', 'CRI', 'CUB', 'CUW', 'CXR', \
60 'CYM', 'CYP', 'CZE', 'DEU', 'DJI', 'DMA', 'DNK', 'DOM', \
61 'DZA', 'ECU', 'EGY', 'ERI', 'ESH', 'ESP', 'EST', 'ETH', \
62 'FIN', 'FJI', 'FLK', 'FRA', 'FRO', 'FSM', 'GAB', 'GBR', \
63 'GEO', 'GGY', 'GHA', 'GIB', 'GIN', 'GLP', 'GMB', 'GNB', \
64 'GNQ', 'GRC', 'GRD', 'GRL', 'GTM', 'GUF', 'GUM', 'GUY', \
65 'HKG', 'HMD', 'HND', 'HRV', 'HTI', 'HUN', 'IDN', 'IMN', \
66 'IND', 'IOT', 'IRL', 'IRN', 'IRQ', 'ISL', 'ISR', 'ITA', \
67 'JAM', 'JEY', 'JOR', 'JPN', 'KAZ', 'KEN', 'KGZ', 'KHM', \
68 'KIR', 'KNA', 'KOR', 'KWT', 'LAO', 'LBN', 'LBR', 'LBY', \
69 'LCA', 'LIE', 'LKA', 'LSO', 'LTU', 'LUX', 'LVA', 'MAC', \
70 'MAF', 'MAR', 'MCO', 'MDA', 'MDG', 'MDV', 'MEX', 'MHL', \
71 'MKD', 'MLI', 'MLT', 'MMR', 'MNE', 'MNG', 'MNP', 'MOZ', \
72 'MRT', 'MSR', 'MTQ', 'MUS', 'MWI', 'MYS', 'MYT', 'NAM', \
73 'NCL', 'NER', 'NFK', 'NGA', 'NIC', 'NIU', 'NLD', 'NOR', \
74 'NPL', 'NRU', 'NZL', 'OMN', 'PAK', 'PAN', 'PCN', 'PER', \
75 'PHL', 'PLW', 'PNG', 'POL', 'PRI', 'PRK', 'PRT', 'PRY', \
76 'PSE', 'PYF', 'QAT', 'REU', 'ROU', 'RUS', 'RWA', 'SAU', \
77 'SDN', 'SEN', 'SGP', 'SGS', 'SHN', 'SJM', 'SLB', 'SLE', \
78 'SLV', 'SMR', 'SOM', 'SPM', 'SRB', 'SSD', 'STP', 'SUR', \
79 'SVK', 'SVN', 'SWE', 'SWZ', 'SXM', 'SYC', 'SYR', 'TCA', \
80 'TCD', 'TGO', 'THA', 'TJK', 'TKL', 'TKM', 'TLS', 'TON', \
81 'TTO', 'TUN', 'TUR', 'TUV', 'TWN', 'TZA', 'UGA', 'UKR', \
82 'UMI', 'URY', 'USA', 'UZB', 'VAT', 'VCT', 'VEN', 'VGB', \
83 'VIR', 'VNM', 'VUT', 'WLF', 'WSM', 'YEM', 'ZAF', 'ZMB', \
84 'ZWE', 'all', 'ALL', 'All']
85
86 def download(country=['MX', 'CA', 'US'], indicator=['NY.GDP.MKTP.CD', 'NY.GNS.ICTR.ZS'],
87 start=2003, end=2005,errors='warn'):
88 """
89 Download data series from the World Bank's World Development Indicators
90
91 Parameters
92 ----------
93
94 indicator: string or list of strings
95 taken from the ``id`` field in ``WDIsearch()``
96
97 country: string or list of strings.
98 ``all`` downloads data for all countries
99 2 or 3 character ISO country codes select individual
100 countries (e.g.``US``,``CA``) or (e.g.``USA``,``CAN``). The codes
101 can be mixed.
102
103 The two ISO lists of countries, provided by wikipedia, are hardcoded
104 into pandas as of 11/10/2014.
105
106 start: int
107 First year of the data series
108
109 end: int
110 Last year of the data series (inclusive)
111
112 errors: str {'ignore', 'warn', 'raise'}, default 'warn'
113 Country codes are validated against a hardcoded list. This controls
114 the outcome of that validation, and attempts to also apply
115 to the results from world bank.
116
117 errors='raise', will raise a ValueError on a bad country code.
118
119 Returns
120 -------
121
122 ``pandas`` DataFrame with columns: country, iso_code, year,
123 indicator value.
124
125 """
126
127 if type(country) == str:
128 country = [country]
129
130 bad_countries = np.setdiff1d(country, country_codes)
131
132 # Validate the input
133 if len(bad_countries) > 0:
134 tmp = ", ".join(bad_countries)
135 if errors == 'raise':
136 raise ValueError("Invalid Country Code(s): %s" % tmp)
137 if errors == 'warn':
138 warnings.warn('Non-standard ISO country codes: %s' % tmp)
139
140 # Work with a list of indicators
141 if type(indicator) == str:
142 indicator = [indicator]
143
144 # Download
145 data = []
146 bad_indicators = {}
147 for ind in indicator:
148 one_indicator_data,msg = _get_data(ind, country, start, end)
149 if msg == "Success":
150 data.append(one_indicator_data)
151 else:
152 bad_indicators[ind] = msg
153
154 if len(bad_indicators.keys()) > 0:
155 bad_ind_msgs = [i + " : " + m for i,m in bad_indicators.items()]
156 bad_ind_msgs = "\n\n".join(bad_ind_msgs)
157 bad_ind_msgs = "\n\nInvalid Indicators:\n\n%s" % bad_ind_msgs
158 if errors == 'raise':
159 raise ValueError(bad_ind_msgs)
160 if errors == 'warn':
161 warnings.warn(bad_ind_msgs)
162
163 # Confirm we actually got some data, and build Dataframe
164 if len(data) > 0:
165 out = reduce(lambda x, y: x.merge(y, how='outer'), data)
166 out = out.drop('iso_code', axis=1)
167 out = out.set_index(['country', 'year'])
168 out = out._convert(datetime=True, numeric=True)
169 return out
170 else:
171 msg = "No indicators returned data."
172 if errors == 'ignore':
173 msg += " Set errors='warn' for more information."
174 raise ValueError(msg)
175
176 def _get_data(indicator="NY.GNS.ICTR.GN.ZS", country='US',
177 start=2002, end=2005):
178
179 if type(country) == str:
180 country = [country]
181
182 countries = ';'.join(country)
183
184 # Build URL for api call
185 url = ("http://api.worldbank.org/countries/" + countries + "/indicators/" +
186 indicator + "?date=" + str(start) + ":" + str(end) +
187 "&per_page=25000&format=json")
188
189 # Download
190 with urlopen(url) as response:
191 data = response.read()
192
193 # Check to see if there is a possible problem
194 possible_message = json.loads(data)[0]
195 if 'message' in possible_message.keys():
196 msg = possible_message['message'][0]
197 try:
198 msg = msg['key'].split() + ["\n "] + msg['value'].split()
199 wb_err = ' '.join(msg)
200 except:
201 wb_err = ""
202 if 'key' in msg.keys():
203 wb_err = msg['key'] + "\n "
204 if 'value' in msg.keys():
205 wb_err += msg['value']
206 error_msg = "Problem with a World Bank Query \n %s"
207 return None, error_msg % wb_err
208
209 if 'total' in possible_message.keys():
210 if possible_message['total'] == 0:
211 return None, "No results from world bank."
212
213 # Parse JSON file
214 data = json.loads(data)[1]
215 country = [x['country']['value'] for x in data]
216 iso_code = [x['country']['id'] for x in data]
217 year = [x['date'] for x in data]
218 value = [x['value'] for x in data]
219 # Prepare output
220 out = pandas.DataFrame([country, iso_code, year, value]).T
221 out.columns = ['country', 'iso_code', 'year', indicator]
222 return out,"Success"
223
224 def get_countries():
225 '''Query information about countries
226 '''
227 url = 'http://api.worldbank.org/countries/?per_page=1000&format=json'
228 with urlopen(url) as response:
229 data = response.read()
230 data = json.loads(data)[1]
231 data = pandas.DataFrame(data)
232 data.adminregion = [x['value'] for x in data.adminregion]
233 data.incomeLevel = [x['value'] for x in data.incomeLevel]
234 data.lendingType = [x['value'] for x in data.lendingType]
235 data.region = [x['value'] for x in data.region]
236 data = data.rename(columns={'id': 'iso3c', 'iso2Code': 'iso2c'})
237 return data
238
239 def get_indicators():
240 '''Download information about all World Bank data series
241 '''
242 url = 'http://api.worldbank.org/indicators?per_page=50000&format=json'
243 with urlopen(url) as response:
244 data = response.read()
245 data = json.loads(data)[1]
246 data = pandas.DataFrame(data)
247 # Clean fields
248 data.source = [x['value'] for x in data.source]
249 fun = lambda x: x.encode('ascii', 'ignore')
250 data.sourceOrganization = data.sourceOrganization.apply(fun)
251 # Clean topic field
252
253 def get_value(x):
254 try:
255 return x['value']
256 except:
257 return ''
258 fun = lambda x: [get_value(y) for y in x]
259 data.topics = data.topics.apply(fun)
260 data.topics = data.topics.apply(lambda x: ' ; '.join(x))
261     # Clean output
262 data = data.sort(columns='id')
263 data.index = pandas.Index(lrange(data.shape[0]))
264 return data
265
266 _cached_series = None
267
268
269 def search(string='gdp.*capi', field='name', case=False):
270 """
271 Search available data series from the world bank
272
273 Parameters
274 ----------
275
276 string: string
277 regular expression
278 field: string
279 id, name, source, sourceNote, sourceOrganization, topics
280 See notes below
281 case: bool
282 case sensitive search?
283
284 Notes
285 -----
286
287 The first time this function is run it will download and cache the full
288 list of available series. Depending on the speed of your network
289 connection, this can take time. Subsequent searches will use the cached
290 copy, so they should be much faster.
291
292 id : Data series indicator (for use with the ``indicator`` argument of
293     ``WDI()``) e.g. NY.GNS.ICTR.GN.ZS
294 name: Short description of the data series
295 source: Data collection project
296 sourceOrganization: Data collection organization
297 note:
298 sourceNote:
299 topics:
300 """
301 # Create cached list of series if it does not exist
302 global _cached_series
303 if type(_cached_series) is not pandas.core.frame.DataFrame:
304 _cached_series = get_indicators()
305 data = _cached_series[field]
306 idx = data.str.contains(string, case=case)
307 out = _cached_series.ix[idx].dropna()
308 return out
309
[end of pandas/io/wb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319
|
PERF: checking is_monotonic_increasing/decreasing before sorting on an index
We don't keep the sortedness state in an index per se, but it is rather cheap to check
- `is_monotonic_increasing` or `is_monotonic_decreasing` on a regular index
- MultiIndex should check `is_lexsorted` (this might be done already)
```
In [8]: df = DataFrame(np.random.randn(1000000,2),columns=list('AB'))
In [9]: %timeit df.sort_index()
10 loops, best of 3: 37.1 ms per loop
In [10]: %timeit -n 1 -r 1 df.index.is_monotonic_increasing
1 loops, best of 1: 2.01 ms per loop
In [11]: %timeit -n 1 -r 1 df.index.is_monotonic_increasin^C
KeyboardInterrupt
In [11]: %timeit df.set_index('A').sort_index()
10 loops, best of 3: 175 ms per loop
In [12]: %timeit -n 1 -r 1 df.set_index('A').index.is_monotonic_increasing
1 loops, best of 1: 9.54 ms per loop
```
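A rough user-level sketch of the proposed shortcut (illustrative only; the helper name is made up, and the actual change belongs inside `sort_index` itself, as in the patch below):
```python
# Skip the O(n log n) argsort when the index is already ordered as requested;
# the monotonicity check is only O(n).
def sorted_by_index(df, ascending=True):
    idx = df.index
    already_sorted = (
        idx.is_monotonic_increasing if ascending else idx.is_monotonic_decreasing
    )
    if already_sorted:
        return df.copy()  # mirror sort_index's copy semantics
    return df.sort_index(ascending=ascending)
```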
|
2015-10-12T08:26:46Z
|
<patch>
diff --git a/asv_bench/benchmarks/frame_methods.py b/asv_bench/benchmarks/frame_methods.py
--- a/asv_bench/benchmarks/frame_methods.py
+++ b/asv_bench/benchmarks/frame_methods.py
@@ -930,6 +930,16 @@ def time_frame_xs_row(self):
self.df.xs(50000)
+class frame_sort_index(object):
+ goal_time = 0.2
+
+ def setup(self):
+ self.df = DataFrame(randn(1000000, 2), columns=list('AB'))
+
+ def time_frame_sort_index(self):
+ self.df.sort_index()
+
+
class series_string_vector_slice(object):
goal_time = 0.2
diff --git a/doc/source/whatsnew/v0.17.1.txt b/doc/source/whatsnew/v0.17.1.txt
--- a/doc/source/whatsnew/v0.17.1.txt
+++ b/doc/source/whatsnew/v0.17.1.txt
@@ -52,6 +52,8 @@ Deprecations
Performance Improvements
~~~~~~~~~~~~~~~~~~~~~~~~
+- Checking monotonic-ness before sorting on an index (:issue:`11080`)
+
.. _whatsnew_0171.bug_fixes:
Bug Fixes
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3157,6 +3157,15 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
else:
from pandas.core.groupby import _nargsort
+ # GH11080 - Check monotonic-ness before sort an index
+ # if monotonic (already sorted), return None or copy() according to 'inplace'
+ if (ascending and labels.is_monotonic_increasing) or \
+ (not ascending and labels.is_monotonic_decreasing):
+ if inplace:
+ return
+ else:
+ return self.copy()
+
indexer = _nargsort(labels, kind=kind, ascending=ascending,
na_position=na_position)
</patch>
|
[]
|
[]
| ||||
pandas-dev__pandas-31569
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG display.max_colwidth does not accept -1 for unlimited width
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
pd.set_option("display.max_colwidth", -1)
```
#### Problem description
There is a regression with `"display.max_colwidth"`. In the past, it only accepted integers. The way to not limit the size was to pass `-1`. In pandas 1.0, this option became more consistent and not limiting the width should be done with `None`. However, the support for negative integers was removed.
Thus, one would need to set it to either `-1` or `None` depending on the pandas version. It would be best to support both options. Potentially, support for negative integers could be removed with a deprecation cycle.
</issue>
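Until both spellings are accepted, a minimal version-tolerant workaround is to try the pandas >= 1.0 spelling first and fall back to `-1` on older releases (a user-side sketch, not a pandas change):

```python
import pandas as pd

# pandas >= 1.0 expects None for "no limit"; older versions only accept an int
# and use -1 to mean unlimited, rejecting None with a ValueError.
try:
    pd.set_option("display.max_colwidth", None)
except ValueError:
    pd.set_option("display.max_colwidth", -1)
```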
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" />
30 </a>
31 </td>
32 </tr>
33 <tr>
34 <td>License</td>
35 <td>
36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
38 </a>
39 </td>
40 </tr>
41 <tr>
42 <td>Build Status</td>
43 <td>
44 <a href="https://travis-ci.org/pandas-dev/pandas">
45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
46 </a>
47 </td>
48 </tr>
49 <tr>
50 <td></td>
51 <td>
52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
54 </a>
55 </td>
56 </tr>
57 <tr>
58 <td>Coverage</td>
59 <td>
60 <a href="https://codecov.io/gh/pandas-dev/pandas">
61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
62 </a>
63 </td>
64 </tr>
65 <tr>
66 <td>Downloads</td>
67 <td>
68 <a href="https://pandas.pydata.org">
69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
70 </a>
71 </td>
72 </tr>
73 <tr>
74 <td>Gitter</td>
75 <td>
76 <a href="https://gitter.im/pydata/pandas">
77 <img src="https://badges.gitter.im/Join%20Chat.svg" />
78 </a>
79 </td>
80 </tr>
81 </table>
82
83
84
85 ## What is it?
86
87 **pandas** is a Python package providing fast, flexible, and expressive data
88 structures designed to make working with "relational" or "labeled" data both
89 easy and intuitive. It aims to be the fundamental high-level building block for
90 doing practical, **real world** data analysis in Python. Additionally, it has
91 the broader goal of becoming **the most powerful and flexible open source data
92 analysis / manipulation tool available in any language**. It is already well on
93 its way towards this goal.
94
95 ## Main Features
96 Here are just a few of the things that pandas does well:
97
98 - Easy handling of [**missing data**][missing-data] (represented as
99 `NaN`) in floating point as well as non-floating point data
100 - Size mutability: columns can be [**inserted and
101 deleted**][insertion-deletion] from DataFrame and higher dimensional
102 objects
103 - Automatic and explicit [**data alignment**][alignment]: objects can
104 be explicitly aligned to a set of labels, or the user can simply
105 ignore the labels and let `Series`, `DataFrame`, etc. automatically
106 align the data for you in computations
107 - Powerful, flexible [**group by**][groupby] functionality to perform
108 split-apply-combine operations on data sets, for both aggregating
109 and transforming data
110 - Make it [**easy to convert**][conversion] ragged,
111 differently-indexed data in other Python and NumPy data structures
112 into DataFrame objects
113 - Intelligent label-based [**slicing**][slicing], [**fancy
114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
115 large data sets
116 - Intuitive [**merging**][merging] and [**joining**][joining] data
117 sets
118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
119 data sets
120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
121 labels per tick)
122 - Robust IO tools for loading data from [**flat files**][flat-files]
123 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
125 - [**Time series**][timeseries]-specific functionality: date range
126 generation and frequency conversion, moving window statistics,
127 date shifting and lagging.
128
129
130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
148
149 ## Where to get it
150 The source code is currently hosted on GitHub at:
151 https://github.com/pandas-dev/pandas
152
153 Binary installers for the latest released version are available at the [Python
154 package index](https://pypi.org/project/pandas) and on conda.
155
156 ```sh
157 # conda
158 conda install pandas
159 ```
160
161 ```sh
162 # or PyPI
163 pip install pandas
164 ```
165
166 ## Dependencies
167 - [NumPy](https://www.numpy.org)
168 - [python-dateutil](https://labix.org/python-dateutil)
169 - [pytz](https://pythonhosted.org/pytz)
170
171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
172
173 ## Installation from sources
174 To install pandas from source you need Cython in addition to the normal
175 dependencies above. Cython can be installed from pypi:
176
177 ```sh
178 pip install cython
179 ```
180
181 In the `pandas` directory (same one where you found this file after
182 cloning the git repo), execute:
183
184 ```sh
185 python setup.py install
186 ```
187
188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
189
190
191 ```sh
192 python -m pip install -e . --no-build-isolation --no-use-pep517
193 ```
194
195 If you have `make`, you can also use `make develop` to run the same command.
196
197 or alternatively
198
199 ```sh
200 python setup.py develop
201 ```
202
203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
204
205 ## License
206 [BSD 3](LICENSE)
207
208 ## Documentation
209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
210
211 ## Background
212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
213 has been under active development since then.
214
215 ## Getting Help
216
217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
219
220 ## Discussion and Development
221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
222
223 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
224
225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
226
227 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
228
229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
230
231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
232
233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
234
235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
236
237 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
238
[end of README.md]
[start of pandas/core/config_init.py]
1 """
2 This module is imported from the pandas package __init__.py file
3 in order to ensure that the core.config options registered here will
4 be available as soon as the user loads the package. if register_option
5 is invoked inside specific modules, they will not be registered until that
6 module is imported, which may or may not be a problem.
7
8 If you need to make sure options are available even before a certain
9 module is imported, register them here rather than in the module.
10
11 """
12 import pandas._config.config as cf
13 from pandas._config.config import (
14 is_bool,
15 is_callable,
16 is_instance_factory,
17 is_int,
18 is_nonnegative_int,
19 is_one_of_factory,
20 is_text,
21 )
22
23 # compute
24
25 use_bottleneck_doc = """
26 : bool
27 Use the bottleneck library to accelerate if it is installed,
28 the default is True
29 Valid values: False,True
30 """
31
32
33 def use_bottleneck_cb(key):
34 from pandas.core import nanops
35
36 nanops.set_use_bottleneck(cf.get_option(key))
37
38
39 use_numexpr_doc = """
40 : bool
41 Use the numexpr library to accelerate computation if it is installed,
42 the default is True
43 Valid values: False,True
44 """
45
46
47 def use_numexpr_cb(key):
48 from pandas.core.computation import expressions
49
50 expressions.set_use_numexpr(cf.get_option(key))
51
52
53 with cf.config_prefix("compute"):
54 cf.register_option(
55 "use_bottleneck",
56 True,
57 use_bottleneck_doc,
58 validator=is_bool,
59 cb=use_bottleneck_cb,
60 )
61 cf.register_option(
62 "use_numexpr", True, use_numexpr_doc, validator=is_bool, cb=use_numexpr_cb
63 )
64 #
65 # options from the "display" namespace
66
67 pc_precision_doc = """
68 : int
69 Floating point output precision (number of significant digits). This is
70 only a suggestion
71 """
72
73 pc_colspace_doc = """
74 : int
75 Default space for DataFrame columns.
76 """
77
78 pc_max_rows_doc = """
79 : int
80 If max_rows is exceeded, switch to truncate view. Depending on
81 `large_repr`, objects are either centrally truncated or printed as
82 a summary view. 'None' value means unlimited.
83
84 In case python/IPython is running in a terminal and `large_repr`
85 equals 'truncate' this can be set to 0 and pandas will auto-detect
86 the height of the terminal and print a truncated object which fits
87 the screen height. The IPython notebook, IPython qtconsole, or
88 IDLE do not run in a terminal and hence it is not possible to do
89 correct auto-detection.
90 """
91
92 pc_min_rows_doc = """
93 : int
94 The numbers of rows to show in a truncated view (when `max_rows` is
95 exceeded). Ignored when `max_rows` is set to None or 0. When set to
96 None, follows the value of `max_rows`.
97 """
98
99 pc_max_cols_doc = """
100 : int
101 If max_cols is exceeded, switch to truncate view. Depending on
102 `large_repr`, objects are either centrally truncated or printed as
103 a summary view. 'None' value means unlimited.
104
105 In case python/IPython is running in a terminal and `large_repr`
106 equals 'truncate' this can be set to 0 and pandas will auto-detect
107 the width of the terminal and print a truncated object which fits
108 the screen width. The IPython notebook, IPython qtconsole, or IDLE
109 do not run in a terminal and hence it is not possible to do
110 correct auto-detection.
111 """
112
113 pc_max_categories_doc = """
114 : int
115 This sets the maximum number of categories pandas should output when
116 printing out a `Categorical` or a Series of dtype "category".
117 """
118
119 pc_max_info_cols_doc = """
120 : int
121 max_info_columns is used in DataFrame.info method to decide if
122 per column information will be printed.
123 """
124
125 pc_nb_repr_h_doc = """
126 : boolean
127 When True, IPython notebook will use html representation for
128 pandas objects (if it is available).
129 """
130
131 pc_pprint_nest_depth = """
132 : int
133 Controls the number of nested levels to process when pretty-printing
134 """
135
136 pc_multi_sparse_doc = """
137 : boolean
138 "sparsify" MultiIndex display (don't display repeated
139 elements in outer levels within groups)
140 """
141
142 float_format_doc = """
143 : callable
144 The callable should accept a floating point number and return
145 a string with the desired format of the number. This is used
146 in some places like SeriesFormatter.
147 See formats.format.EngFormatter for an example.
148 """
149
150 max_colwidth_doc = """
151 : int or None
152 The maximum width in characters of a column in the repr of
153 a pandas data structure. When the column overflows, a "..."
154 placeholder is embedded in the output. A 'None' value means unlimited.
155 """
156
157 colheader_justify_doc = """
158 : 'left'/'right'
159 Controls the justification of column headers. used by DataFrameFormatter.
160 """
161
162 pc_expand_repr_doc = """
163 : boolean
164 Whether to print out the full DataFrame repr for wide DataFrames across
165 multiple lines, `max_columns` is still respected, but the output will
166 wrap-around across multiple "pages" if its width exceeds `display.width`.
167 """
168
169 pc_show_dimensions_doc = """
170 : boolean or 'truncate'
171 Whether to print out dimensions at the end of DataFrame repr.
172 If 'truncate' is specified, only print out the dimensions if the
173 frame is truncated (e.g. not display all rows and/or columns)
174 """
175
176 pc_east_asian_width_doc = """
177 : boolean
178 Whether to use the Unicode East Asian Width to calculate the display text
179 width.
180     Enabling this may affect the performance (default: False)
181 """
182
183 pc_ambiguous_as_wide_doc = """
184 : boolean
185     Whether to handle Unicode characters belonging to Ambiguous as Wide (width=2)
186 (default: False)
187 """
188
189 pc_latex_repr_doc = """
190 : boolean
191 Whether to produce a latex DataFrame representation for jupyter
192 environments that support it.
193 (default: False)
194 """
195
196 pc_table_schema_doc = """
197 : boolean
198 Whether to publish a Table Schema representation for frontends
199 that support it.
200 (default: False)
201 """
202
203 pc_html_border_doc = """
204 : int
205 A ``border=value`` attribute is inserted in the ``<table>`` tag
206 for the DataFrame HTML repr.
207 """
208
209 pc_html_use_mathjax_doc = """\
210 : boolean
211 When True, Jupyter notebook will process table contents using MathJax,
212 rendering mathematical expressions enclosed by the dollar symbol.
213 (default: True)
214 """
215
216 pc_width_doc = """
217 : int
218 Width of the display in characters. In case python/IPython is running in
219 a terminal this can be set to None and pandas will correctly auto-detect
220 the width.
221 Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
222 terminal and hence it is not possible to correctly detect the width.
223 """
224
225 pc_chop_threshold_doc = """
226 : float or None
227     if set to a float value, all float values smaller than the given threshold
228 will be displayed as exactly 0 by repr and friends.
229 """
230
231 pc_max_seq_items = """
232 : int or None
233     when pretty-printing a long sequence, no more than `max_seq_items`
234 will be printed. If items are omitted, they will be denoted by the
235 addition of "..." to the resulting string.
236
237 If set to None, the number of items to be printed is unlimited.
238 """
239
240 pc_max_info_rows_doc = """
241 : int or None
242 df.info() will usually show null-counts for each column.
243 For large frames this can be quite slow. max_info_rows and max_info_cols
244 limit this null check only to frames with smaller dimensions than
245 specified.
246 """
247
248 pc_large_repr_doc = """
249 : 'truncate'/'info'
250 For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
251 show a truncated table (the default from 0.13), or switch to the view from
252 df.info() (the behaviour in earlier versions of pandas).
253 """
254
255 pc_memory_usage_doc = """
256 : bool, string or None
257 This specifies if the memory usage of a DataFrame should be displayed when
258 df.info() is called. Valid values True,False,'deep'
259 """
260
261 pc_latex_escape = """
262 : bool
263     This specifies if the to_latex method of a Dataframe escapes special
264 characters.
265 Valid values: False,True
266 """
267
268 pc_latex_longtable = """
269 :bool
270 This specifies if the to_latex method of a Dataframe uses the longtable
271 format.
272 Valid values: False,True
273 """
274
275 pc_latex_multicolumn = """
276 : bool
277 This specifies if the to_latex method of a Dataframe uses multicolumns
278 to pretty-print MultiIndex columns.
279 Valid values: False,True
280 """
281
282 pc_latex_multicolumn_format = """
283 : string
284 This specifies the format for multicolumn headers.
285 Can be surrounded with '|'.
286 Valid values: 'l', 'c', 'r', 'p{<width>}'
287 """
288
289 pc_latex_multirow = """
290 : bool
291 This specifies if the to_latex method of a Dataframe uses multirows
292 to pretty-print MultiIndex rows.
293 Valid values: False,True
294 """
295
296
297 def table_schema_cb(key):
298 from pandas.io.formats.printing import _enable_data_resource_formatter
299
300 _enable_data_resource_formatter(cf.get_option(key))
301
302
303 def is_terminal() -> bool:
304 """
305 Detect if Python is running in a terminal.
306
307 Returns True if Python is running in a terminal or False if not.
308 """
309 try:
310 # error: Name 'get_ipython' is not defined
311 ip = get_ipython() # type: ignore
312 except NameError: # assume standard Python interpreter in a terminal
313 return True
314 else:
315 if hasattr(ip, "kernel"): # IPython as a Jupyter kernel
316 return False
317 else: # IPython in a terminal
318 return True
319
320
321 with cf.config_prefix("display"):
322 cf.register_option("precision", 6, pc_precision_doc, validator=is_nonnegative_int)
323 cf.register_option(
324 "float_format",
325 None,
326 float_format_doc,
327 validator=is_one_of_factory([None, is_callable]),
328 )
329 cf.register_option("column_space", 12, validator=is_int)
330 cf.register_option(
331 "max_info_rows",
332 1690785,
333 pc_max_info_rows_doc,
334 validator=is_instance_factory((int, type(None))),
335 )
336 cf.register_option("max_rows", 60, pc_max_rows_doc, validator=is_nonnegative_int)
337 cf.register_option(
338 "min_rows",
339 10,
340 pc_min_rows_doc,
341 validator=is_instance_factory([type(None), int]),
342 )
343 cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int)
344 cf.register_option(
345 "max_colwidth", 50, max_colwidth_doc, validator=is_nonnegative_int
346 )
347 if is_terminal():
348 max_cols = 0 # automatically determine optimal number of columns
349 else:
350 max_cols = 20 # cannot determine optimal number of columns
351 cf.register_option(
352 "max_columns", max_cols, pc_max_cols_doc, validator=is_nonnegative_int
353 )
354 cf.register_option(
355 "large_repr",
356 "truncate",
357 pc_large_repr_doc,
358 validator=is_one_of_factory(["truncate", "info"]),
359 )
360 cf.register_option("max_info_columns", 100, pc_max_info_cols_doc, validator=is_int)
361 cf.register_option(
362 "colheader_justify", "right", colheader_justify_doc, validator=is_text
363 )
364 cf.register_option("notebook_repr_html", True, pc_nb_repr_h_doc, validator=is_bool)
365 cf.register_option("pprint_nest_depth", 3, pc_pprint_nest_depth, validator=is_int)
366 cf.register_option("multi_sparse", True, pc_multi_sparse_doc, validator=is_bool)
367 cf.register_option("expand_frame_repr", True, pc_expand_repr_doc)
368 cf.register_option(
369 "show_dimensions",
370 "truncate",
371 pc_show_dimensions_doc,
372 validator=is_one_of_factory([True, False, "truncate"]),
373 )
374 cf.register_option("chop_threshold", None, pc_chop_threshold_doc)
375 cf.register_option("max_seq_items", 100, pc_max_seq_items)
376 cf.register_option(
377 "width", 80, pc_width_doc, validator=is_instance_factory([type(None), int])
378 )
379 cf.register_option(
380 "memory_usage",
381 True,
382 pc_memory_usage_doc,
383 validator=is_one_of_factory([None, True, False, "deep"]),
384 )
385 cf.register_option(
386 "unicode.east_asian_width", False, pc_east_asian_width_doc, validator=is_bool
387 )
388 cf.register_option(
389 "unicode.ambiguous_as_wide", False, pc_east_asian_width_doc, validator=is_bool
390 )
391 cf.register_option("latex.repr", False, pc_latex_repr_doc, validator=is_bool)
392 cf.register_option("latex.escape", True, pc_latex_escape, validator=is_bool)
393 cf.register_option("latex.longtable", False, pc_latex_longtable, validator=is_bool)
394 cf.register_option(
395 "latex.multicolumn", True, pc_latex_multicolumn, validator=is_bool
396 )
397 cf.register_option(
398 "latex.multicolumn_format", "l", pc_latex_multicolumn, validator=is_text
399 )
400 cf.register_option("latex.multirow", False, pc_latex_multirow, validator=is_bool)
401 cf.register_option(
402 "html.table_schema",
403 False,
404 pc_table_schema_doc,
405 validator=is_bool,
406 cb=table_schema_cb,
407 )
408 cf.register_option("html.border", 1, pc_html_border_doc, validator=is_int)
409 cf.register_option(
410 "html.use_mathjax", True, pc_html_use_mathjax_doc, validator=is_bool
411 )
412
413 tc_sim_interactive_doc = """
414 : boolean
415 Whether to simulate interactive mode for purposes of testing
416 """
417
418 with cf.config_prefix("mode"):
419 cf.register_option("sim_interactive", False, tc_sim_interactive_doc)
420
421 use_inf_as_null_doc = """
422 : boolean
423 use_inf_as_null had been deprecated and will be removed in a future
424 version. Use `use_inf_as_na` instead.
425 """
426
427 use_inf_as_na_doc = """
428 : boolean
429 True means treat None, NaN, INF, -INF as NA (old way),
430 False means None and NaN are null, but INF, -INF are not NA
431 (new way).
432 """
433
434 # We don't want to start importing everything at the global context level
435 # or we'll hit circular deps.
436
437
438 def use_inf_as_na_cb(key):
439 from pandas.core.dtypes.missing import _use_inf_as_na
440
441 _use_inf_as_na(key)
442
443
444 with cf.config_prefix("mode"):
445 cf.register_option("use_inf_as_na", False, use_inf_as_na_doc, cb=use_inf_as_na_cb)
446 cf.register_option(
447 "use_inf_as_null", False, use_inf_as_null_doc, cb=use_inf_as_na_cb
448 )
449
450 cf.deprecate_option(
451 "mode.use_inf_as_null", msg=use_inf_as_null_doc, rkey="mode.use_inf_as_na"
452 )
453
454
455 # user warnings
456 chained_assignment = """
457 : string
458 Raise an exception, warn, or no action if trying to use chained assignment,
459 The default is warn
460 """
461
462 with cf.config_prefix("mode"):
463 cf.register_option(
464 "chained_assignment",
465 "warn",
466 chained_assignment,
467 validator=is_one_of_factory([None, "warn", "raise"]),
468 )
469
470
471 # Set up the io.excel specific reader configuration.
472 reader_engine_doc = """
473 : string
474 The default Excel reader engine for '{ext}' files. Available options:
475 auto, {others}.
476 """
477
478 _xls_options = ["xlrd"]
479 _xlsm_options = ["xlrd", "openpyxl"]
480 _xlsx_options = ["xlrd", "openpyxl"]
481 _ods_options = ["odf"]
482 _xlsb_options = ["pyxlsb"]
483
484
485 with cf.config_prefix("io.excel.xls"):
486 cf.register_option(
487 "reader",
488 "auto",
489 reader_engine_doc.format(ext="xls", others=", ".join(_xls_options)),
490 validator=str,
491 )
492
493 with cf.config_prefix("io.excel.xlsm"):
494 cf.register_option(
495 "reader",
496 "auto",
497 reader_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),
498 validator=str,
499 )
500
501
502 with cf.config_prefix("io.excel.xlsx"):
503 cf.register_option(
504 "reader",
505 "auto",
506 reader_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),
507 validator=str,
508 )
509
510
511 with cf.config_prefix("io.excel.ods"):
512 cf.register_option(
513 "reader",
514 "auto",
515 reader_engine_doc.format(ext="ods", others=", ".join(_ods_options)),
516 validator=str,
517 )
518
519 with cf.config_prefix("io.excel.xlsb"):
520 cf.register_option(
521 "reader",
522 "auto",
523 reader_engine_doc.format(ext="xlsb", others=", ".join(_xlsb_options)),
524 validator=str,
525 )
526
527 # Set up the io.excel specific writer configuration.
528 writer_engine_doc = """
529 : string
530 The default Excel writer engine for '{ext}' files. Available options:
531 auto, {others}.
532 """
533
534 _xls_options = ["xlwt"]
535 _xlsm_options = ["openpyxl"]
536 _xlsx_options = ["openpyxl", "xlsxwriter"]
537
538
539 with cf.config_prefix("io.excel.xls"):
540 cf.register_option(
541 "writer",
542 "auto",
543 writer_engine_doc.format(ext="xls", others=", ".join(_xls_options)),
544 validator=str,
545 )
546
547 with cf.config_prefix("io.excel.xlsm"):
548 cf.register_option(
549 "writer",
550 "auto",
551 writer_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)),
552 validator=str,
553 )
554
555
556 with cf.config_prefix("io.excel.xlsx"):
557 cf.register_option(
558 "writer",
559 "auto",
560 writer_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)),
561 validator=str,
562 )
563
564
565 # Set up the io.parquet specific configuration.
566 parquet_engine_doc = """
567 : string
568 The default parquet reader/writer engine. Available options:
569 'auto', 'pyarrow', 'fastparquet', the default is 'auto'
570 """
571
572 with cf.config_prefix("io.parquet"):
573 cf.register_option(
574 "engine",
575 "auto",
576 parquet_engine_doc,
577 validator=is_one_of_factory(["auto", "pyarrow", "fastparquet"]),
578 )
579
580 # --------
581 # Plotting
582 # ---------
583
584 plotting_backend_doc = """
585 : str
586 The plotting backend to use. The default value is "matplotlib", the
587 backend provided with pandas. Other backends can be specified by
588     providing the name of the module that implements the backend.
589 """
590
591
592 def register_plotting_backend_cb(key):
593 if key == "matplotlib":
594 # We defer matplotlib validation, since it's the default
595 return
596 from pandas.plotting._core import _get_plot_backend
597
598 _get_plot_backend(key)
599
600
601 with cf.config_prefix("plotting"):
602 cf.register_option(
603 "backend",
604 defval="matplotlib",
605 doc=plotting_backend_doc,
606 validator=register_plotting_backend_cb,
607 )
608
609
610 register_converter_doc = """
611 : bool or 'auto'.
612 Whether to register converters with matplotlib's units registry for
613 dates, times, datetimes, and Periods. Toggling to False will remove
614 the converters, restoring any converters that pandas overwrote.
615 """
616
617
618 def register_converter_cb(key):
619 from pandas.plotting import register_matplotlib_converters
620 from pandas.plotting import deregister_matplotlib_converters
621
622 if cf.get_option(key):
623 register_matplotlib_converters()
624 else:
625 deregister_matplotlib_converters()
626
627
628 with cf.config_prefix("plotting.matplotlib"):
629 cf.register_option(
630 "register_converters",
631 "auto",
632 register_converter_doc,
633 validator=is_one_of_factory(["auto", True, False]),
634 cb=register_converter_cb,
635 )
636
[end of pandas/core/config_init.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
918cc482022188fb3d0a6215804aa332cadbae37
|
BUG display.max_colwidth does not accept -1 for unlimited width
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
pd.set_option("display.max_colwidth", -1)
```
#### Problem description
There is a regression with `"display.max_colwidth"`. In the past, it only accepted integers. The way to not limit the size was to pass `-1`. In pandas 1.0, this option became more consistent and not limiting the width should be done with `None`. However, the support for negative integers was removed.
Thus, one would need to set it to either `-1` or `None` depending on the pandas version. It would be best to support both options. Potentially, support for negative integers could be removed with a deprecation cycle.
|
> It would be best to support both options. Potentially, support for negative integers could be removed with a deprecation cycle.
@jorisvandenbossche Is this what we're going for then - add support for `-1` back in, but raise a FutureWarning alerting the user that it'll no longer work in a future version?
It reminds me of the `header` argument in `read_csv`, whereby `header=-1` used to work in 0.24, then in 0.25 it didn't (see #27779) and so we now raise
```
ValueError: Passing negative integer to header is invalid. For no header, use header=None instead
```
Perhaps the current ValueError message could just have an extra line specifically instructing to use `None` instead of `-1`, in the case that `-1` is passed?
|
2020-02-02T11:44:06Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -10,6 +10,14 @@ including other versions of pandas.
.. ---------------------------------------------------------------------------
+.. _whatsnew_101.deprecations:
+
+Deprecations
+~~~~~~~~~~~~
+
+- Support for negative integer for :attr:`pd.options.display.max_colwidth` is deprecated in favor of using ``None`` (:issue:`31532`)
+
+.. ---------------------------------------------------------------------------
.. _whatsnew_101.bug_fixes:
@@ -129,6 +137,7 @@ ExtensionArray
Other
^^^^^
- Regression fixed in objTOJSON.c fix return-type warning (:issue:`31463`)
+- Fixed a regression where setting :attr:`pd.options.display.max_colwidth` was not accepting negative integer. In addition, this behavior has been deprecated in favor of using ``None`` (:issue:`31532`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -9,6 +9,8 @@
module is imported, register them here rather than in the module.
"""
+import warnings
+
import pandas._config.config as cf
from pandas._config.config import (
is_bool,
@@ -341,8 +343,26 @@ def is_terminal() -> bool:
validator=is_instance_factory([type(None), int]),
)
cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int)
+
+ def _deprecate_negative_int_max_colwidth(key):
+ value = cf.get_option(key)
+ if value is not None and value < 0:
+ warnings.warn(
+ "Passing a negative integer is deprecated in version 1.0 and "
+ "will not be supported in future version. Instead, use None "
+ "to not limit the column width.",
+ FutureWarning,
+ stacklevel=4,
+ )
+
cf.register_option(
- "max_colwidth", 50, max_colwidth_doc, validator=is_nonnegative_int
+ # FIXME: change `validator=is_nonnegative_int`
+ # in version 1.2
+ "max_colwidth",
+ 50,
+ max_colwidth_doc,
+ validator=is_instance_factory([type(None), int]),
+ cb=_deprecate_negative_int_max_colwidth,
)
if is_terminal():
max_cols = 0 # automatically determine optimal number of columns
</patch>
|
[]
|
[]
| |||
pandas-dev__pandas-35852
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC: document dropna kwarg of pd.factorize
#### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.factorize.html
#### Documentation problem
The docs show the existence of a kwarg "dropna" which does not exist
#### Suggested fix for documentation
Delete the kwarg "dropna"
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of doc/make.py]
1 #!/usr/bin/env python3
2 """
3 Python script for building documentation.
4
5 To build the docs you must have all optional dependencies for pandas
6 installed. See the installation instructions for a list of these.
7
8 Usage
9 -----
10 $ python make.py clean
11 $ python make.py html
12 $ python make.py latex
13 """
14 import argparse
15 import csv
16 import importlib
17 import os
18 import shutil
19 import subprocess
20 import sys
21 import webbrowser
22
23 import docutils
24 import docutils.parsers.rst
25
26 DOC_PATH = os.path.dirname(os.path.abspath(__file__))
27 SOURCE_PATH = os.path.join(DOC_PATH, "source")
28 BUILD_PATH = os.path.join(DOC_PATH, "build")
29 REDIRECTS_FILE = os.path.join(DOC_PATH, "redirects.csv")
30
31
32 class DocBuilder:
33 """
34 Class to wrap the different commands of this script.
35
36 All public methods of this class can be called as parameters of the
37 script.
38 """
39
40 def __init__(
41 self,
42 num_jobs=0,
43 include_api=True,
44 single_doc=None,
45 verbosity=0,
46 warnings_are_errors=False,
47 ):
48 self.num_jobs = num_jobs
49 self.verbosity = verbosity
50 self.warnings_are_errors = warnings_are_errors
51
52 if single_doc:
53 single_doc = self._process_single_doc(single_doc)
54 include_api = False
55 os.environ["SPHINX_PATTERN"] = single_doc
56 elif not include_api:
57 os.environ["SPHINX_PATTERN"] = "-api"
58
59 self.single_doc_html = None
60 if single_doc and single_doc.endswith(".rst"):
61 self.single_doc_html = os.path.splitext(single_doc)[0] + ".html"
62 elif single_doc:
63 self.single_doc_html = f"reference/api/pandas.{single_doc}.html"
64
65 def _process_single_doc(self, single_doc):
66 """
67 Make sure the provided value for --single is a path to an existing
68 .rst/.ipynb file, or a pandas object that can be imported.
69
70         For example, categorical.rst or pandas.DataFrame.head. For the latter,
71 return the corresponding file path
72 (e.g. reference/api/pandas.DataFrame.head.rst).
73 """
74 base_name, extension = os.path.splitext(single_doc)
75 if extension in (".rst", ".ipynb"):
76 if os.path.exists(os.path.join(SOURCE_PATH, single_doc)):
77 return single_doc
78 else:
79 raise FileNotFoundError(f"File {single_doc} not found")
80
81 elif single_doc.startswith("pandas."):
82 try:
83 obj = pandas # noqa: F821
84 for name in single_doc.split("."):
85 obj = getattr(obj, name)
86 except AttributeError as err:
87 raise ImportError(f"Could not import {single_doc}") from err
88 else:
89 return single_doc[len("pandas.") :]
90 else:
91 raise ValueError(
92 f"--single={single_doc} not understood. "
93 "Value should be a valid path to a .rst or .ipynb file, "
94 "or a valid pandas object "
95 "(e.g. categorical.rst or pandas.DataFrame.head)"
96 )
97
98 @staticmethod
99 def _run_os(*args):
100 """
101         Execute a command as an OS terminal.
102
103 Parameters
104 ----------
105 *args : list of str
106 Command and parameters to be executed
107
108 Examples
109 --------
110 >>> DocBuilder()._run_os('python', '--version')
111 """
112 subprocess.check_call(args, stdout=sys.stdout, stderr=sys.stderr)
113
114 def _sphinx_build(self, kind: str):
115 """
116 Call sphinx to build documentation.
117
118 Attribute `num_jobs` from the class is used.
119
120 Parameters
121 ----------
122 kind : {'html', 'latex'}
123
124 Examples
125 --------
126 >>> DocBuilder(num_jobs=4)._sphinx_build('html')
127 """
128 if kind not in ("html", "latex"):
129 raise ValueError(f"kind must be html or latex, not {kind}")
130
131 cmd = ["sphinx-build", "-b", kind]
132 if self.num_jobs:
133 cmd += ["-j", str(self.num_jobs)]
134 if self.warnings_are_errors:
135 cmd += ["-W", "--keep-going"]
136 if self.verbosity:
137 cmd.append(f"-{'v' * self.verbosity}")
138 cmd += [
139 "-d",
140 os.path.join(BUILD_PATH, "doctrees"),
141 SOURCE_PATH,
142 os.path.join(BUILD_PATH, kind),
143 ]
144 return subprocess.call(cmd)
145
146 def _open_browser(self, single_doc_html):
147 """
148         Open a browser tab showing the single doc page.
149 """
150 url = os.path.join("file://", DOC_PATH, "build", "html", single_doc_html)
151 webbrowser.open(url, new=2)
152
153 def _get_page_title(self, page):
154 """
155 Open the rst file `page` and extract its title.
156 """
157 fname = os.path.join(SOURCE_PATH, f"{page}.rst")
158 option_parser = docutils.frontend.OptionParser(
159 components=(docutils.parsers.rst.Parser,)
160 )
161 doc = docutils.utils.new_document("<doc>", option_parser.get_default_values())
162 with open(fname) as f:
163 data = f.read()
164
165 parser = docutils.parsers.rst.Parser()
166 # do not generate any warning when parsing the rst
167 with open(os.devnull, "a") as f:
168 doc.reporter.stream = f
169 parser.parse(data, doc)
170
171 section = next(
172 node for node in doc.children if isinstance(node, docutils.nodes.section)
173 )
174 title = next(
175 node for node in section.children if isinstance(node, docutils.nodes.title)
176 )
177
178 return title.astext()
179
180 def _add_redirects(self):
181 """
182 Create in the build directory an html file with a redirect,
183 for every row in REDIRECTS_FILE.
184 """
185 with open(REDIRECTS_FILE) as mapping_fd:
186 reader = csv.reader(mapping_fd)
187 for row in reader:
188 if not row or row[0].strip().startswith("#"):
189 continue
190
191 path = os.path.join(BUILD_PATH, "html", *row[0].split("/")) + ".html"
192
193 try:
194 title = self._get_page_title(row[1])
195 except Exception:
196 # the file can be an ipynb and not an rst, or docutils
197 # may not be able to read the rst because it has some
198 # sphinx specific stuff
199 title = "this page"
200
201 if os.path.exists(path):
202 raise RuntimeError(
203 f"Redirection would overwrite an existing file: {path}"
204 )
205
206 with open(path, "w") as moved_page_fd:
207 html = f"""\
208 <html>
209 <head>
210 <meta http-equiv="refresh" content="0;URL={row[1]}.html"/>
211 </head>
212 <body>
213 <p>
214 The page has been moved to <a href="{row[1]}.html">{title}</a>
215 </p>
216 </body>
217 <html>"""
218
219 moved_page_fd.write(html)
220
221 def html(self):
222 """
223 Build HTML documentation.
224 """
225 ret_code = self._sphinx_build("html")
226 zip_fname = os.path.join(BUILD_PATH, "html", "pandas.zip")
227 if os.path.exists(zip_fname):
228 os.remove(zip_fname)
229
230 if ret_code == 0:
231 if self.single_doc_html is not None:
232 self._open_browser(self.single_doc_html)
233 else:
234 self._add_redirects()
235 return ret_code
236
237 def latex(self, force=False):
238 """
239 Build PDF documentation.
240 """
241 if sys.platform == "win32":
242 sys.stderr.write("latex build has not been tested on windows\n")
243 else:
244 ret_code = self._sphinx_build("latex")
245 os.chdir(os.path.join(BUILD_PATH, "latex"))
246 if force:
247 for i in range(3):
248 self._run_os("pdflatex", "-interaction=nonstopmode", "pandas.tex")
249 raise SystemExit(
250 "You should check the file "
251 '"build/latex/pandas.pdf" for problems.'
252 )
253 else:
254 self._run_os("make")
255 return ret_code
256
257 def latex_forced(self):
258 """
259 Build PDF documentation with retries to find missing references.
260 """
261 return self.latex(force=True)
262
263 @staticmethod
264 def clean():
265 """
266 Clean documentation generated files.
267 """
268 shutil.rmtree(BUILD_PATH, ignore_errors=True)
269 shutil.rmtree(os.path.join(SOURCE_PATH, "reference", "api"), ignore_errors=True)
270
271 def zip_html(self):
272 """
273 Compress HTML documentation into a zip file.
274 """
275 zip_fname = os.path.join(BUILD_PATH, "html", "pandas.zip")
276 if os.path.exists(zip_fname):
277 os.remove(zip_fname)
278 dirname = os.path.join(BUILD_PATH, "html")
279 fnames = os.listdir(dirname)
280 os.chdir(dirname)
281 self._run_os("zip", zip_fname, "-r", "-q", *fnames)
282
283
284 def main():
285 cmds = [method for method in dir(DocBuilder) if not method.startswith("_")]
286
287 joined = ",".join(cmds)
288 argparser = argparse.ArgumentParser(
289 description="pandas documentation builder", epilog=f"Commands: {joined}",
290 )
291
292 joined = ", ".join(cmds)
293 argparser.add_argument(
294 "command", nargs="?", default="html", help=f"command to run: {joined}",
295 )
296 argparser.add_argument(
297 "--num-jobs", type=int, default=0, help="number of jobs used by sphinx-build"
298 )
299 argparser.add_argument(
300 "--no-api", default=False, help="omit api and autosummary", action="store_true"
301 )
302 argparser.add_argument(
303 "--single",
304 metavar="FILENAME",
305 type=str,
306 default=None,
307 help=(
308 "filename (relative to the 'source' folder) of section or method name to "
309 "compile, e.g. 'development/contributing.rst', "
310 "'ecosystem.rst', 'pandas.DataFrame.join'"
311 ),
312 )
313 argparser.add_argument(
314 "--python-path", type=str, default=os.path.dirname(DOC_PATH), help="path"
315 )
316 argparser.add_argument(
317 "-v",
318 action="count",
319 dest="verbosity",
320 default=0,
321 help=(
322 "increase verbosity (can be repeated), "
323 "passed to the sphinx build command"
324 ),
325 )
326 argparser.add_argument(
327 "--warnings-are-errors",
328 "-W",
329 action="store_true",
330 help="fail if warnings are raised",
331 )
332 args = argparser.parse_args()
333
334 if args.command not in cmds:
335 joined = ", ".join(cmds)
336 raise ValueError(f"Unknown command {args.command}. Available options: {joined}")
337
338 # Below we update both os.environ and sys.path. The former is used by
339 # external libraries (namely Sphinx) to compile this module and resolve
340 # the import of `python_path` correctly. The latter is used to resolve
341 # the import within the module, injecting it into the global namespace
342 os.environ["PYTHONPATH"] = args.python_path
343 sys.path.insert(0, args.python_path)
344 globals()["pandas"] = importlib.import_module("pandas")
345
346 # Set the matplotlib backend to the non-interactive Agg backend for all
347 # child processes.
348 os.environ["MPLBACKEND"] = "module://matplotlib.backends.backend_agg"
349
350 builder = DocBuilder(
351 args.num_jobs,
352 not args.no_api,
353 args.single,
354 args.verbosity,
355 args.warnings_are_errors,
356 )
357 return getattr(builder, args.command)()
358
359
360 if __name__ == "__main__":
361 sys.exit(main())
362
[end of doc/make.py]
[start of doc/source/conf.py]
1 #
2 # pandas documentation build configuration file, created by
3 #
4 # This file is execfile()d with the current directory set to its containing
5 # dir.
6 #
7 # Note that not all possible configuration values are present in this
8 # autogenerated file.
9 #
10 # All configuration values have a default; values that are commented out
11 # serve to show the default.
12
13 from datetime import datetime
14 import importlib
15 import inspect
16 import logging
17 import os
18 import sys
19
20 import jinja2
21 from numpydoc.docscrape import NumpyDocString
22 from sphinx.ext.autosummary import _import_by_name
23
24 logger = logging.getLogger(__name__)
25
26 # https://github.com/sphinx-doc/sphinx/pull/2325/files
27 # Workaround for sphinx-build recursion limit overflow:
28 # pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
29 # RuntimeError: maximum recursion depth exceeded while pickling an object
30 #
31 # Python's default allowed recursion depth is 1000.
32 sys.setrecursionlimit(5000)
33
34 # If extensions (or modules to document with autodoc) are in another directory,
35 # add these directories to sys.path here. If the directory is relative to the
36 # documentation root, use os.path.abspath to make it absolute, like shown here.
37 # sys.path.append(os.path.abspath('.'))
38 sys.path.insert(0, os.path.abspath("../sphinxext"))
39 sys.path.extend(
40 [
41 # numpy standard doc extensions
42 os.path.join(os.path.dirname(__file__), "..", "../..", "sphinxext")
43 ]
44 )
45
46 # -- General configuration -----------------------------------------------
47
48 # Add any Sphinx extension module names here, as strings. They can be
49 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
50 # sphinxext.
51
52 extensions = [
53 "sphinx.ext.autodoc",
54 "sphinx.ext.autosummary",
55 "sphinx.ext.doctest",
56 "sphinx.ext.extlinks",
57 "sphinx.ext.todo",
58 "numpydoc", # handle NumPy documentation formatted docstrings
59 "IPython.sphinxext.ipython_directive",
60 "IPython.sphinxext.ipython_console_highlighting",
61 "matplotlib.sphinxext.plot_directive",
62 "sphinx.ext.intersphinx",
63 "sphinx.ext.coverage",
64 "sphinx.ext.mathjax",
65 "sphinx.ext.ifconfig",
66 "sphinx.ext.linkcode",
67 "nbsphinx",
68 "contributors", # custom pandas extension
69 ]
70
71 exclude_patterns = ["**.ipynb_checkpoints"]
72 try:
73 import nbconvert
74 except ImportError:
75 logger.warn("nbconvert not installed. Skipping notebooks.")
76 exclude_patterns.append("**/*.ipynb")
77 else:
78 try:
79 nbconvert.utils.pandoc.get_pandoc_version()
80 except nbconvert.utils.pandoc.PandocMissing:
81 logger.warn("Pandoc not installed. Skipping notebooks.")
82 exclude_patterns.append("**/*.ipynb")
83
84 # sphinx_pattern can be '-api' to exclude the API pages,
85 # the path to a file, or a Python object
86 # (e.g. '10min.rst' or 'pandas.DataFrame.head')
87 source_path = os.path.dirname(os.path.abspath(__file__))
88 pattern = os.environ.get("SPHINX_PATTERN")
89 if pattern:
90 for dirname, dirs, fnames in os.walk(source_path):
91 for fname in fnames:
92 if os.path.splitext(fname)[-1] in (".rst", ".ipynb"):
93 fname = os.path.relpath(os.path.join(dirname, fname), source_path)
94
95 if fname == "index.rst" and os.path.abspath(dirname) == source_path:
96 continue
97 elif pattern == "-api" and dirname == "reference":
98 exclude_patterns.append(fname)
99 elif pattern != "-api" and fname != pattern:
100 exclude_patterns.append(fname)
101
102 with open(os.path.join(source_path, "index.rst.template")) as f:
103 t = jinja2.Template(f.read())
104 with open(os.path.join(source_path, "index.rst"), "w") as f:
105 f.write(
106 t.render(
107 include_api=pattern is None,
108 single_doc=(pattern if pattern is not None and pattern != "-api" else None),
109 )
110 )
111 autosummary_generate = True if pattern is None else ["index"]
112 autodoc_typehints = "none"
113
114 # numpydoc
115 numpydoc_attributes_as_param_list = False
116
117 # matplotlib plot directive
118 plot_include_source = True
119 plot_formats = [("png", 90)]
120 plot_html_show_formats = False
121 plot_html_show_source_link = False
122 plot_pre_code = """import numpy as np
123 import pandas as pd"""
124
125 # nbsphinx do not use requirejs (breaks bootstrap)
126 nbsphinx_requirejs_path = ""
127
128 # Add any paths that contain templates here, relative to this directory.
129 templates_path = ["../_templates"]
130
131 # The suffix of source filenames.
132 source_suffix = [".rst"]
133
134 # The encoding of source files.
135 source_encoding = "utf-8"
136
137 # The master toctree document.
138 master_doc = "index"
139
140 # General information about the project.
141 project = "pandas"
142 copyright = f"2008-{datetime.now().year}, the pandas development team"
143
144 # The version info for the project you're documenting, acts as replacement for
145 # |version| and |release|, also used in various other places throughout the
146 # built documents.
147 #
148 # The short X.Y version.
149 import pandas # noqa: E402 isort:skip
150
151 # version = '%s r%s' % (pandas.__version__, svn_version())
152 version = str(pandas.__version__)
153
154 # The full version, including alpha/beta/rc tags.
155 release = version
156
157 # The language for content autogenerated by Sphinx. Refer to documentation
158 # for a list of supported languages.
159 # language = None
160
161 # There are two options for replacing |today|: either, you set today to some
162 # non-false value, then it is used:
163 # today = ''
164 # Else, today_fmt is used as the format for a strftime call.
165 # today_fmt = '%B %d, %Y'
166
167 # List of documents that shouldn't be included in the build.
168 # unused_docs = []
169
170 # List of directories, relative to source directory, that shouldn't be searched
171 # for source files.
172 exclude_trees = []
173
174 # The reST default role (used for this markup: `text`) to use for all
175 # documents. default_role = None
176
177 # If true, '()' will be appended to :func: etc. cross-reference text.
178 # add_function_parentheses = True
179
180 # If true, the current module name will be prepended to all description
181 # unit titles (such as .. function::).
182 # add_module_names = True
183
184 # If true, sectionauthor and moduleauthor directives will be shown in the
185 # output. They are ignored by default.
186 # show_authors = False
187
188 # The name of the Pygments (syntax highlighting) style to use.
189 pygments_style = "sphinx"
190
191 # A list of ignored prefixes for module index sorting.
192 # modindex_common_prefix = []
193
194
195 # -- Options for HTML output ---------------------------------------------
196
197 # The theme to use for HTML and HTML Help pages. Major themes that come with
198 # Sphinx are currently 'default' and 'sphinxdoc'.
199 html_theme = "pydata_sphinx_theme"
200
201 # The style sheet to use for HTML and HTML Help pages. A file of that name
202 # must exist either in Sphinx' static/ path, or in one of the custom paths
203 # given in html_static_path.
204 # html_style = 'statsmodels.css'
205
206 # Theme options are theme-specific and customize the look and feel of a theme
207 # further. For a list of options available for each theme, see the
208 # documentation.
209 html_theme_options = {
210 "external_links": [],
211 "github_url": "https://github.com/pandas-dev/pandas",
212 "twitter_url": "https://twitter.com/pandas_dev",
213 "google_analytics_id": "UA-27880019-2",
214 }
215
216 # Add any paths that contain custom themes here, relative to this directory.
217 # html_theme_path = ["themes"]
218
219 # The name for this set of Sphinx documents. If None, it defaults to
220 # "<project> v<release> documentation".
221 # html_title = None
222
223 # A shorter title for the navigation bar. Default is the same as html_title.
224 # html_short_title = None
225
226 # The name of an image file (relative to this directory) to place at the top
227 # of the sidebar.
228 html_logo = "../../web/pandas/static/img/pandas.svg"
229
230 # Add any paths that contain custom static files (such as style sheets) here,
231 # relative to this directory. They are copied after the builtin static files,
232 # so a file named "default.css" will overwrite the builtin "default.css".
233 html_static_path = ["_static"]
234
235 html_css_files = [
236 "css/getting_started.css",
237 "css/pandas.css",
238 ]
239
240 # The name of an image file (within the static path) to use as favicon of the
241 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
242 # pixels large.
243 html_favicon = "../../web/pandas/static/img/favicon.ico"
244
245 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
246 # using the given strftime format.
247 # html_last_updated_fmt = '%b %d, %Y'
248
249 # If true, SmartyPants will be used to convert quotes and dashes to
250 # typographically correct entities.
251 # html_use_smartypants = True
252
253 # Custom sidebar templates, maps document names to template names.
254 # html_sidebars = {}
255
256 # Additional templates that should be rendered to pages, maps page names to
257 # template names.
258
259 # Add redirect for previously existing API pages
260 # each item is like `(from_old, to_new)`
261 # To redirect a class and all its methods, see below
262 # https://github.com/pandas-dev/pandas/issues/16186
263
264 moved_api_pages = [
265 ("pandas.core.common.isnull", "pandas.isna"),
266 ("pandas.core.common.notnull", "pandas.notna"),
267 ("pandas.core.reshape.get_dummies", "pandas.get_dummies"),
268 ("pandas.tools.merge.concat", "pandas.concat"),
269 ("pandas.tools.merge.merge", "pandas.merge"),
270 ("pandas.tools.pivot.pivot_table", "pandas.pivot_table"),
271 ("pandas.tseries.tools.to_datetime", "pandas.to_datetime"),
272 ("pandas.io.clipboard.read_clipboard", "pandas.read_clipboard"),
273 ("pandas.io.excel.ExcelFile.parse", "pandas.ExcelFile.parse"),
274 ("pandas.io.excel.read_excel", "pandas.read_excel"),
275 ("pandas.io.gbq.read_gbq", "pandas.read_gbq"),
276 ("pandas.io.html.read_html", "pandas.read_html"),
277 ("pandas.io.json.read_json", "pandas.read_json"),
278 ("pandas.io.parsers.read_csv", "pandas.read_csv"),
279 ("pandas.io.parsers.read_fwf", "pandas.read_fwf"),
280 ("pandas.io.parsers.read_table", "pandas.read_table"),
281 ("pandas.io.pickle.read_pickle", "pandas.read_pickle"),
282 ("pandas.io.pytables.HDFStore.append", "pandas.HDFStore.append"),
283 ("pandas.io.pytables.HDFStore.get", "pandas.HDFStore.get"),
284 ("pandas.io.pytables.HDFStore.put", "pandas.HDFStore.put"),
285 ("pandas.io.pytables.HDFStore.select", "pandas.HDFStore.select"),
286 ("pandas.io.pytables.read_hdf", "pandas.read_hdf"),
287 ("pandas.io.sql.read_sql", "pandas.read_sql"),
288 ("pandas.io.sql.read_frame", "pandas.read_frame"),
289 ("pandas.io.sql.write_frame", "pandas.write_frame"),
290 ("pandas.io.stata.read_stata", "pandas.read_stata"),
291 ]
292
293 # Again, tuples of (from_old, to_new)
294 moved_classes = [
295 ("pandas.tseries.resample.Resampler", "pandas.core.resample.Resampler"),
296 ("pandas.formats.style.Styler", "pandas.io.formats.style.Styler"),
297 ]
298
299 for old, new in moved_classes:
300 # the class itself...
301 moved_api_pages.append((old, new))
302
303 mod, classname = new.rsplit(".", 1)
304 klass = getattr(importlib.import_module(mod), classname)
305 methods = [
306 x for x in dir(klass) if not x.startswith("_") or x in ("__iter__", "__array__")
307 ]
308
309 for method in methods:
310 # ... and each of its public methods
311 moved_api_pages.append((f"{old}.{method}", f"{new}.{method}",))
312
313 if pattern is None:
314 html_additional_pages = {
315 "generated/" + page[0]: "api_redirect.html" for page in moved_api_pages
316 }
317
318
319 header = f"""\
320 .. currentmodule:: pandas
321
322 .. ipython:: python
323 :suppress:
324
325 import numpy as np
326 import pandas as pd
327
328 np.random.seed(123456)
329 np.set_printoptions(precision=4, suppress=True)
330 pd.options.display.max_rows = 15
331
332 import os
333 os.chdir(r'{os.path.dirname(os.path.dirname(__file__))}')
334 """
335
336
337 html_context = {
338 "redirects": {old: new for old, new in moved_api_pages},
339 "header": header,
340 }
341
342 # If false, no module index is generated.
343 html_use_modindex = True
344
345 # If false, no index is generated.
346 # html_use_index = True
347
348 # If true, the index is split into individual pages for each letter.
349 # html_split_index = False
350
351 # If true, links to the reST sources are added to the pages.
352 # html_show_sourcelink = True
353
354 # If true, an OpenSearch description file will be output, and all pages will
355 # contain a <link> tag referring to it. The value of this option must be the
356 # base URL from which the finished HTML is served.
357 # html_use_opensearch = ''
358
359 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
360 # html_file_suffix = ''
361
362 # Output file base name for HTML help builder.
363 htmlhelp_basename = "pandas"
364
365 # -- Options for nbsphinx ------------------------------------------------
366
367 nbsphinx_allow_errors = True
368
369 # -- Options for LaTeX output --------------------------------------------
370
371 latex_elements = {}
372
373 # The paper size ('letter' or 'a4').
374 # latex_paper_size = 'letter'
375
376 # The font size ('10pt', '11pt' or '12pt').
377 # latex_font_size = '10pt'
378
379 # Grouping the document tree into LaTeX files. List of tuples (source start
380 # file, target name, title, author, documentclass [howto/manual]).
381 latex_documents = [
382 (
383 "index",
384 "pandas.tex",
385 "pandas: powerful Python data analysis toolkit",
386 "Wes McKinney and the Pandas Development Team",
387 "manual",
388 )
389 ]
390
391 # The name of an image file (relative to this directory) to place at the top of
392 # the title page.
393 # latex_logo = None
394
395 # For "manual" documents, if this is true, then toplevel headings are parts,
396 # not chapters.
397 # latex_use_parts = False
398
399 # Additional stuff for the LaTeX preamble.
400 # latex_preamble = ''
401
402 # Documents to append as an appendix to all manuals.
403 # latex_appendices = []
404
405 # If false, no module index is generated.
406 # latex_use_modindex = True
407
408
409 if pattern is None:
410 intersphinx_mapping = {
411 "dateutil": ("https://dateutil.readthedocs.io/en/latest/", None),
412 "matplotlib": ("https://matplotlib.org/", None),
413 "numpy": ("https://numpy.org/doc/stable/", None),
414 "pandas-gbq": ("https://pandas-gbq.readthedocs.io/en/latest/", None),
415 "py": ("https://pylib.readthedocs.io/en/latest/", None),
416 "python": ("https://docs.python.org/3/", None),
417 "scipy": ("https://docs.scipy.org/doc/scipy/reference/", None),
418 "statsmodels": ("https://www.statsmodels.org/devel/", None),
419 "pyarrow": ("https://arrow.apache.org/docs/", None),
420 }
421
422 # extlinks alias
423 extlinks = {
424 "issue": ("https://github.com/pandas-dev/pandas/issues/%s", "GH"),
425 "wiki": ("https://github.com/pandas-dev/pandas/wiki/%s", "wiki "),
426 }
427
428
429 ipython_warning_is_error = False
430 ipython_exec_lines = [
431 "import numpy as np",
432 "import pandas as pd",
433 # This ensures correct rendering on system with console encoding != utf8
434 # (windows). It forces pandas to encode its output reprs using utf8
435 # wherever the docs are built. The docs' target is the browser, not
436 # the console, so this is fine.
437 'pd.options.display.encoding="utf8"',
438 ]
439
440
441 # Add custom Documenter to handle attributes/methods of an AccessorProperty
442 # eg pandas.Series.str and pandas.Series.dt (see GH9322)
443
444 import sphinx # noqa: E402 isort:skip
445 from sphinx.util import rpartition # noqa: E402 isort:skip
446 from sphinx.ext.autodoc import ( # noqa: E402 isort:skip
447 AttributeDocumenter,
448 Documenter,
449 MethodDocumenter,
450 )
451 from sphinx.ext.autosummary import Autosummary # noqa: E402 isort:skip
452
453
454 class AccessorDocumenter(MethodDocumenter):
455 """
456 Specialized Documenter subclass for accessors.
457 """
458
459 objtype = "accessor"
460 directivetype = "method"
461
462 # lower than MethodDocumenter so this is not chosen for normal methods
463 priority = 0.6
464
465 def format_signature(self):
466 # this method gives an error/warning for the accessors, therefore
467 # overriding it (accessor has no arguments)
468 return ""
469
470
471 class AccessorLevelDocumenter(Documenter):
472 """
473 Specialized Documenter subclass for objects on accessor level (methods,
474 attributes).
475 """
476
477 # This is the simple straightforward version
478 # modname is None, base the last elements (eg 'hour')
479 # and path the part before (eg 'Series.dt')
480 # def resolve_name(self, modname, parents, path, base):
481 # modname = 'pandas'
482 # mod_cls = path.rstrip('.')
483 # mod_cls = mod_cls.split('.')
484 #
485 # return modname, mod_cls + [base]
486 def resolve_name(self, modname, parents, path, base):
487 if modname is None:
488 if path:
489 mod_cls = path.rstrip(".")
490 else:
491 mod_cls = None
492 # if documenting a class-level object without path,
493 # there must be a current class, either from a parent
494 # auto directive ...
495 mod_cls = self.env.temp_data.get("autodoc:class")
496 # ... or from a class directive
497 if mod_cls is None:
498 mod_cls = self.env.temp_data.get("py:class")
499 # ... if still None, there's no way to know
500 if mod_cls is None:
501 return None, []
502 # HACK: this is added in comparison to ClassLevelDocumenter
503 # mod_cls still exists of class.accessor, so an extra
504 # rpartition is needed
505 modname, accessor = rpartition(mod_cls, ".")
506 modname, cls = rpartition(modname, ".")
507 parents = [cls, accessor]
508 # if the module name is still missing, get it like above
509 if not modname:
510 modname = self.env.temp_data.get("autodoc:module")
511 if not modname:
512 if sphinx.__version__ > "1.3":
513 modname = self.env.ref_context.get("py:module")
514 else:
515 modname = self.env.temp_data.get("py:module")
516 # ... else, it stays None, which means invalid
517 return modname, parents + [base]
518
519
520 class AccessorAttributeDocumenter(AccessorLevelDocumenter, AttributeDocumenter):
521 objtype = "accessorattribute"
522 directivetype = "attribute"
523
524 # lower than AttributeDocumenter so this is not chosen for normal
525 # attributes
526 priority = 0.6
527
528
529 class AccessorMethodDocumenter(AccessorLevelDocumenter, MethodDocumenter):
530 objtype = "accessormethod"
531 directivetype = "method"
532
533 # lower than MethodDocumenter so this is not chosen for normal methods
534 priority = 0.6
535
536
537 class AccessorCallableDocumenter(AccessorLevelDocumenter, MethodDocumenter):
538 """
539 This documenter lets us remove .__call__ from the method signature for
540 callable accessors like Series.plot
541 """
542
543 objtype = "accessorcallable"
544 directivetype = "method"
545
546 # lower than MethodDocumenter; otherwise the doc build prints warnings
547 priority = 0.5
548
549 def format_name(self):
550 return MethodDocumenter.format_name(self).rstrip(".__call__")
551
552
553 class PandasAutosummary(Autosummary):
554 """
555 This alternative autosummary class lets us override the table summary for
556 Series.plot and DataFrame.plot in the API docs.
557 """
558
559 def _replace_pandas_items(self, display_name, sig, summary, real_name):
560 # this is a hack: ideally we should extract the signature from the
561 # .__call__ method instead of hard coding this
562 if display_name == "DataFrame.plot":
563 sig = "([x, y, kind, ax, ....])"
564 summary = "DataFrame plotting accessor and method"
565 elif display_name == "Series.plot":
566 sig = "([kind, ax, figsize, ....])"
567 summary = "Series plotting accessor and method"
568 return (display_name, sig, summary, real_name)
569
570 @staticmethod
571 def _is_deprecated(real_name):
572 try:
573 obj, parent, modname = _import_by_name(real_name)
574 except ImportError:
575 return False
576 doc = NumpyDocString(obj.__doc__ or "")
577 summary = "".join(doc["Summary"] + doc["Extended Summary"])
578 return ".. deprecated::" in summary
579
580 def _add_deprecation_prefixes(self, items):
581 for item in items:
582 display_name, sig, summary, real_name = item
583 if self._is_deprecated(real_name):
584 summary = f"(DEPRECATED) {summary}"
585 yield display_name, sig, summary, real_name
586
587 def get_items(self, names):
588 items = Autosummary.get_items(self, names)
589 items = [self._replace_pandas_items(*item) for item in items]
590 items = list(self._add_deprecation_prefixes(items))
591 return items
592
593
594 # based on numpy doc/source/conf.py
595 def linkcode_resolve(domain, info):
596 """
597 Determine the URL corresponding to Python object
598 """
599 if domain != "py":
600 return None
601
602 modname = info["module"]
603 fullname = info["fullname"]
604
605 submod = sys.modules.get(modname)
606 if submod is None:
607 return None
608
609 obj = submod
610 for part in fullname.split("."):
611 try:
612 obj = getattr(obj, part)
613 except AttributeError:
614 return None
615
616 try:
617 fn = inspect.getsourcefile(inspect.unwrap(obj))
618 except TypeError:
619 fn = None
620 if not fn:
621 return None
622
623 try:
624 source, lineno = inspect.getsourcelines(obj)
625 except OSError:
626 lineno = None
627
628 if lineno:
629 linespec = f"#L{lineno}-L{lineno + len(source) - 1}"
630 else:
631 linespec = ""
632
633 fn = os.path.relpath(fn, start=os.path.dirname(pandas.__file__))
634
635 if "+" in pandas.__version__:
636 return f"https://github.com/pandas-dev/pandas/blob/master/pandas/{fn}{linespec}"
637 else:
638 return (
639 f"https://github.com/pandas-dev/pandas/blob/"
640 f"v{pandas.__version__}/pandas/{fn}{linespec}"
641 )
642
643
644 # remove the docstring of the flags attribute (inherited from numpy ndarray)
645 # because these give doc build errors (see GH issue 5331)
646 def remove_flags_docstring(app, what, name, obj, options, lines):
647 if what == "attribute" and name.endswith(".flags"):
648 del lines[:]
649
650
651 def process_class_docstrings(app, what, name, obj, options, lines):
652 """
653 For those classes for which we use ::
654
655 :template: autosummary/class_without_autosummary.rst
656
657 the documented attributes/methods have to be listed in the class
658 docstring. However, if one of those lists is empty, we use 'None',
659 which then generates warnings in sphinx / ugly html output.
660 This "autodoc-process-docstring" event connector removes that part
661 from the processed docstring.
662
663 """
664 if what == "class":
665 joined = "\n".join(lines)
666
667 templates = [
668 """.. rubric:: Attributes
669
670 .. autosummary::
671 :toctree:
672
673 None
674 """,
675 """.. rubric:: Methods
676
677 .. autosummary::
678 :toctree:
679
680 None
681 """,
682 ]
683
684 for template in templates:
685 if template in joined:
686 joined = joined.replace(template, "")
687 lines[:] = joined.split("\n")
688
689
690 suppress_warnings = [
691 # We "overwrite" autosummary with our PandasAutosummary, but
692 # still want the regular autosummary setup to run. So we just
693 # suppress this warning.
694 "app.add_directive"
695 ]
696 if pattern:
697 # When building a single document we don't want to warn because references
698 # to other documents are unknown, as it's expected
699 suppress_warnings.append("ref.ref")
700
701
702 def rstjinja(app, docname, source):
703 """
704 Render our pages as a jinja template for fancy templating goodness.
705 """
706 # https://www.ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/
707 # Make sure we're outputting HTML
708 if app.builder.format != "html":
709 return
710 src = source[0]
711 rendered = app.builder.templates.render_string(src, app.config.html_context)
712 source[0] = rendered
713
714
715 def setup(app):
716 app.connect("source-read", rstjinja)
717 app.connect("autodoc-process-docstring", remove_flags_docstring)
718 app.connect("autodoc-process-docstring", process_class_docstrings)
719 app.add_autodocumenter(AccessorDocumenter)
720 app.add_autodocumenter(AccessorAttributeDocumenter)
721 app.add_autodocumenter(AccessorMethodDocumenter)
722 app.add_autodocumenter(AccessorCallableDocumenter)
723 app.add_directive("autosummary", PandasAutosummary)
724
[end of doc/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
73c1d3269830d787c8990de8f02bf4279d2720ab
|
DOC: document dropna kwarg of pd.factorize
#### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.factorize.html
#### Documentation problem
The docs show the existence of a kwarg "dropna" which does not exist
#### Suggested fix for documentation
Delete the kwarg "dropna"
|
2020-08-22T10:50:31Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -35,6 +35,14 @@ Bug fixes
.. ---------------------------------------------------------------------------
+.. _whatsnew_112.other:
+
+Other
+~~~~~
+- :meth:`factorize` now supports ``na_sentinel=None`` to include NaN in the uniques of the values and remove ``dropna`` keyword which was unintentionally exposed to public facing API in 1.1 version from :meth:`factorize`(:issue:`35667`)
+
+.. ---------------------------------------------------------------------------
+
.. _whatsnew_112.contributors:
Contributors
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -526,9 +526,8 @@ def _factorize_array(
def factorize(
values,
sort: bool = False,
- na_sentinel: int = -1,
+ na_sentinel: Optional[int] = -1,
size_hint: Optional[int] = None,
- dropna: bool = True,
) -> Tuple[np.ndarray, Union[np.ndarray, ABCIndex]]:
"""
Encode the object as an enumerated type or categorical variable.
@@ -541,8 +540,11 @@ def factorize(
Parameters
----------
{values}{sort}
- na_sentinel : int, default -1
- Value to mark "not found".
+ na_sentinel : int or None, default -1
+ Value to mark "not found". If None, will not drop the NaN
+ from the uniques of the values.
+
+ .. versionchanged:: 1.1.2
{size_hint}\
Returns
@@ -620,6 +622,22 @@ def factorize(
array([0, 0, 1]...)
>>> uniques
Index(['a', 'c'], dtype='object')
+
+ If NaN is in the values, and we want to include NaN in the uniques of the
+ values, it can be achieved by setting ``na_sentinel=None``.
+
+ >>> values = np.array([1, 2, 1, np.nan])
+ >>> codes, uniques = pd.factorize(values) # default: na_sentinel=-1
+ >>> codes
+ array([ 0, 1, 0, -1])
+ >>> uniques
+ array([1., 2.])
+
+ >>> codes, uniques = pd.factorize(values, na_sentinel=None)
+ >>> codes
+ array([0, 1, 0, 2])
+ >>> uniques
+ array([ 1., 2., nan])
"""
# Implementation notes: This method is responsible for 3 things
# 1.) coercing data to array-like (ndarray, Index, extension array)
@@ -633,6 +651,13 @@ def factorize(
values = _ensure_arraylike(values)
original = values
+ # GH35667, if na_sentinel=None, we will not dropna NaNs from the uniques
+ # of values, assign na_sentinel=-1 to replace code value for NaN.
+ dropna = True
+ if na_sentinel is None:
+ na_sentinel = -1
+ dropna = False
+
if is_extension_array_dtype(values.dtype):
values = extract_array(values)
codes, uniques = values.factorize(na_sentinel=na_sentinel)
diff --git a/pandas/core/base.py b/pandas/core/base.py
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1398,7 +1398,7 @@ def memory_usage(self, deep=False):
"""
),
)
- def factorize(self, sort=False, na_sentinel=-1):
+ def factorize(self, sort: bool = False, na_sentinel: Optional[int] = -1):
return algorithms.factorize(self, sort=sort, na_sentinel=na_sentinel)
_shared_docs[
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -587,8 +587,13 @@ def _make_codes(self) -> None:
codes = self.grouper.codes_info
uniques = self.grouper.result_index
else:
+ # GH35667, replace dropna=False with na_sentinel=None
+ if not self.dropna:
+ na_sentinel = None
+ else:
+ na_sentinel = -1
codes, uniques = algorithms.factorize(
- self.grouper, sort=self.sort, dropna=self.dropna
+ self.grouper, sort=self.sort, na_sentinel=na_sentinel
)
uniques = Index(uniques, name=self.name)
self._codes = codes
</patch>
|
[]
|
[]
| ||||
apache__airflow-22506
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BigQueryToGCSOperator: Invalid dataset ID error
### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==6.3.0`
### Apache Airflow version
2.2.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
- Composer Environment version: `composer-2.0.3-airflow-2.2.3`
### What happened
When I use BigQueryToGCSOperator, I get the following error.
```
Invalid dataset ID "MY_PROJECT:MY_DATASET". Dataset IDs must be alphanumeric (plus underscores and dashes) and must be at most 1024 characters long.
```
### What you expected to happen
I guess that this is because I use a colon (`:`) as the separator between project_id and dataset_id in `source_project_dataset_table`.
I tried using a dot (`.`) as the separator instead and it worked.
However, the [documentation of BigQueryToGCSOperator](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/bigquery_to_gcs/index.html) states that it is possible to use a colon as the separator between project_id and dataset_id. In fact, at least until Airflow 1.10.15, it also worked with the colon separator.
In Airflow 1.10.*, the BigQuery hook separated out project_id and dataset_id at the colon. But `apache-airflow-providers-google==6.3.0` no longer has this logic.
https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/contrib/hooks/bigquery_hook.py#L2186-L2247
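For illustration only (this is not the actual hook code linked above), a rough sketch of the kind of splitting that the 1.10 hook performed, accepting either `PROJECT:DATASET.TABLE` or `PROJECT.DATASET.TABLE`, might look like this:

```python
# Illustrative sketch only; the real Airflow 1.10 helper handles more edge cases
# (default project fallback, validation, quoted table names, etc.).
def split_tablename(table_input: str):
    """Split 'PROJECT:DATASET.TABLE' or 'PROJECT.DATASET.TABLE' into its parts."""
    if ":" in table_input:
        # A colon separates the project from the rest, as in the legacy BigQuery format.
        project_id, rest = table_input.split(":", 1)
    else:
        project_id, rest = table_input.split(".", 1)
    dataset_id, table_id = rest.split(".", 1)
    return project_id, dataset_id, table_id


# split_tablename('MY_PROJECT:MY_DATASET.MY_TABLE')
# -> ('MY_PROJECT', 'MY_DATASET', 'MY_TABLE')
```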
### How to reproduce
You can reproduce it with the following steps (a minimal DAG sketch is shown after the snippet below):
- Create a test DAG that executes BigQueryToGCSOperator in a Composer environment (`composer-2.0.3-airflow-2.2.3`).
- Give the `source_project_dataset_table` arg the source BigQuery table path in the following format.
- Trigger the DAG.
```
source_project_dataset_table = 'PROJECT_ID:DATASET_ID.TABLE_ID'
```
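For reference, here is a minimal DAG sketch that exercises the colon-separated table path. All project, dataset, table and bucket names are placeholders, and it assumes `apache-airflow-providers-google` is installed; it is a sketch of a reproduction, not code taken from the report.

```python
# Minimal reproduction sketch; all identifiers below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

with DAG(
    dag_id="bq_to_gcs_colon_separator_repro",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    export_table = BigQueryToGCSOperator(
        task_id="export_table",
        # The colon between project and dataset triggers the "Invalid dataset ID" error
        # on apache-airflow-providers-google==6.3.0.
        source_project_dataset_table="PROJECT_ID:DATASET_ID.TABLE_ID",
        destination_cloud_storage_uris=["gs://MY_BUCKET/export/table-*.csv"],
    )
```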
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of README.md]
1 <!--
2 Licensed to the Apache Software Foundation (ASF) under one
3 or more contributor license agreements. See the NOTICE file
4 distributed with this work for additional information
5 regarding copyright ownership. The ASF licenses this file
6 to you under the Apache License, Version 2.0 (the
7 "License"); you may not use this file except in compliance
8 with the License. You may obtain a copy of the License at
9
10 http://www.apache.org/licenses/LICENSE-2.0
11
12 Unless required by applicable law or agreed to in writing,
13 software distributed under the License is distributed on an
14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 KIND, either express or implied. See the License for the
16 specific language governing permissions and limitations
17 under the License.
18 -->
19
20 # Apache Airflow
21
22 [](https://badge.fury.io/py/apache-airflow)
23 [](https://github.com/apache/airflow/actions)
24 [](https://codecov.io/github/apache/airflow?branch=main)
25 [](https://www.apache.org/licenses/LICENSE-2.0.txt)
26 [](https://pypi.org/project/apache-airflow/)
27 [](https://hub.docker.com/r/apache/airflow)
28 [](https://hub.docker.com/r/apache/airflow)
29 [](https://pypi.org/project/apache-airflow/)
30 [](https://artifacthub.io/packages/search?repo=apache-airflow)
31 [](https://github.com/psf/black)
32 [](https://twitter.com/ApacheAirflow)
33 [](https://s.apache.org/airflow-slack)
34
35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows.
36
37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
38
39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
40
41 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
43 **Table of contents**
44
45 - [Project Focus](#project-focus)
46 - [Principles](#principles)
47 - [Requirements](#requirements)
48 - [Getting started](#getting-started)
49 - [Installing from PyPI](#installing-from-pypi)
50 - [Official source code](#official-source-code)
51 - [Convenience packages](#convenience-packages)
52 - [User Interface](#user-interface)
53 - [Semantic versioning](#semantic-versioning)
54 - [Version Life Cycle](#version-life-cycle)
55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions)
56 - [Base OS support for reference Airflow images](#base-os-support-for-reference-airflow-images)
57 - [Approach to dependencies of Airflow](#approach-to-dependencies-of-airflow)
58 - [Support for providers](#support-for-providers)
59 - [Contributing](#contributing)
60 - [Who uses Apache Airflow?](#who-uses-apache-airflow)
61 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow)
62 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
63 - [Airflow merchandise](#airflow-merchandise)
64 - [Links](#links)
65 - [Sponsors](#sponsors)
66
67 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
68
69 ## Project Focus
70
71 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/).
72
73 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work.
74
75 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
76
77 ## Principles
78
79 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
80 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment.
81 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine.
82 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
83
84 ## Requirements
85
86 Apache Airflow is tested with:
87
88 | | Main version (dev) | Stable version (2.2.4) |
89 |---------------------|-------------------------|--------------------------|
90 | Python | 3.7, 3.8, 3.9, 3.10 | 3.6, 3.7, 3.8, 3.9 |
91 | Platform | AMD64/ARM64(\*) | AMD64 |
92 | Kubernetes | 1.20, 1.21, 1.22, 1.23 | 1.18, 1.19, 1.20 |
93 | PostgreSQL | 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 |
94 | MySQL | 5.7, 8 | 5.7, 8 |
95 | SQLite | 3.15.0+ | 3.15.0+ |
96 | MSSQL | 2017(\*), 2019 (\*) | |
97
98 \* Experimental
99
100 **Note**: MySQL 5.x versions are unable to or have limitations with
101 running multiple schedulers -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html).
102 MariaDB is not tested/recommended.
103
104 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend
105 using the latest stable version of SQLite for local development.
106
107 **Note**: Python v3.10 is not supported yet. For details, see [#19059](https://github.com/apache/airflow/issues/19059).
108
109 **Note**: Airflow currently can be run on POSIX-compliant Operating Systems. For development it is regularly
110 tested on fairly modern Linux Distros and recent versions of MacOS.
111 On Windows you can run it via WSL2 (Windows Subsystem for Linux 2) or via Linux Containers.
112 The work to add Windows support is tracked via [#10388](https://github.com/apache/airflow/issues/10388) but
113 it is not a high priority. You should only use Linux-based distros as "Production" execution environment
114 as this is the only environment that is supported. The only distro that is used in our CI tests and that
115 is used in the [Community managed DockerHub image](https://hub.docker.com/p/apache/airflow) is
116 `Debian Bullseye`.
117
118 ## Getting started
119
120 Visit the official Airflow website documentation (latest **stable** release) for help with
121 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html),
122 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking
123 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html).
124
125 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/).
126
127 For more information on Airflow Improvement Proposals (AIPs), visit
128 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals).
129
130 Documentation for dependent projects like provider packages, Docker image, Helm Chart, you'll find it in [the documentation index](https://airflow.apache.org/docs/).
131
132 ## Installing from PyPI
133
134 We publish Apache Airflow as `apache-airflow` package in PyPI. Installing it however might be sometimes tricky
135 because Airflow is a bit of both a library and application. Libraries usually keep their dependencies open, and
136 applications usually pin them, but we should do neither and both simultaneously. We decided to keep
137 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries
138 if needed. This means that `pip install apache-airflow` will not work from time to time or will
139 produce unusable Airflow installation.
140
141 To have repeatable installation, however, we keep a set of "known-to-be-working" constraint
142 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working"
143 constraints files separately per major/minor Python version.
144 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
145 correct Airflow tag/version/branch and Python versions in the URL.
146
147
148 1. Installing just Airflow:
149
150 > Note: Only `pip` installation is currently officially supported.
151
152 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or
153 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as
154 `pip` - especially when it comes to constraint vs. requirements management.
155 Installing via `Poetry` or `pip-tools` is not currently supported.
156
157 If you wish to install Airflow using those tools, you should use the constraint files and convert
158 them to the appropriate format and workflow that your tool requires.
159
160
161 ```bash
162 pip install 'apache-airflow==2.2.4' \
163 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.7.txt"
164 ```
165
166 2. Installing with extras (i.e., postgres, google)
167
168 ```bash
169 pip install 'apache-airflow[postgres,google]==2.2.4' \
170 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.7.txt"
171 ```
172
173 For information on installing provider packages, check
174 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html).
175
176 ## Official source code
177
178 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project,
179 and our official source code releases:
180
181 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html)
182 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow)
183 - Are cryptographically signed by the release manager
184 - Are officially voted on by the PMC members during the
185 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval)
186
187 Following the ASF rules, the source packages released must be sufficient for a user to build and test the
188 release provided they have access to the appropriate platform and tools.
189
190 ## Convenience packages
191
192 There are other ways of installing and using Airflow. Those are "convenience" methods - they are
193 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users
194 who do not want to build the software themselves.
195
196 Those are - in the order of most common ways people install Airflow:
197
198 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool
199 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via
200 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can
201 read more about using, customising, and extending the images in the
202 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and
203 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document.
204 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that
205 were used to generate official source packages via git
206
207 All those artifacts are not official releases, but they are prepared using officially released sources.
208 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such
209 following the ASF Policy.
210
211 ## User Interface
212
213 - **DAGs**: Overview of all DAGs in your environment.
214
215 
216
217 - **Tree**: Tree representation of a DAG that spans across time.
218
219 
220
221 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run.
222
223 
224
225 - **Task Duration**: Total time spent on different tasks over time.
226
227 
228
229 - **Gantt**: Duration and overlap of a DAG.
230
231 
232
233 - **Code**: Quick way to view source code of a DAG.
234
235 
236
237 ## Semantic versioning
238
239 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released.
240
241 There are few specific rules that we agreed to that define details of versioning of the different
242 packages:
243
244 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers).
245 Changing limits for versions of Airflow dependencies is not a breaking change on its own.
246 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only.
247 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version.
248 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed
249 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages,
250 they are present in providers as `install_requires` limitations. We aim to keep backwards
251 compatibility of providers with all previously released Airflow 2 versions but
252 there will sometimes be breaking changes that might make some, or all
253 providers, have minimum Airflow version specified. Change of that minimum supported Airflow version
254 is a breaking change for provider because installing the new provider might automatically
255 upgrade Airflow (which might be an undesired side effect of upgrading provider).
256 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR
257 versions for the chart are independent from the Airflow version. We aim to keep backwards
258 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might
259 only work starting from specific Airflow releases. We might however limit the Helm
260 Chart to depend on minimal Airflow version.
261 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow.
262 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of
263 all clients. The clients then can release their own PATCH releases with bugfixes,
264 independently of Airflow PATCH releases.
265
266 ## Version Life Cycle
267
268 Apache Airflow version life cycle:
269
270 <!-- This table is automatically updated by pre-commit scripts/ci/pre-commit/supported_versions.py -->
271 <!-- Beginning of auto-generated table -->
272
273 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated |
274 |-----------|-----------------------|-----------|-----------------|-------------------|------------------|
275 | 2 | 2.2.4 | Supported | Dec 17, 2020 | TBD | TBD |
276 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 |
277 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 |
278 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 |
279 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 |
280
281 <!-- End of auto-generated table -->
282
283 Limited support versions will be supported with security and critical bug fix only.
284 EOL versions will not get any fixes nor support.
285 We always recommend that all users run the latest available minor release for whatever major version is in use.
286 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date.
287
288 ## Support for Python and Kubernetes versions
289
290 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support.
291 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the
292 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and
293 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/).
294
295 1. We drop support for Python and Kubernetes versions when they reach EOL. Except for kubernetes, a
296 version stays supported by Airflow if two major cloud providers still provide support for it. We drop
297 support for those EOL versions in main right after the EOL date, and it is effectively removed when we release
298 the first new MINOR (or MAJOR if there is no new MINOR version) of Airflow. For example, for Python 3.7 it
299 means that we will drop support in main right after 27.06.2023, and the first MAJOR or MINOR version of
300 Airflow released after that will not have it.
301
302 2. The "oldest" supported version of Python/Kubernetes is the default one until we decide to switch to
303 later version. "Default" is only meaningful in terms of "smoke tests" in CI PRs, which are run using this
304 default version and the default reference image available. Currently `apache/airflow:latest`
305 and `apache/airflow:2.2.4` images are Python 3.7 images. This means that the Python 3.7 reference image
306 stays the default until we start preparing to drop 3.7 support, which is a few months
307 before the end of life for Python 3.7.
308
309 3. We support a new version of Python/Kubernetes in main after they are officially released, as soon as we
310 make them work in our CI pipeline (which might not be immediate due to dependencies catching up with
311 new versions of Python mostly) we release new images/support in Airflow based on the working CI setup.
312
313 ## Base OS support for reference Airflow images
314
315 The Airflow Community provides conveniently packaged container images that are published whenever
316 we publish an Apache Airflow release. Those images contain:
317
318 * Base OS with necessary packages to install Airflow (stable Debian OS)
319 * Base Python installation in versions supported at the time of release for the MINOR version of
320 Airflow released (so there could be different versions for 2.3 and 2.2 line for example)
321 * Libraries required to connect to supported Databases (again, the set of databases supported depends
322 on the MINOR version of Airflow).
323 * Predefined set of popular providers (for details see the [Dockerfile](Dockerfile)).
324 * Possibility of building your own, custom image where the user can choose their own set of providers
325 and libraries (see [Building the image](https://airflow.apache.org/docs/docker-stack/build.html))
326 * In the future Airflow might also support a "slim" version without providers nor database clients installed
327
328 The version of the base OS image is the stable version of Debian. Airflow supports using all currently active
329 stable versions - as soon as all Airflow dependencies support building, and we set up the CI pipeline for
330 building and testing the OS version. Approximately 6 months before the end-of-life of a previous stable
331 version of the OS, Airflow switches the images released to use the latest supported version of the OS.
332 For example since Debian Buster end-of-life is August 2022, Airflow switches the images in `main` branch
333 to use Debian Bullseye in February/March 2022. The version will be used in the next MINOR release after
334 the switch happens. In case of the Bullseye switch - 2.3.0 version will use Bullseye. The images released
335 in the previous MINOR version continue to use the version that all other releases for the MINOR version
336 used.
337
338 Users will continue to be able to build their images using stable Debian releases until their end of life;
339 building and verifying those images happens in our CI, but no unit tests are executed using this image in
340 the `main` branch.
341
342 ## Approach to dependencies of Airflow
343
344 Airflow has a lot of dependencies - direct and transitive - and Airflow is both a library and an application,
345 therefore our dependency policies have to cover both stability of the application's installation
346 and the ability to install newer versions of dependencies for those users who develop DAGs. We developed
347 the approach where `constraints` are used to make sure Airflow can be installed in a repeatable way, while
348 we do not limit our users from upgrading most of the dependencies. As a result we decided not to upper-bound
349 versions of Airflow dependencies by default, unless we have good reasons to believe upper-bounding them is
350 needed because of the importance of the dependency and the risk involved in upgrading that specific dependency.
351 We also upper-bound the dependencies that we know cause problems.
352
353 Our constraint mechanism takes care of finding and upgrading all the non-upper-bound dependencies
354 automatically (provided that all the tests pass). Failures of our `main` build indicate when there
355 are versions of dependencies that break our tests - a sign that we should either upper-bound them or
356 fix our code/tests to account for the upstream changes in those dependencies.
357
358 Whenever we upper-bound such a dependency, we should always comment why we are doing it - i.e. we should have
359 a good reason why the dependency is upper-bound, and we should also mention the condition under which the
360 bound can be removed.
361
362 ### Approach for dependencies for Airflow Core
363
364 The core dependencies of Airflow are maintained in `setup.cfg`.
365
366 There are a few dependencies that we decided are important enough to upper-bound by default, as they are
367 known to follow a predictable versioning scheme, and we know that new versions of them are very likely to
368 bring breaking changes. We commit to regularly reviewing and attempting to upgrade to newer versions of
369 these dependencies as they are released, but this is a manual process.
370
371 The important dependencies are:
372
373 * `SQLAlchemy`: upper-bound to a specific MINOR version (SQLAlchemy is known to remove deprecations and
374   introduce breaking changes, especially since support for different databases varies and changes at
375   various speeds; for example, SQLAlchemy 1.4 broke the MSSQL integration for Airflow)
376 * `Alembic`: it is important to handle our migrations in a predictable and performant way. It is developed
377   together with SQLAlchemy. Our experience with Alembic is that it is very stable within a MINOR version
378 * `Flask`: We are using Flask as the backbone of our web UI and API. We know major versions of Flask
379   are very likely to introduce breaking changes across both, so limiting it to a MAJOR version makes sense
380 * `werkzeug`: the library is known to cause problems in new versions. It is tightly coupled with the Flask
381   libraries, and we should update them together
382
383 ### Approach for dependencies in Airflow Providers and extras
384
385 Those `extras` and `providers` dependencies are maintained in `setup.py`.
386
387 By default, we should not upper-bound dependencies for providers; however, each provider's maintainer
388 might decide to add additional limits (and justify them with a comment).
389
390 ## Support for providers
391
392 Providers released by the community have a limitation of the minimum supported version of Airflow. The minimum
393 version of Airflow is the `MINOR` version (2.1, 2.2 etc.), indicating that the providers might use features
394 that appeared in this release. The default support timespan for the minimum version of Airflow
395 (there could be justified exceptions) is that we increase the minimum Airflow version when 12 months have
396 passed since the first release of that MINOR version of Airflow.
397
398 For example, this means that by default we upgrade the minimum version of Airflow supported by providers
399 to 2.2.0 in the first Provider's release after 21st of May 2022 (21st of May 2021 is the date when the
400 first `PATCHLEVEL` of 2.1, i.e. 2.1.0, was released).
401
402 ## Contributing
403
404 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst).
405
406 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst).
407
408 ## Who uses Apache Airflow?
409
410 More than 400 organizations are using Apache Airflow
411 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md).
412
413 ## Who Maintains Apache Airflow?
414
415 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
416 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
417 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests.
418 If you would like to become a maintainer, please review the Apache Airflow
419 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer).
420
421 ## Can I use the Apache Airflow logo in my presentation?
422
423 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
424
425 ## Airflow merchandise
426
427 If you would love to have Apache Airflow stickers, t-shirts, etc., then check out
428 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5).
429
430 ## Links
431
432 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/)
433 - [Chat](https://s.apache.org/airflow-slack)
434
435 ## Sponsors
436
437 The CI infrastructure for Apache Airflow has been sponsored by:
438
439 <!-- Ordered by most recently "funded" -->
440
441 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a>
442 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a>
443
[end of README.md]
[start of airflow/providers/google/cloud/example_dags/example_bigquery_to_gcs.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18
19 """
20 Example Airflow DAG for Google BigQuery service.
21 """
22 import os
23 from datetime import datetime
24
25 from airflow import models
26 from airflow.providers.google.cloud.operators.bigquery import (
27 BigQueryCreateEmptyDatasetOperator,
28 BigQueryCreateEmptyTableOperator,
29 BigQueryDeleteDatasetOperator,
30 )
31 from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator
32
33 PROJECT_ID = os.environ.get("GCP_PROJECT_ID", "example-project")
34 DATASET_NAME = os.environ.get("GCP_BIGQUERY_DATASET_NAME", "test_dataset_transfer")
35 DATA_EXPORT_BUCKET_NAME = os.environ.get("GCP_BIGQUERY_EXPORT_BUCKET_NAME", "INVALID BUCKET NAME")
36 TABLE = "table_42"
37
38 with models.DAG(
39 "example_bigquery_to_gcs",
40 schedule_interval=None, # Override to match your needs
41 start_date=datetime(2021, 1, 1),
42 catchup=False,
43 tags=["example"],
44 ) as dag:
45 bigquery_to_gcs = BigQueryToGCSOperator(
46 task_id="bigquery_to_gcs",
47 source_project_dataset_table=f"{DATASET_NAME}.{TABLE}",
48 destination_cloud_storage_uris=[f"gs://{DATA_EXPORT_BUCKET_NAME}/export-bigquery.csv"],
49 )
50
51 create_dataset = BigQueryCreateEmptyDatasetOperator(task_id="create_dataset", dataset_id=DATASET_NAME)
52
53 create_table = BigQueryCreateEmptyTableOperator(
54 task_id="create_table",
55 dataset_id=DATASET_NAME,
56 table_id=TABLE,
57 schema_fields=[
58 {"name": "emp_name", "type": "STRING", "mode": "REQUIRED"},
59 {"name": "salary", "type": "INTEGER", "mode": "NULLABLE"},
60 ],
61 )
62 create_dataset >> create_table >> bigquery_to_gcs
63
64 delete_dataset = BigQueryDeleteDatasetOperator(
65 task_id="delete_dataset", dataset_id=DATASET_NAME, delete_contents=True
66 )
67
68 bigquery_to_gcs >> delete_dataset
69
[end of airflow/providers/google/cloud/example_dags/example_bigquery_to_gcs.py]
[start of airflow/providers/google/cloud/example_dags/example_bigquery_transfer.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18
19 """
20 Example Airflow DAG for Google BigQuery service.
21 """
22 import os
23 from datetime import datetime
24
25 from airflow import models
26 from airflow.providers.google.cloud.operators.bigquery import (
27 BigQueryCreateEmptyDatasetOperator,
28 BigQueryCreateEmptyTableOperator,
29 BigQueryDeleteDatasetOperator,
30 )
31 from airflow.providers.google.cloud.transfers.bigquery_to_bigquery import BigQueryToBigQueryOperator
32 from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator
33
34 PROJECT_ID = os.environ.get("GCP_PROJECT_ID", "example-project")
35 DATASET_NAME = os.environ.get("GCP_BIGQUERY_DATASET_NAME", "test_dataset_transfer")
36 DATA_EXPORT_BUCKET_NAME = os.environ.get("GCP_BIGQUERY_EXPORT_BUCKET_NAME", "INVALID BUCKET NAME")
37 ORIGIN = "origin"
38 TARGET = "target"
39
40 with models.DAG(
41 "example_bigquery_transfer",
42 schedule_interval=None, # Override to match your needs
43 start_date=datetime(2021, 1, 1),
44 catchup=False,
45 tags=["example"],
46 ) as dag:
47 copy_selected_data = BigQueryToBigQueryOperator(
48 task_id="copy_selected_data",
49 source_project_dataset_tables=f"{DATASET_NAME}.{ORIGIN}",
50 destination_project_dataset_table=f"{DATASET_NAME}.{TARGET}",
51 )
52
53 bigquery_to_gcs = BigQueryToGCSOperator(
54 task_id="bigquery_to_gcs",
55 source_project_dataset_table=f"{DATASET_NAME}.{ORIGIN}",
56 destination_cloud_storage_uris=[f"gs://{DATA_EXPORT_BUCKET_NAME}/export-bigquery.csv"],
57 )
58
59 create_dataset = BigQueryCreateEmptyDatasetOperator(task_id="create_dataset", dataset_id=DATASET_NAME)
60
61 for table in [ORIGIN, TARGET]:
62 create_table = BigQueryCreateEmptyTableOperator(
63 task_id=f"create_{table}_table",
64 dataset_id=DATASET_NAME,
65 table_id=table,
66 schema_fields=[
67 {"name": "emp_name", "type": "STRING", "mode": "REQUIRED"},
68 {"name": "salary", "type": "INTEGER", "mode": "NULLABLE"},
69 ],
70 )
71 create_dataset >> create_table >> [copy_selected_data, bigquery_to_gcs]
72
73 delete_dataset = BigQueryDeleteDatasetOperator(
74 task_id="delete_dataset", dataset_id=DATASET_NAME, delete_contents=True
75 )
76
77 [copy_selected_data, bigquery_to_gcs] >> delete_dataset
78
[end of airflow/providers/google/cloud/example_dags/example_bigquery_transfer.py]
[start of airflow/providers/google/cloud/transfers/bigquery_to_gcs.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 """This module contains Google BigQuery to Google Cloud Storage operator."""
19 import warnings
20 from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence, Union
21
22 from google.cloud.bigquery.table import TableReference
23
24 from airflow.models import BaseOperator
25 from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
26
27 if TYPE_CHECKING:
28 from airflow.utils.context import Context
29
30
31 class BigQueryToGCSOperator(BaseOperator):
32 """
33 Transfers a BigQuery table to a Google Cloud Storage bucket.
34
35 .. seealso::
36 For more details about these parameters:
37 https://cloud.google.com/bigquery/docs/reference/v2/jobs
38
39 :param source_project_dataset_table: The dotted
40 ``(<project>.|<project>:)<dataset>.<table>`` BigQuery table to use as the
41 source data. If ``<project>`` is not included, project will be the project
42 defined in the connection json. (templated)
43 :param destination_cloud_storage_uris: The destination Google Cloud
44 Storage URI (e.g. gs://some-bucket/some-file.txt). (templated) Follows
45 convention defined here:
46 https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple
47 :param compression: Type of compression to use.
48 :param export_format: File format to export.
49 :param field_delimiter: The delimiter to use when extracting to a CSV.
50 :param print_header: Whether to print a header for a CSV file extract.
51 :param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.
52 :param bigquery_conn_id: (Deprecated) The connection ID used to connect to Google Cloud.
53 This parameter has been deprecated. You should pass the gcp_conn_id parameter instead.
54 :param delegate_to: The account to impersonate using domain-wide delegation of authority,
55 if any. For this to work, the service account making the request must have
56 domain-wide delegation enabled.
57 :param labels: a dictionary containing labels for the job/query,
58 passed to BigQuery
59 :param location: The location used for the operation.
60 :param impersonation_chain: Optional service account to impersonate using short-term
61 credentials, or chained list of accounts required to get the access_token
62 of the last account in the list, which will be impersonated in the request.
63 If set as a string, the account must grant the originating account
64 the Service Account Token Creator IAM role.
65 If set as a sequence, the identities from the list must grant
66 Service Account Token Creator IAM role to the directly preceding identity, with first
67 account from the list granting this role to the originating account (templated).
68 """
69
70 template_fields: Sequence[str] = (
71 'source_project_dataset_table',
72 'destination_cloud_storage_uris',
73 'labels',
74 'impersonation_chain',
75 )
76 template_ext: Sequence[str] = ()
77 ui_color = '#e4e6f0'
78
79 def __init__(
80 self,
81 *,
82 source_project_dataset_table: str,
83 destination_cloud_storage_uris: List[str],
84 compression: str = 'NONE',
85 export_format: str = 'CSV',
86 field_delimiter: str = ',',
87 print_header: bool = True,
88 gcp_conn_id: str = 'google_cloud_default',
89 bigquery_conn_id: Optional[str] = None,
90 delegate_to: Optional[str] = None,
91 labels: Optional[Dict] = None,
92 location: Optional[str] = None,
93 impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
94 **kwargs,
95 ) -> None:
96 super().__init__(**kwargs)
97
98 if bigquery_conn_id:
99 warnings.warn(
100 "The bigquery_conn_id parameter has been deprecated. You should pass "
101 "the gcp_conn_id parameter.",
102 DeprecationWarning,
103 stacklevel=3,
104 )
105 gcp_conn_id = bigquery_conn_id
106
107 self.source_project_dataset_table = source_project_dataset_table
108 self.destination_cloud_storage_uris = destination_cloud_storage_uris
109 self.compression = compression
110 self.export_format = export_format
111 self.field_delimiter = field_delimiter
112 self.print_header = print_header
113 self.gcp_conn_id = gcp_conn_id
114 self.delegate_to = delegate_to
115 self.labels = labels
116 self.location = location
117 self.impersonation_chain = impersonation_chain
118
119 def execute(self, context: 'Context'):
120 self.log.info(
121 'Executing extract of %s into: %s',
122 self.source_project_dataset_table,
123 self.destination_cloud_storage_uris,
124 )
125 hook = BigQueryHook(
126 gcp_conn_id=self.gcp_conn_id,
127 delegate_to=self.delegate_to,
128 location=self.location,
129 impersonation_chain=self.impersonation_chain,
130 )
131
132 table_ref = TableReference.from_string(self.source_project_dataset_table, hook.project_id)
133
134 configuration: Dict[str, Any] = {
135 'extract': {
136 'sourceTable': table_ref.to_api_repr(),
137 'compression': self.compression,
138 'destinationUris': self.destination_cloud_storage_uris,
139 'destinationFormat': self.export_format,
140 }
141 }
142
143 if self.labels:
144 configuration['labels'] = self.labels
145
146 if self.export_format == 'CSV':
147 # Only set fieldDelimiter and printHeader fields if using CSV.
148 # Google does not like it if you set these fields for other export
149 # formats.
150 configuration['extract']['fieldDelimiter'] = self.field_delimiter
151 configuration['extract']['printHeader'] = self.print_header
152
153 hook.insert_job(configuration=configuration)
154
[end of airflow/providers/google/cloud/transfers/bigquery_to_gcs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
apache/airflow
|
51d61df5a656101046a7825be53ac61ac4f2b047
|
BigQueryToGCSOperator: Invalid dataset ID error
### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==6.3.0`
### Apache Airflow version
2.2.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
- Composer Environment version: `composer-2.0.3-airflow-2.2.3`
### What happened
When I use BigQueryToGCSOperator, I get the following error.
```
Invalid dataset ID "MY_PROJECT:MY_DATASET". Dataset IDs must be alphanumeric (plus underscores and dashes) and must be at most 1024 characters long.
```
### What you expected to happen
I guess that this is because I use a colon (`:`) as the separator between project_id and dataset_id in `source_project_dataset_table`.
I tried using a dot (`.`) as the separator and it worked.
However, the [documentation of BigQueryToGCSOperator](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/bigquery_to_gcs/index.html) states that it is possible to use a colon as the separator between project_id and dataset_id. In fact, at least until Airflow 1.10.15, it also worked with a colon separator.
In Airflow 1.10.*, the BigQuery hook separated and extracted project_id and dataset_id by colon. But `apache-airflow-providers-google==6.3.0` doesn't have this logic.
https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/contrib/hooks/bigquery_hook.py#L2186-L2247
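As an illustration of the mechanics (the helper below is only a sketch for this report, not a proposed patch): `TableReference.from_string` splits the string only on `.`, so the legacy `PROJECT:DATASET.TABLE` form leaves the colon inside the dataset ID, which the BigQuery API then rejects. Replacing the first `:` with `.` before parsing restores the old behaviour.
```python
# Sketch only: shows why the colon form fails and one possible normalization.
# `normalize_source_table` is a hypothetical helper, not part of the provider.
from google.cloud.bigquery.table import TableReference


def normalize_source_table(source: str, default_project: str) -> TableReference:
    # from_string() only splits on ".", so turn the legacy
    # "project:dataset.table" form into "project.dataset.table" first.
    return TableReference.from_string(source.replace(":", ".", 1), default_project)


ref = normalize_source_table("MY_PROJECT:MY_DATASET.table_42", "example-project")
print(ref.project, ref.dataset_id, ref.table_id)  # MY_PROJECT MY_DATASET table_42
```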
### How to reproduce
You can reproduce it with the following steps.
- Create a test DAG that executes BigQueryToGCSOperator in a Composer environment (`composer-2.0.3-airflow-2.2.3`).
- Give the `source_project_dataset_table` arg a source BigQuery table path in the following format.
- Trigger the DAG.
```
source_project_dataset_table = 'PROJECT_ID:DATASET_ID.TABLE_ID'
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
Thanks for opening your first issue here! Be sure to follow the issue template!
Assigned you @shuhoy
|
2022-03-24T14:00:34Z
|
<patch>
diff --git a/airflow/providers/google/cloud/hooks/bigquery.py b/airflow/providers/google/cloud/hooks/bigquery.py
--- a/airflow/providers/google/cloud/hooks/bigquery.py
+++ b/airflow/providers/google/cloud/hooks/bigquery.py
@@ -1905,7 +1905,7 @@ def run_copy(
def run_extract(
self,
source_project_dataset_table: str,
- destination_cloud_storage_uris: str,
+ destination_cloud_storage_uris: List[str],
compression: str = 'NONE',
export_format: str = 'CSV',
field_delimiter: str = ',',
@@ -1945,7 +1945,7 @@ def run_extract(
var_name='source_project_dataset_table',
)
- configuration = {
+ configuration: Dict[str, Any] = {
'extract': {
'sourceTable': {
'projectId': source_project,
@@ -1956,7 +1956,7 @@ def run_extract(
'destinationUris': destination_cloud_storage_uris,
'destinationFormat': export_format,
}
- } # type: Dict[str, Any]
+ }
if labels:
configuration['labels'] = labels
diff --git a/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py b/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py
--- a/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py
+++ b/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py
@@ -17,9 +17,7 @@
# under the License.
"""This module contains Google BigQuery to Google Cloud Storage operator."""
import warnings
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence, Union
-
-from google.cloud.bigquery.table import TableReference
+from typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Union
from airflow.models import BaseOperator
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
@@ -128,26 +126,12 @@ def execute(self, context: 'Context'):
location=self.location,
impersonation_chain=self.impersonation_chain,
)
-
- table_ref = TableReference.from_string(self.source_project_dataset_table, hook.project_id)
-
- configuration: Dict[str, Any] = {
- 'extract': {
- 'sourceTable': table_ref.to_api_repr(),
- 'compression': self.compression,
- 'destinationUris': self.destination_cloud_storage_uris,
- 'destinationFormat': self.export_format,
- }
- }
-
- if self.labels:
- configuration['labels'] = self.labels
-
- if self.export_format == 'CSV':
- # Only set fieldDelimiter and printHeader fields if using CSV.
- # Google does not like it if you set these fields for other export
- # formats.
- configuration['extract']['fieldDelimiter'] = self.field_delimiter
- configuration['extract']['printHeader'] = self.print_header
-
- hook.insert_job(configuration=configuration)
+ hook.run_extract(
+ source_project_dataset_table=self.source_project_dataset_table,
+ destination_cloud_storage_uris=self.destination_cloud_storage_uris,
+ compression=self.compression,
+ export_format=self.export_format,
+ field_delimiter=self.field_delimiter,
+ print_header=self.print_header,
+ labels=self.labels,
+ )
</patch>
|
[]
|
[]
| |||
pyca__cryptography-5438
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make OpenSSL 1.0.2 error (+ env var fallback)
</issue>
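The issue title above is terse, so for orientation here is a hedged sketch of what "error with an env var fallback" could look like; the helper name, the environment variable name, and where the check lives are assumptions made for illustration, not necessarily the change that was merged.
```python
# Illustrative sketch only. Assumes access to the compiled cffi "lib" object,
# which exposes the CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 and
# CRYPTOGRAPHY_IS_LIBRESSL flags defined in src/_cffi_src/openssl/cryptography.py.
# The environment variable name is an assumption for this sketch.
import os
import warnings


def _verify_openssl_version(lib):
    if lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 and not lib.CRYPTOGRAPHY_IS_LIBRESSL:
        if os.environ.get("CRYPTOGRAPHY_ALLOW_OPENSSL_102"):
            # Escape hatch: keep importing against OpenSSL 1.0.2, but warn.
            warnings.warn(
                "OpenSSL 1.0.2 is no longer supported by the OpenSSL project "
                "and support for it is deprecated; please upgrade.",
                UserWarning,
            )
        else:
            raise RuntimeError(
                "You are linking against OpenSSL 1.0.2, which is no longer "
                "supported by the OpenSSL project. Upgrade OpenSSL, or set the "
                "CRYPTOGRAPHY_ALLOW_OPENSSL_102 environment variable to "
                "temporarily opt back in."
            )
```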
<code>
[start of README.rst]
1 pyca/cryptography
2 =================
3
4 .. image:: https://img.shields.io/pypi/v/cryptography.svg
5 :target: https://pypi.org/project/cryptography/
6 :alt: Latest Version
7
8 .. image:: https://readthedocs.org/projects/cryptography/badge/?version=latest
9 :target: https://cryptography.io
10 :alt: Latest Docs
11
12 .. image:: https://travis-ci.org/pyca/cryptography.svg?branch=master
13 :target: https://travis-ci.org/pyca/cryptography
14
15 .. image:: https://github.com/pyca/cryptography/workflows/CI/badge.svg?branch=master
16 :target: https://github.com/pyca/cryptography/actions?query=workflow%3ACI+branch%3Amaster
17
18 .. image:: https://codecov.io/github/pyca/cryptography/coverage.svg?branch=master
19 :target: https://codecov.io/github/pyca/cryptography?branch=master
20
21
22 ``cryptography`` is a package which provides cryptographic recipes and
23 primitives to Python developers. Our goal is for it to be your "cryptographic
24 standard library". It supports Python 2.7, Python 3.5+, and PyPy 5.4+.
25
26 ``cryptography`` includes both high level recipes and low level interfaces to
27 common cryptographic algorithms such as symmetric ciphers, message digests, and
28 key derivation functions. For example, to encrypt something with
29 ``cryptography``'s high level symmetric encryption recipe:
30
31 .. code-block:: pycon
32
33 >>> from cryptography.fernet import Fernet
34 >>> # Put this somewhere safe!
35 >>> key = Fernet.generate_key()
36 >>> f = Fernet(key)
37 >>> token = f.encrypt(b"A really secret message. Not for prying eyes.")
38 >>> token
39 '...'
40 >>> f.decrypt(token)
41 'A really secret message. Not for prying eyes.'
42
43 You can find more information in the `documentation`_.
44
45 You can install ``cryptography`` with:
46
47 .. code-block:: console
48
49 $ pip install cryptography
50
51 For full details see `the installation documentation`_.
52
53 Discussion
54 ~~~~~~~~~~
55
56 If you run into bugs, you can file them in our `issue tracker`_.
57
58 We maintain a `cryptography-dev`_ mailing list for development discussion.
59
60 You can also join ``#cryptography-dev`` on Freenode to ask questions or get
61 involved.
62
63 Security
64 ~~~~~~~~
65
66 Need to report a security issue? Please consult our `security reporting`_
67 documentation.
68
69
70 .. _`documentation`: https://cryptography.io/
71 .. _`the installation documentation`: https://cryptography.io/en/latest/installation/
72 .. _`issue tracker`: https://github.com/pyca/cryptography/issues
73 .. _`cryptography-dev`: https://mail.python.org/mailman/listinfo/cryptography-dev
74 .. _`security reporting`: https://cryptography.io/en/latest/security/
75
[end of README.rst]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 #
8 # Cryptography documentation build configuration file, created by
9 # sphinx-quickstart on Tue Aug 6 19:19:14 2013.
10 #
11 # This file is execfile()d with the current directory set to its containing dir
12 #
13 # Note that not all possible configuration values are present in this
14 # autogenerated file.
15 #
16 # All configuration values have a default; values that are commented out
17 # serve to show the default.
18
19 from __future__ import absolute_import, division, print_function
20
21 import os
22 import sys
23
24 try:
25 import sphinx_rtd_theme
26 except ImportError:
27 sphinx_rtd_theme = None
28
29 try:
30 from sphinxcontrib import spelling
31 except ImportError:
32 spelling = None
33
34
35 # If extensions (or modules to document with autodoc) are in another directory,
36 # add these directories to sys.path here. If the directory is relative to the
37 # documentation root, use os.path.abspath to make it absolute, like shown here.
38 sys.path.insert(0, os.path.abspath("."))
39
40 # -- General configuration ----------------------------------------------------
41
42 # If your documentation needs a minimal Sphinx version, state it here.
43 # needs_sphinx = '1.0'
44
45 # Add any Sphinx extension module names here, as strings. They can be
46 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
47 extensions = [
48 "sphinx.ext.autodoc",
49 "sphinx.ext.doctest",
50 "sphinx.ext.intersphinx",
51 "sphinx.ext.viewcode",
52 "cryptography-docs",
53 ]
54
55 if spelling is not None:
56 extensions.append("sphinxcontrib.spelling")
57
58 # Add any paths that contain templates here, relative to this directory.
59 templates_path = ["_templates"]
60
61 nitpicky = True
62
63 # The suffix of source filenames.
64 source_suffix = ".rst"
65
66 # The encoding of source files.
67 # source_encoding = 'utf-8-sig'
68
69 # The master toctree document.
70 master_doc = "index"
71
72 # General information about the project.
73 project = "Cryptography"
74 copyright = "2013-2020, Individual Contributors"
75
76 # The version info for the project you're documenting, acts as replacement for
77 # |version| and |release|, also used in various other places throughout the
78 # built documents.
79 #
80
81 base_dir = os.path.join(os.path.dirname(__file__), os.pardir)
82 about = {}
83 with open(os.path.join(base_dir, "src", "cryptography", "__about__.py")) as f:
84 exec (f.read(), about)
85
86 version = release = about["__version__"]
87
88 # The language for content autogenerated by Sphinx. Refer to documentation
89 # for a list of supported languages.
90 # language = None
91
92 # There are two options for replacing |today|: either, you set today to some
93 # non-false value, then it is used:
94 # today = ''
95 # Else, today_fmt is used as the format for a strftime call.
96 # today_fmt = '%B %d, %Y'
97
98 # List of patterns, relative to source directory, that match files and
99 # directories to ignore when looking for source files.
100 exclude_patterns = ["_build"]
101
102 # The reST default role (used for this markup: `text`) to use for all documents
103 # default_role = None
104
105 # If true, '()' will be appended to :func: etc. cross-reference text.
106 # add_function_parentheses = True
107
108 # If true, the current module name will be prepended to all description
109 # unit titles (such as .. function::).
110 # add_module_names = True
111
112 # If true, sectionauthor and moduleauthor directives will be shown in the
113 # output. They are ignored by default.
114 # show_authors = False
115
116 # The name of the Pygments (syntax highlighting) style to use.
117 pygments_style = "sphinx"
118
119 # -- Options for HTML output --------------------------------------------------
120
121 # The theme to use for HTML and HTML Help pages. See the documentation for
122 # a list of builtin themes.
123
124 if sphinx_rtd_theme:
125 html_theme = "sphinx_rtd_theme"
126 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
127 else:
128 html_theme = "default"
129
130 # Add any paths that contain custom static files (such as style sheets) here,
131 # relative to this directory. They are copied after the builtin static files,
132 # so a file named "default.css" will overwrite the builtin "default.css".
133 html_static_path = ["_static"]
134
135 # Output file base name for HTML help builder.
136 htmlhelp_basename = "Cryptographydoc"
137
138
139 # -- Options for LaTeX output -------------------------------------------------
140
141 latex_elements = {}
142
143 # Grouping the document tree into LaTeX files. List of tuples
144 # (source start file, target name, title, author, documentclass [howto/manual])
145 latex_documents = [
146 (
147 "index",
148 "Cryptography.tex",
149 "Cryptography Documentation",
150 "Individual Contributors",
151 "manual",
152 ),
153 ]
154
155 # -- Options for manual page output -------------------------------------------
156
157 # One entry per manual page. List of tuples
158 # (source start file, name, description, authors, manual section).
159 man_pages = [
160 (
161 "index",
162 "cryptography",
163 "Cryptography Documentation",
164 ["Individual Contributors"],
165 1,
166 )
167 ]
168
169 # -- Options for Texinfo output -----------------------------------------------
170
171 # Grouping the document tree into Texinfo files. List of tuples
172 # (source start file, target name, title, author,
173 # dir menu entry, description, category)
174 texinfo_documents = [
175 (
176 "index",
177 "Cryptography",
178 "Cryptography Documentation",
179 "Individual Contributors",
180 "Cryptography",
181 "One line description of project.",
182 "Miscellaneous",
183 ),
184 ]
185
186 # Example configuration for intersphinx: refer to the Python standard library.
187 intersphinx_mapping = {"https://docs.python.org/3": None}
188
189 epub_theme = "epub"
190
191 # Retry requests in the linkcheck builder so that we're resilient against
192 # transient network errors.
193 linkcheck_retries = 10
194
195 linkcheck_timeout = 5
196
197 linkcheck_ignore = [
198 # Small DH key results in a TLS failure on modern OpenSSL
199 r"https://info.isl.ntt.co.jp/crypt/eng/camellia/",
200 # Inconsistent small DH params they seem incapable of fixing
201 r"https://www.secg.org/sec1-v2.pdf",
202 # 403ing from Travis
203 r"https://devblogs.microsoft.com/oldnewthing/\?p=4223",
204 # Incomplete cert chain
205 r"https://cveform.mitre.org/",
206 ]
207
[end of docs/conf.py]
[start of docs/development/custom-vectors/secp256k1/generate_secp256k1.py]
1 from __future__ import absolute_import, print_function
2
3 import hashlib
4 import os
5 from binascii import hexlify
6 from collections import defaultdict
7
8 from ecdsa import SECP256k1, SigningKey
9 from ecdsa.util import sigdecode_der, sigencode_der
10
11 from cryptography_vectors import open_vector_file
12
13 from tests.utils import load_fips_ecdsa_signing_vectors, load_vectors_from_file
14
15 HASHLIB_HASH_TYPES = {
16 "SHA-1": hashlib.sha1,
17 "SHA-224": hashlib.sha224,
18 "SHA-256": hashlib.sha256,
19 "SHA-384": hashlib.sha384,
20 "SHA-512": hashlib.sha512,
21 }
22
23
24 class TruncatedHash(object):
25 def __init__(self, hasher):
26 self.hasher = hasher
27
28 def __call__(self, data):
29 self.hasher.update(data)
30 return self
31
32 def digest(self):
33 return self.hasher.digest()[: 256 // 8]
34
35
36 def build_vectors(fips_vectors):
37 vectors = defaultdict(list)
38 for vector in fips_vectors:
39 vectors[vector["digest_algorithm"]].append(vector["message"])
40
41 for digest_algorithm, messages in vectors.items():
42 if digest_algorithm not in HASHLIB_HASH_TYPES:
43 continue
44
45 yield ""
46 yield "[K-256,{0}]".format(digest_algorithm)
47 yield ""
48
49 for message in messages:
50 # Make a hash context
51 hash_func = TruncatedHash(HASHLIB_HASH_TYPES[digest_algorithm]())
52
53 # Sign the message using warner/ecdsa
54 secret_key = SigningKey.generate(curve=SECP256k1)
55 public_key = secret_key.get_verifying_key()
56 signature = secret_key.sign(
57 message, hashfunc=hash_func, sigencode=sigencode_der
58 )
59
60 r, s = sigdecode_der(signature, None)
61
62 yield "Msg = {0}".format(hexlify(message))
63 yield "d = {0:x}".format(secret_key.privkey.secret_multiplier)
64 yield "Qx = {0:x}".format(public_key.pubkey.point.x())
65 yield "Qy = {0:x}".format(public_key.pubkey.point.y())
66 yield "R = {0:x}".format(r)
67 yield "S = {0:x}".format(s)
68 yield ""
69
70
71 def write_file(lines, dest):
72 for line in lines:
73 print(line)
74 print(line, file=dest)
75
76
77 source_path = os.path.join("asymmetric", "ECDSA", "FIPS_186-3", "SigGen.txt")
78 dest_path = os.path.join("asymmetric", "ECDSA", "SECP256K1", "SigGen.txt")
79
80 fips_vectors = load_vectors_from_file(
81 source_path, load_fips_ecdsa_signing_vectors
82 )
83
84 with open_vector_file(dest_path, "w") as dest_file:
85 write_file(build_vectors(fips_vectors), dest_file)
86
[end of docs/development/custom-vectors/secp256k1/generate_secp256k1.py]
[start of src/_cffi_src/build_openssl.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import sys
9 from distutils import dist
10 from distutils.ccompiler import get_default_compiler
11 from distutils.command.config import config
12
13 from _cffi_src.utils import (
14 build_ffi_for_binding,
15 compiler_type,
16 extra_link_args,
17 )
18
19
20 def _get_openssl_libraries(platform):
21 if os.environ.get("CRYPTOGRAPHY_SUPPRESS_LINK_FLAGS", None):
22 return []
23 # OpenSSL goes by a different library name on different operating systems.
24 if platform == "win32" and compiler_type() == "msvc":
25 windows_link_legacy_openssl = os.environ.get(
26 "CRYPTOGRAPHY_WINDOWS_LINK_LEGACY_OPENSSL", None
27 )
28 if windows_link_legacy_openssl is None:
29 # Link against the 1.1.0 names
30 # CRYPTOGRAPHY_OPENSSL_110_OR_GREATER
31 libs = ["libssl", "libcrypto"]
32 else:
33 # Link against the 1.0.2 and lower names
34 libs = ["libeay32", "ssleay32"]
35 return libs + ["advapi32", "crypt32", "gdi32", "user32", "ws2_32"]
36 else:
37 # darwin, linux, mingw all use this path
38 # In some circumstances, the order in which these libs are
39 # specified on the linker command-line is significant;
40 # libssl must come before libcrypto
41 # (https://marc.info/?l=openssl-users&m=135361825921871)
42         # -lpthread required due to usage of pthread and potential
43         # existence of a static part containing e.g. pthread_atfork
44 # (https://github.com/pyca/cryptography/issues/5084)
45 if sys.platform == "zos":
46 return ["ssl", "crypto"]
47 else:
48 return ["ssl", "crypto", "pthread"]
49
50
51 def _extra_compile_args(platform):
52 """
53 We set -Wconversion args here so that we only do Wconversion checks on the
54 code we're compiling and not on cffi itself (as passing -Wconversion in
55 CFLAGS would do). We set no error on sign conversion because some
56 function signatures in OpenSSL have changed from long -> unsigned long
57 in the past. Since that isn't a precision issue we don't care.
58 When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can
59 revisit this.
60 """
61 # make sure the compiler used supports the flags to be added
62 is_gcc = False
63 if get_default_compiler() == "unix":
64 d = dist.Distribution()
65 cmd = config(d)
66 cmd._check_compiler()
67 is_gcc = (
68 "gcc" in cmd.compiler.compiler[0]
69 or "clang" in cmd.compiler.compiler[0]
70 )
71 if is_gcc or not (
72 platform in ["win32", "hp-ux11", "sunos5"]
73 or platform.startswith("aix")
74 ):
75 return ["-Wconversion", "-Wno-error=sign-conversion"]
76 else:
77 return []
78
79
80 ffi = build_ffi_for_binding(
81 module_name="_openssl",
82 module_prefix="_cffi_src.openssl.",
83 modules=[
84 # This goes first so we can define some cryptography-wide symbols.
85 "cryptography",
86 "aes",
87 "asn1",
88 "bignum",
89 "bio",
90 "cmac",
91 "conf",
92 "crypto",
93 "ct",
94 "dh",
95 "dsa",
96 "ec",
97 "ecdh",
98 "ecdsa",
99 "engine",
100 "err",
101 "evp",
102 "fips",
103 "hmac",
104 "nid",
105 "objects",
106 "ocsp",
107 "opensslv",
108 "osrandom_engine",
109 "pem",
110 "pkcs12",
111 "rand",
112 "rsa",
113 "ssl",
114 "x509",
115 "x509name",
116 "x509v3",
117 "x509_vfy",
118 "pkcs7",
119 "callbacks",
120 ],
121 libraries=_get_openssl_libraries(sys.platform),
122 # These args are passed here so that we only do Wconversion checks on the
123 # code we're compiling and not on cffi itself (as passing -Wconversion in
124     # CFLAGS would do). We set no error on sign conversion because some
125 # function signatures in OpenSSL have changed from long -> unsigned long
126 # in the past. Since that isn't a precision issue we don't care.
127 # When we drop support for CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 we can
128 # revisit this.
129 extra_compile_args=_extra_compile_args(sys.platform),
130 extra_link_args=extra_link_args(compiler_type()),
131 )
132
[end of src/_cffi_src/build_openssl.py]
[start of src/_cffi_src/openssl/cryptography.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 INCLUDES = """
8 /* define our OpenSSL API compatibility level to 1.0.1. Any symbols older than
9 that will raise an error during compilation. We can raise this number again
10 after we drop 1.0.2 support in the distant future. */
11 #define OPENSSL_API_COMPAT 0x10001000L
12
13 #include <openssl/opensslv.h>
14
15
16 #if defined(LIBRESSL_VERSION_NUMBER)
17 #define CRYPTOGRAPHY_IS_LIBRESSL 1
18 #else
19 #define CRYPTOGRAPHY_IS_LIBRESSL 0
20 #endif
21
22 /*
23 LibreSSL removed e_os2.h from the public headers so we'll only include it
24 if we're using vanilla OpenSSL.
25 */
26 #if !CRYPTOGRAPHY_IS_LIBRESSL
27 #include <openssl/e_os2.h>
28 #endif
29 #if defined(_WIN32)
30 #define WIN32_LEAN_AND_MEAN
31 #include <windows.h>
32 #include <Wincrypt.h>
33 #include <Winsock2.h>
34 #endif
35
36 #define CRYPTOGRAPHY_OPENSSL_102L_OR_GREATER \
37 (OPENSSL_VERSION_NUMBER >= 0x100020cf && !CRYPTOGRAPHY_IS_LIBRESSL)
38 #define CRYPTOGRAPHY_OPENSSL_102U_OR_GREATER \
39 (OPENSSL_VERSION_NUMBER >= 0x1000215fL && !CRYPTOGRAPHY_IS_LIBRESSL)
40 #define CRYPTOGRAPHY_OPENSSL_110_OR_GREATER \
41 (OPENSSL_VERSION_NUMBER >= 0x10100000 && !CRYPTOGRAPHY_IS_LIBRESSL)
42 #define CRYPTOGRAPHY_OPENSSL_110F_OR_GREATER \
43 (OPENSSL_VERSION_NUMBER >= 0x1010006f && !CRYPTOGRAPHY_IS_LIBRESSL)
44
45 #define CRYPTOGRAPHY_OPENSSL_LESS_THAN_102I \
46 (OPENSSL_VERSION_NUMBER < 0x1000209f || CRYPTOGRAPHY_IS_LIBRESSL)
47 #define CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 \
48 (OPENSSL_VERSION_NUMBER < 0x10100000 || CRYPTOGRAPHY_IS_LIBRESSL)
49 #define CRYPTOGRAPHY_OPENSSL_LESS_THAN_110J \
50 (OPENSSL_VERSION_NUMBER < 0x101000af || CRYPTOGRAPHY_IS_LIBRESSL)
51 #define CRYPTOGRAPHY_OPENSSL_LESS_THAN_111 \
52 (OPENSSL_VERSION_NUMBER < 0x10101000 || CRYPTOGRAPHY_IS_LIBRESSL)
53 #define CRYPTOGRAPHY_OPENSSL_LESS_THAN_111B \
54 (OPENSSL_VERSION_NUMBER < 0x10101020 || CRYPTOGRAPHY_IS_LIBRESSL)
55 #define CRYPTOGRAPHY_OPENSSL_LESS_THAN_111D \
56 (OPENSSL_VERSION_NUMBER < 0x10101040 || CRYPTOGRAPHY_IS_LIBRESSL)
57 #if (CRYPTOGRAPHY_OPENSSL_LESS_THAN_111D && !defined(OPENSSL_NO_ENGINE)) || \
58 defined(USE_OSRANDOM_RNG_FOR_TESTING)
59 #define CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE 1
60 #else
61 #define CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE 0
62 #endif
63 """
64
65 TYPES = """
66 static const int CRYPTOGRAPHY_OPENSSL_102L_OR_GREATER;
67 static const int CRYPTOGRAPHY_OPENSSL_102U_OR_GREATER;
68 static const int CRYPTOGRAPHY_OPENSSL_110_OR_GREATER;
69 static const int CRYPTOGRAPHY_OPENSSL_110F_OR_GREATER;
70
71 static const int CRYPTOGRAPHY_OPENSSL_LESS_THAN_102I;
72 static const int CRYPTOGRAPHY_OPENSSL_LESS_THAN_110;
73 static const int CRYPTOGRAPHY_OPENSSL_LESS_THAN_111;
74 static const int CRYPTOGRAPHY_OPENSSL_LESS_THAN_111B;
75 static const int CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE;
76
77 static const int CRYPTOGRAPHY_IS_LIBRESSL;
78 """
79
80 FUNCTIONS = """
81 """
82
83 CUSTOMIZATIONS = """
84 """
85
[end of src/_cffi_src/openssl/cryptography.py]
[start of src/cryptography/hazmat/backends/openssl/rsa.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 from cryptography import utils
8 from cryptography.exceptions import (
9 InvalidSignature,
10 UnsupportedAlgorithm,
11 _Reasons,
12 )
13 from cryptography.hazmat.backends.openssl.utils import (
14 _calculate_digest_and_algorithm,
15 _check_not_prehashed,
16 _warn_sign_verify_deprecated,
17 )
18 from cryptography.hazmat.primitives import hashes
19 from cryptography.hazmat.primitives.asymmetric import (
20 AsymmetricSignatureContext,
21 AsymmetricVerificationContext,
22 rsa,
23 )
24 from cryptography.hazmat.primitives.asymmetric.padding import (
25 AsymmetricPadding,
26 MGF1,
27 OAEP,
28 PKCS1v15,
29 PSS,
30 calculate_max_pss_salt_length,
31 )
32 from cryptography.hazmat.primitives.asymmetric.rsa import (
33 RSAPrivateKeyWithSerialization,
34 RSAPublicKeyWithSerialization,
35 )
36
37
38 def _get_rsa_pss_salt_length(pss, key, hash_algorithm):
39 salt = pss._salt_length
40
41 if salt is MGF1.MAX_LENGTH or salt is PSS.MAX_LENGTH:
42 return calculate_max_pss_salt_length(key, hash_algorithm)
43 else:
44 return salt
45
46
47 def _enc_dec_rsa(backend, key, data, padding):
48 if not isinstance(padding, AsymmetricPadding):
49 raise TypeError("Padding must be an instance of AsymmetricPadding.")
50
51 if isinstance(padding, PKCS1v15):
52 padding_enum = backend._lib.RSA_PKCS1_PADDING
53 elif isinstance(padding, OAEP):
54 padding_enum = backend._lib.RSA_PKCS1_OAEP_PADDING
55
56 if not isinstance(padding._mgf, MGF1):
57 raise UnsupportedAlgorithm(
58 "Only MGF1 is supported by this backend.",
59 _Reasons.UNSUPPORTED_MGF,
60 )
61
62 if not backend.rsa_padding_supported(padding):
63 raise UnsupportedAlgorithm(
64 "This combination of padding and hash algorithm is not "
65 "supported by this backend.",
66 _Reasons.UNSUPPORTED_PADDING,
67 )
68
69 else:
70 raise UnsupportedAlgorithm(
71 "{} is not supported by this backend.".format(padding.name),
72 _Reasons.UNSUPPORTED_PADDING,
73 )
74
75 return _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum, padding)
76
77
78 def _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum, padding):
79 if isinstance(key, _RSAPublicKey):
80 init = backend._lib.EVP_PKEY_encrypt_init
81 crypt = backend._lib.EVP_PKEY_encrypt
82 else:
83 init = backend._lib.EVP_PKEY_decrypt_init
84 crypt = backend._lib.EVP_PKEY_decrypt
85
86 pkey_ctx = backend._lib.EVP_PKEY_CTX_new(key._evp_pkey, backend._ffi.NULL)
87 backend.openssl_assert(pkey_ctx != backend._ffi.NULL)
88 pkey_ctx = backend._ffi.gc(pkey_ctx, backend._lib.EVP_PKEY_CTX_free)
89 res = init(pkey_ctx)
90 backend.openssl_assert(res == 1)
91 res = backend._lib.EVP_PKEY_CTX_set_rsa_padding(pkey_ctx, padding_enum)
92 backend.openssl_assert(res > 0)
93 buf_size = backend._lib.EVP_PKEY_size(key._evp_pkey)
94 backend.openssl_assert(buf_size > 0)
95 if isinstance(padding, OAEP) and backend._lib.Cryptography_HAS_RSA_OAEP_MD:
96 mgf1_md = backend._evp_md_non_null_from_algorithm(
97 padding._mgf._algorithm
98 )
99 res = backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(pkey_ctx, mgf1_md)
100 backend.openssl_assert(res > 0)
101 oaep_md = backend._evp_md_non_null_from_algorithm(padding._algorithm)
102 res = backend._lib.EVP_PKEY_CTX_set_rsa_oaep_md(pkey_ctx, oaep_md)
103 backend.openssl_assert(res > 0)
104
105 if (
106 isinstance(padding, OAEP)
107 and padding._label is not None
108 and len(padding._label) > 0
109 ):
110 # set0_rsa_oaep_label takes ownership of the char * so we need to
111 # copy it into some new memory
112 labelptr = backend._lib.OPENSSL_malloc(len(padding._label))
113 backend.openssl_assert(labelptr != backend._ffi.NULL)
114 backend._ffi.memmove(labelptr, padding._label, len(padding._label))
115 res = backend._lib.EVP_PKEY_CTX_set0_rsa_oaep_label(
116 pkey_ctx, labelptr, len(padding._label)
117 )
118 backend.openssl_assert(res == 1)
119
120 outlen = backend._ffi.new("size_t *", buf_size)
121 buf = backend._ffi.new("unsigned char[]", buf_size)
122 res = crypt(pkey_ctx, buf, outlen, data, len(data))
123 if res <= 0:
124 _handle_rsa_enc_dec_error(backend, key)
125
126 return backend._ffi.buffer(buf)[: outlen[0]]
127
128
129 def _handle_rsa_enc_dec_error(backend, key):
130 errors = backend._consume_errors_with_text()
131 if isinstance(key, _RSAPublicKey):
132 raise ValueError(
133 "Data too long for key size. Encrypt less data or use a "
134 "larger key size.",
135 errors,
136 )
137 else:
138 raise ValueError("Decryption failed.", errors)
139
140
141 def _rsa_sig_determine_padding(backend, key, padding, algorithm):
142 if not isinstance(padding, AsymmetricPadding):
143 raise TypeError("Expected provider of AsymmetricPadding.")
144
145 pkey_size = backend._lib.EVP_PKEY_size(key._evp_pkey)
146 backend.openssl_assert(pkey_size > 0)
147
148 if isinstance(padding, PKCS1v15):
149 padding_enum = backend._lib.RSA_PKCS1_PADDING
150 elif isinstance(padding, PSS):
151 if not isinstance(padding._mgf, MGF1):
152 raise UnsupportedAlgorithm(
153 "Only MGF1 is supported by this backend.",
154 _Reasons.UNSUPPORTED_MGF,
155 )
156
157 # Size of key in bytes - 2 is the maximum
158 # PSS signature length (salt length is checked later)
159 if pkey_size - algorithm.digest_size - 2 < 0:
160 raise ValueError(
161 "Digest too large for key size. Use a larger "
162 "key or different digest."
163 )
164
165 padding_enum = backend._lib.RSA_PKCS1_PSS_PADDING
166 else:
167 raise UnsupportedAlgorithm(
168 "{} is not supported by this backend.".format(padding.name),
169 _Reasons.UNSUPPORTED_PADDING,
170 )
171
172 return padding_enum
173
174
175 def _rsa_sig_setup(backend, padding, algorithm, key, data, init_func):
176 padding_enum = _rsa_sig_determine_padding(backend, key, padding, algorithm)
177 evp_md = backend._evp_md_non_null_from_algorithm(algorithm)
178 pkey_ctx = backend._lib.EVP_PKEY_CTX_new(key._evp_pkey, backend._ffi.NULL)
179 backend.openssl_assert(pkey_ctx != backend._ffi.NULL)
180 pkey_ctx = backend._ffi.gc(pkey_ctx, backend._lib.EVP_PKEY_CTX_free)
181 res = init_func(pkey_ctx)
182 backend.openssl_assert(res == 1)
183 res = backend._lib.EVP_PKEY_CTX_set_signature_md(pkey_ctx, evp_md)
184 if res == 0:
185 backend._consume_errors()
186 raise UnsupportedAlgorithm(
187 "{} is not supported by this backend for RSA signing.".format(
188 algorithm.name
189 ),
190 _Reasons.UNSUPPORTED_HASH,
191 )
192 res = backend._lib.EVP_PKEY_CTX_set_rsa_padding(pkey_ctx, padding_enum)
193 backend.openssl_assert(res > 0)
194 if isinstance(padding, PSS):
195 res = backend._lib.EVP_PKEY_CTX_set_rsa_pss_saltlen(
196 pkey_ctx, _get_rsa_pss_salt_length(padding, key, algorithm)
197 )
198 backend.openssl_assert(res > 0)
199
200 mgf1_md = backend._evp_md_non_null_from_algorithm(
201 padding._mgf._algorithm
202 )
203 res = backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(pkey_ctx, mgf1_md)
204 backend.openssl_assert(res > 0)
205
206 return pkey_ctx
207
208
209 def _rsa_sig_sign(backend, padding, algorithm, private_key, data):
210 pkey_ctx = _rsa_sig_setup(
211 backend,
212 padding,
213 algorithm,
214 private_key,
215 data,
216 backend._lib.EVP_PKEY_sign_init,
217 )
218 buflen = backend._ffi.new("size_t *")
219 res = backend._lib.EVP_PKEY_sign(
220 pkey_ctx, backend._ffi.NULL, buflen, data, len(data)
221 )
222 backend.openssl_assert(res == 1)
223 buf = backend._ffi.new("unsigned char[]", buflen[0])
224 res = backend._lib.EVP_PKEY_sign(pkey_ctx, buf, buflen, data, len(data))
225 if res != 1:
226 errors = backend._consume_errors_with_text()
227 raise ValueError(
228 "Digest or salt length too long for key size. Use a larger key "
229 "or shorter salt length if you are specifying a PSS salt",
230 errors,
231 )
232
233 return backend._ffi.buffer(buf)[:]
234
235
236 def _rsa_sig_verify(backend, padding, algorithm, public_key, signature, data):
237 pkey_ctx = _rsa_sig_setup(
238 backend,
239 padding,
240 algorithm,
241 public_key,
242 data,
243 backend._lib.EVP_PKEY_verify_init,
244 )
245 res = backend._lib.EVP_PKEY_verify(
246 pkey_ctx, signature, len(signature), data, len(data)
247 )
248 # The previous call can return negative numbers in the event of an
249 # error. This is not a signature failure but we need to fail if it
250 # occurs.
251 backend.openssl_assert(res >= 0)
252 if res == 0:
253 backend._consume_errors()
254 raise InvalidSignature
255
256
257 @utils.register_interface(AsymmetricSignatureContext)
258 class _RSASignatureContext(object):
259 def __init__(self, backend, private_key, padding, algorithm):
260 self._backend = backend
261 self._private_key = private_key
262
263 # We now call _rsa_sig_determine_padding in _rsa_sig_setup. However
264 # we need to make a pointless call to it here so we maintain the
265 # API of erroring on init with this context if the values are invalid.
266 _rsa_sig_determine_padding(backend, private_key, padding, algorithm)
267 self._padding = padding
268 self._algorithm = algorithm
269 self._hash_ctx = hashes.Hash(self._algorithm, self._backend)
270
271 def update(self, data):
272 self._hash_ctx.update(data)
273
274 def finalize(self):
275 return _rsa_sig_sign(
276 self._backend,
277 self._padding,
278 self._algorithm,
279 self._private_key,
280 self._hash_ctx.finalize(),
281 )
282
283
284 @utils.register_interface(AsymmetricVerificationContext)
285 class _RSAVerificationContext(object):
286 def __init__(self, backend, public_key, signature, padding, algorithm):
287 self._backend = backend
288 self._public_key = public_key
289 self._signature = signature
290 self._padding = padding
291 # We now call _rsa_sig_determine_padding in _rsa_sig_setup. However
292 # we need to make a pointless call to it here so we maintain the
293 # API of erroring on init with this context if the values are invalid.
294 _rsa_sig_determine_padding(backend, public_key, padding, algorithm)
295
296 padding = padding
297 self._algorithm = algorithm
298 self._hash_ctx = hashes.Hash(self._algorithm, self._backend)
299
300 def update(self, data):
301 self._hash_ctx.update(data)
302
303 def verify(self):
304 return _rsa_sig_verify(
305 self._backend,
306 self._padding,
307 self._algorithm,
308 self._public_key,
309 self._signature,
310 self._hash_ctx.finalize(),
311 )
312
313
314 @utils.register_interface(RSAPrivateKeyWithSerialization)
315 class _RSAPrivateKey(object):
316 def __init__(self, backend, rsa_cdata, evp_pkey):
317 res = backend._lib.RSA_check_key(rsa_cdata)
318 if res != 1:
319 errors = backend._consume_errors_with_text()
320 raise ValueError("Invalid private key", errors)
321
322 self._backend = backend
323 self._rsa_cdata = rsa_cdata
324 self._evp_pkey = evp_pkey
325
326 n = self._backend._ffi.new("BIGNUM **")
327 self._backend._lib.RSA_get0_key(
328 self._rsa_cdata,
329 n,
330 self._backend._ffi.NULL,
331 self._backend._ffi.NULL,
332 )
333 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
334 self._key_size = self._backend._lib.BN_num_bits(n[0])
335
336 key_size = utils.read_only_property("_key_size")
337
338 def signer(self, padding, algorithm):
339 _warn_sign_verify_deprecated()
340 _check_not_prehashed(algorithm)
341 return _RSASignatureContext(self._backend, self, padding, algorithm)
342
343 def decrypt(self, ciphertext, padding):
344 key_size_bytes = (self.key_size + 7) // 8
345 if key_size_bytes != len(ciphertext):
346 raise ValueError("Ciphertext length must be equal to key size.")
347
348 return _enc_dec_rsa(self._backend, self, ciphertext, padding)
349
350 def public_key(self):
351 ctx = self._backend._lib.RSAPublicKey_dup(self._rsa_cdata)
352 self._backend.openssl_assert(ctx != self._backend._ffi.NULL)
353 ctx = self._backend._ffi.gc(ctx, self._backend._lib.RSA_free)
354 res = self._backend._lib.RSA_blinding_on(ctx, self._backend._ffi.NULL)
355 self._backend.openssl_assert(res == 1)
356 evp_pkey = self._backend._rsa_cdata_to_evp_pkey(ctx)
357 return _RSAPublicKey(self._backend, ctx, evp_pkey)
358
359 def private_numbers(self):
360 n = self._backend._ffi.new("BIGNUM **")
361 e = self._backend._ffi.new("BIGNUM **")
362 d = self._backend._ffi.new("BIGNUM **")
363 p = self._backend._ffi.new("BIGNUM **")
364 q = self._backend._ffi.new("BIGNUM **")
365 dmp1 = self._backend._ffi.new("BIGNUM **")
366 dmq1 = self._backend._ffi.new("BIGNUM **")
367 iqmp = self._backend._ffi.new("BIGNUM **")
368 self._backend._lib.RSA_get0_key(self._rsa_cdata, n, e, d)
369 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
370 self._backend.openssl_assert(e[0] != self._backend._ffi.NULL)
371 self._backend.openssl_assert(d[0] != self._backend._ffi.NULL)
372 self._backend._lib.RSA_get0_factors(self._rsa_cdata, p, q)
373 self._backend.openssl_assert(p[0] != self._backend._ffi.NULL)
374 self._backend.openssl_assert(q[0] != self._backend._ffi.NULL)
375 self._backend._lib.RSA_get0_crt_params(
376 self._rsa_cdata, dmp1, dmq1, iqmp
377 )
378 self._backend.openssl_assert(dmp1[0] != self._backend._ffi.NULL)
379 self._backend.openssl_assert(dmq1[0] != self._backend._ffi.NULL)
380 self._backend.openssl_assert(iqmp[0] != self._backend._ffi.NULL)
381 return rsa.RSAPrivateNumbers(
382 p=self._backend._bn_to_int(p[0]),
383 q=self._backend._bn_to_int(q[0]),
384 d=self._backend._bn_to_int(d[0]),
385 dmp1=self._backend._bn_to_int(dmp1[0]),
386 dmq1=self._backend._bn_to_int(dmq1[0]),
387 iqmp=self._backend._bn_to_int(iqmp[0]),
388 public_numbers=rsa.RSAPublicNumbers(
389 e=self._backend._bn_to_int(e[0]),
390 n=self._backend._bn_to_int(n[0]),
391 ),
392 )
393
394 def private_bytes(self, encoding, format, encryption_algorithm):
395 return self._backend._private_key_bytes(
396 encoding,
397 format,
398 encryption_algorithm,
399 self,
400 self._evp_pkey,
401 self._rsa_cdata,
402 )
403
404 def sign(self, data, padding, algorithm):
405 data, algorithm = _calculate_digest_and_algorithm(
406 self._backend, data, algorithm
407 )
408 return _rsa_sig_sign(self._backend, padding, algorithm, self, data)
409
410
411 @utils.register_interface(RSAPublicKeyWithSerialization)
412 class _RSAPublicKey(object):
413 def __init__(self, backend, rsa_cdata, evp_pkey):
414 self._backend = backend
415 self._rsa_cdata = rsa_cdata
416 self._evp_pkey = evp_pkey
417
418 n = self._backend._ffi.new("BIGNUM **")
419 self._backend._lib.RSA_get0_key(
420 self._rsa_cdata,
421 n,
422 self._backend._ffi.NULL,
423 self._backend._ffi.NULL,
424 )
425 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
426 self._key_size = self._backend._lib.BN_num_bits(n[0])
427
428 key_size = utils.read_only_property("_key_size")
429
430 def verifier(self, signature, padding, algorithm):
431 _warn_sign_verify_deprecated()
432 utils._check_bytes("signature", signature)
433
434 _check_not_prehashed(algorithm)
435 return _RSAVerificationContext(
436 self._backend, self, signature, padding, algorithm
437 )
438
439 def encrypt(self, plaintext, padding):
440 return _enc_dec_rsa(self._backend, self, plaintext, padding)
441
442 def public_numbers(self):
443 n = self._backend._ffi.new("BIGNUM **")
444 e = self._backend._ffi.new("BIGNUM **")
445 self._backend._lib.RSA_get0_key(
446 self._rsa_cdata, n, e, self._backend._ffi.NULL
447 )
448 self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
449 self._backend.openssl_assert(e[0] != self._backend._ffi.NULL)
450 return rsa.RSAPublicNumbers(
451 e=self._backend._bn_to_int(e[0]),
452 n=self._backend._bn_to_int(n[0]),
453 )
454
455 def public_bytes(self, encoding, format):
456 return self._backend._public_key_bytes(
457 encoding, format, self, self._evp_pkey, self._rsa_cdata
458 )
459
460 def verify(self, signature, data, padding, algorithm):
461 data, algorithm = _calculate_digest_and_algorithm(
462 self._backend, data, algorithm
463 )
464 return _rsa_sig_verify(
465 self._backend, padding, algorithm, self, signature, data
466 )
467
[end of src/cryptography/hazmat/backends/openssl/rsa.py]
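For context, here is a minimal round trip through the sign and verify paths implemented by `_RSAPrivateKey` / `_RSAPublicKey` above, using only the public `hazmat` API (requires the `cryptography` package to be installed; the key size and padding choice are arbitrary illustrations):

```python
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a throwaway key; on the OpenSSL backend this is backed by _RSAPrivateKey.
private_key = rsa.generate_private_key(
    public_exponent=65537, key_size=2048, backend=default_backend()
)

message = b"data to sign"
# _RSAPrivateKey.sign -> _rsa_sig_sign
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
# _RSAPublicKey.verify -> _rsa_sig_verify; raises InvalidSignature on mismatch
private_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```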
[start of src/cryptography/hazmat/bindings/openssl/binding.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import collections
8 import threading
9 import types
10 import warnings
11
12 import cryptography
13 from cryptography import utils
14 from cryptography.exceptions import InternalError
15 from cryptography.hazmat.bindings._openssl import ffi, lib
16 from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES
17
18 _OpenSSLErrorWithText = collections.namedtuple(
19 "_OpenSSLErrorWithText", ["code", "lib", "func", "reason", "reason_text"]
20 )
21
22
23 class _OpenSSLError(object):
24 def __init__(self, code, lib, func, reason):
25 self._code = code
26 self._lib = lib
27 self._func = func
28 self._reason = reason
29
30 def _lib_reason_match(self, lib, reason):
31 return lib == self.lib and reason == self.reason
32
33 code = utils.read_only_property("_code")
34 lib = utils.read_only_property("_lib")
35 func = utils.read_only_property("_func")
36 reason = utils.read_only_property("_reason")
37
38
39 def _consume_errors(lib):
40 errors = []
41 while True:
42 code = lib.ERR_get_error()
43 if code == 0:
44 break
45
46 err_lib = lib.ERR_GET_LIB(code)
47 err_func = lib.ERR_GET_FUNC(code)
48 err_reason = lib.ERR_GET_REASON(code)
49
50 errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))
51
52 return errors
53
54
55 def _errors_with_text(errors):
56 errors_with_text = []
57 for err in errors:
58 buf = ffi.new("char[]", 256)
59 lib.ERR_error_string_n(err.code, buf, len(buf))
60 err_text_reason = ffi.string(buf)
61
62 errors_with_text.append(
63 _OpenSSLErrorWithText(
64 err.code, err.lib, err.func, err.reason, err_text_reason
65 )
66 )
67
68 return errors_with_text
69
70
71 def _consume_errors_with_text(lib):
72 return _errors_with_text(_consume_errors(lib))
73
74
75 def _openssl_assert(lib, ok, errors=None):
76 if not ok:
77 if errors is None:
78 errors = _consume_errors(lib)
79 errors_with_text = _errors_with_text(errors)
80
81 raise InternalError(
82 "Unknown OpenSSL error. This error is commonly encountered when "
83 "another library is not cleaning up the OpenSSL error stack. If "
84 "you are using cryptography with another library that uses "
85 "OpenSSL try disabling it before reporting a bug. Otherwise "
86 "please file an issue at https://github.com/pyca/cryptography/"
87 "issues with information on how to reproduce "
88 "this. ({0!r})".format(errors_with_text),
89 errors_with_text,
90 )
91
92
93 def build_conditional_library(lib, conditional_names):
94 conditional_lib = types.ModuleType("lib")
95 conditional_lib._original_lib = lib
96 excluded_names = set()
97 for condition, names_cb in conditional_names.items():
98 if not getattr(lib, condition):
99 excluded_names.update(names_cb())
100
101 for attr in dir(lib):
102 if attr not in excluded_names:
103 setattr(conditional_lib, attr, getattr(lib, attr))
104
105 return conditional_lib
106
107
108 class Binding(object):
109 """
110 OpenSSL API wrapper.
111 """
112
113 lib = None
114 ffi = ffi
115 _lib_loaded = False
116 _init_lock = threading.Lock()
117 _lock_init_lock = threading.Lock()
118
119 def __init__(self):
120 self._ensure_ffi_initialized()
121
122 @classmethod
123 def _register_osrandom_engine(cls):
124 # Clear any errors extant in the queue before we start. In many
125 # scenarios other things may be interacting with OpenSSL in the same
126 # process space and it has proven untenable to assume that they will
127 # reliably clear the error queue. Once we clear it here we will
128 # error on any subsequent unexpected item in the stack.
129 cls.lib.ERR_clear_error()
130 if cls.lib.CRYPTOGRAPHY_NEEDS_OSRANDOM_ENGINE:
131 result = cls.lib.Cryptography_add_osrandom_engine()
132 _openssl_assert(cls.lib, result in (1, 2))
133
134 @classmethod
135 def _ensure_ffi_initialized(cls):
136 with cls._init_lock:
137 if not cls._lib_loaded:
138 cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)
139 cls._lib_loaded = True
140 # initialize the SSL library
141 cls.lib.SSL_library_init()
142 # adds all ciphers/digests for EVP
143 cls.lib.OpenSSL_add_all_algorithms()
144 # loads error strings for libcrypto and libssl functions
145 cls.lib.SSL_load_error_strings()
146 cls._register_osrandom_engine()
147
148 @classmethod
149 def init_static_locks(cls):
150 with cls._lock_init_lock:
151 cls._ensure_ffi_initialized()
152 # Use Python's implementation if available, importing _ssl triggers
153 # the setup for this.
154 __import__("_ssl")
155
156 if (
157 not cls.lib.Cryptography_HAS_LOCKING_CALLBACKS
158 or cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL
159 ):
160 return
161
162 # If nothing else has setup a locking callback already, we set up
163 # our own
164 res = lib.Cryptography_setup_ssl_threads()
165 _openssl_assert(cls.lib, res == 1)
166
167
168 def _verify_openssl_version(lib):
169 if (
170 lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_110
171 and not lib.CRYPTOGRAPHY_IS_LIBRESSL
172 ):
173 warnings.warn(
174 "OpenSSL version 1.0.2 is no longer supported by the OpenSSL "
175 "project, please upgrade. The next version of cryptography will "
176 "drop support for it.",
177 utils.CryptographyDeprecationWarning,
178 )
179
180
181 def _verify_package_version(version):
182 # Occasionally we run into situations where the version of the Python
183 # package does not match the version of the shared object that is loaded.
184 # This may occur in environments where multiple versions of cryptography
185 # are installed and available in the python path. To avoid errors cropping
186 # up later this code checks that the currently imported package and the
187 # shared object that were loaded have the same version and raise an
188 # ImportError if they do not
189 so_package_version = ffi.string(lib.CRYPTOGRAPHY_PACKAGE_VERSION)
190 if version.encode("ascii") != so_package_version:
191 raise ImportError(
192 "The version of cryptography does not match the loaded "
193 "shared object. This can happen if you have multiple copies of "
194 "cryptography installed in your Python path. Please try creating "
195 "a new virtual environment to resolve this issue. "
196 "Loaded python version: {}, shared object version: {}".format(
197 version, so_package_version
198 )
199 )
200
201
202 _verify_package_version(cryptography.__version__)
203
204 # OpenSSL is not thread safe until the locks are initialized. We call this
205 # method in module scope so that it executes with the import lock. On
206 # Pythons < 3.4 this import lock is a global lock, which can prevent a race
207 # condition registering the OpenSSL locks. On Python 3.4+ the import lock
208 # is per module so this approach will not work.
209 Binding.init_static_locks()
210
211 _verify_openssl_version(Binding.lib)
212
[end of src/cryptography/hazmat/bindings/openssl/binding.py]
[start of vectors/setup.py]
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 from __future__ import absolute_import, division, print_function
8
9 import os
10
11 from setuptools import find_packages, setup
12
13
14 base_dir = os.path.dirname(__file__)
15
16 about = {}
17 with open(os.path.join(base_dir, "cryptography_vectors", "__about__.py")) as f:
18 exec (f.read(), about)
19
20
21 setup(
22 name=about["__title__"],
23 version=about["__version__"],
24 description=about["__summary__"],
25 license=about["__license__"],
26 url=about["__uri__"],
27 author=about["__author__"],
28 author_email=about["__email__"],
29 packages=find_packages(),
30 zip_safe=False,
31 include_package_data=True,
32 )
33
[end of vectors/setup.py]
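The `exec(f.read(), about)` idiom in the setup script above simply evaluates `__about__.py` into a dictionary so the metadata can be passed to `setup()`. A tiny self-contained illustration of that pattern (the metadata values below are made up):

```python
about = {}
source = '__title__ = "cryptography_vectors"\n__version__ = "0.0.dev1"\n'
exec(source, about)  # stand-in for exec(f.read(), about) on the real __about__.py
print(about["__title__"], about["__version__"])
```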
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pyca/cryptography
|
1fd7cacdb8675242bb8438cf427b9417dcea6968
|
Make OpenSSL 1.0.2 error (+ env var fallback)
|
2020-08-27T23:21:56Z
|
<patch>
diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py
--- a/src/cryptography/hazmat/bindings/openssl/binding.py
+++ b/src/cryptography/hazmat/bindings/openssl/binding.py
@@ -5,6 +5,7 @@
from __future__ import absolute_import, division, print_function
import collections
+import os
import threading
import types
import warnings
@@ -170,12 +171,19 @@ def _verify_openssl_version(lib):
lib.CRYPTOGRAPHY_OPENSSL_LESS_THAN_110
and not lib.CRYPTOGRAPHY_IS_LIBRESSL
):
- warnings.warn(
- "OpenSSL version 1.0.2 is no longer supported by the OpenSSL "
- "project, please upgrade. The next version of cryptography will "
- "drop support for it.",
- utils.CryptographyDeprecationWarning,
- )
+ if os.environ.get("CRYPTOGRAPHY_ALLOW_OPENSSL_102"):
+ warnings.warn(
+ "OpenSSL version 1.0.2 is no longer supported by the OpenSSL "
+ "project, please upgrade. The next version of cryptography "
+ "will completely remove support for it.",
+ utils.CryptographyDeprecationWarning,
+ )
+ else:
+ raise RuntimeError(
+ "You are linking against OpenSSL 1.0.2, which is no longer "
+ "supported by the OpenSSL project. You need to upgrade to a "
+ "newer version of OpenSSL."
+ )
def _verify_package_version(version):
</patch>
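A condensed sketch of the gating logic this patch introduces; `verify_openssl_version` below is a stand-in for the library's `_verify_openssl_version`, with the OpenSSL feature flags passed in as plain booleans so the snippet runs on its own:

```python
import os
import warnings


def verify_openssl_version(is_less_than_110, is_libressl, env=os.environ):
    # Mirrors the patched behaviour: hard error on OpenSSL 1.0.2 unless the
    # CRYPTOGRAPHY_ALLOW_OPENSSL_102 escape hatch is set, in which case only a
    # deprecation-style warning is emitted.
    if is_less_than_110 and not is_libressl:
        if env.get("CRYPTOGRAPHY_ALLOW_OPENSSL_102"):
            warnings.warn("OpenSSL 1.0.2 is no longer supported, please upgrade.")
        else:
            raise RuntimeError(
                "You are linking against OpenSSL 1.0.2, which is unsupported."
            )


# Opting back in via the environment variable downgrades the error to a warning.
verify_openssl_version(True, False, env={"CRYPTOGRAPHY_ALLOW_OPENSSL_102": "1"})
```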
|
[]
|
[]
| ||||
pandas-dev__pandas-39118
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DataFrame.apply doesn't handle numpy ops or DataFrame properties
```
s = pd.Series([1, 2])
df = pd.DataFrame([1, 2])
print(s.agg("sqrt"))
print(df.agg("sqrt"))
print(s.apply("sqrt"))
print(df.apply("sqrt"))
print(s.agg("size"))
print(df.agg("size"))
print(s.apply("size"))
print(df.apply("size"))
```
In each of the two blocks, the first three lines all give the expected output and the last one raises.
</issue>
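A minimal sketch of the fallback behaviour the report is asking for; `apply_string` is a hypothetical helper, not pandas internals, and simply tries an attribute of the object before falling back to a NumPy function of the same name:

```python
import numpy as np
import pandas as pd


def apply_string(obj, name):
    attr = getattr(obj, name, None)
    if callable(attr):
        return attr()              # e.g. "sum"  -> obj.sum()
    if attr is not None:
        return attr                # e.g. "size" -> obj.size (plain property)
    return getattr(np, name)(obj)  # e.g. "sqrt" -> np.sqrt(obj)


df = pd.DataFrame([1, 2])
print(apply_string(df, "sqrt"))  # elementwise square root
print(apply_string(df, "size"))  # 2
```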
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 Package Index (PyPI)](https://pypi.org/project/pandas) and on [Conda](https://docs.conda.io/en/latest/).
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy - Adds support for large, multi-dimensional arrays, matrices and high-level mathematical functions to operate on these arrays](https://www.numpy.org)
104 - [python-dateutil - Provides powerful extensions to the standard datetime module](https://labix.org/python-dateutil)
105 - [pytz - Brings the Olson tz database into Python which allows accurate and cross platform timezone calculations](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need [Cython](https://cython.org/) in addition to the normal
111 dependencies above. Cython can be installed from PyPI:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at [AQR](https://www.aqr.com/) (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on GitHub in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
[start of pandas/core/shared_docs.py]
1 from typing import Dict
2
3 _shared_docs: Dict[str, str] = {}
4
5 _shared_docs[
6 "aggregate"
7 ] = """
8 Aggregate using one or more operations over the specified axis.
9
10 Parameters
11 ----------
12 func : function, str, list or dict
13 Function to use for aggregating the data. If a function, must either
14 work when passed a {klass} or when passed to {klass}.apply.
15
16 Accepted combinations are:
17
18 - function
19 - string function name
20 - list of functions and/or function names, e.g. ``[np.sum, 'mean']``
21 - dict of axis labels -> functions, function names or list of such.
22 {axis}
23 *args
24 Positional arguments to pass to `func`.
25 **kwargs
26 Keyword arguments to pass to `func`.
27
28 Returns
29 -------
30 scalar, Series or DataFrame
31
32 The return can be:
33
34 * scalar : when Series.agg is called with single function
35 * Series : when DataFrame.agg is called with a single function
36 * DataFrame : when DataFrame.agg is called with several functions
37
38 Return scalar, Series or DataFrame.
39 {see_also}
40 Notes
41 -----
42 `agg` is an alias for `aggregate`. Use the alias.
43
44 A passed user-defined-function will be passed a Series for evaluation.
45 {examples}"""
46
47 _shared_docs[
48 "compare"
49 ] = """
50 Compare to another {klass} and show the differences.
51
52 .. versionadded:: 1.1.0
53
54 Parameters
55 ----------
56 other : {klass}
57 Object to compare with.
58
59 align_axis : {{0 or 'index', 1 or 'columns'}}, default 1
60 Determine which axis to align the comparison on.
61
62 * 0, or 'index' : Resulting differences are stacked vertically
63 with rows drawn alternately from self and other.
64 * 1, or 'columns' : Resulting differences are aligned horizontally
65 with columns drawn alternately from self and other.
66
67 keep_shape : bool, default False
68 If true, all rows and columns are kept.
69 Otherwise, only the ones with different values are kept.
70
71 keep_equal : bool, default False
72 If true, the result keeps values that are equal.
73 Otherwise, equal values are shown as NaNs.
74 """
75
76 _shared_docs[
77 "groupby"
78 ] = """
79 Group %(klass)s using a mapper or by a Series of columns.
80
81 A groupby operation involves some combination of splitting the
82 object, applying a function, and combining the results. This can be
83 used to group large amounts of data and compute operations on these
84 groups.
85
86 Parameters
87 ----------
88 by : mapping, function, label, or list of labels
89 Used to determine the groups for the groupby.
90 If ``by`` is a function, it's called on each value of the object's
91 index. If a dict or Series is passed, the Series or dict VALUES
92 will be used to determine the groups (the Series' values are first
93 aligned; see ``.align()`` method). If an ndarray is passed, the
94 values are used as-is to determine the groups. A label or list of
95 labels may be passed to group by the columns in ``self``. Notice
96 that a tuple is interpreted as a (single) key.
97 axis : {0 or 'index', 1 or 'columns'}, default 0
98 Split along rows (0) or columns (1).
99 level : int, level name, or sequence of such, default None
100 If the axis is a MultiIndex (hierarchical), group by a particular
101 level or levels.
102 as_index : bool, default True
103 For aggregated output, return object with group labels as the
104 index. Only relevant for DataFrame input. as_index=False is
105 effectively "SQL-style" grouped output.
106 sort : bool, default True
107 Sort group keys. Get better performance by turning this off.
108 Note this does not influence the order of observations within each
109 group. Groupby preserves the order of rows within each group.
110 group_keys : bool, default True
111 When calling ``groupby().apply()``, add group keys to index to identify pieces.
112 squeeze : bool, default False
113 Reduce the dimensionality of the return type if possible,
114 otherwise return a consistent type.
115
116 .. deprecated:: 1.1.0
117
118 observed : bool, default False
119 This only applies if any of the groupers are Categoricals.
120 If True: only show observed values for categorical groupers.
121 If False: show all values for categorical groupers.
122 dropna : bool, default True
123 If True, and if group keys contain NA values, NA values together
124 with row/column will be dropped.
125 If False, NA values will also be treated as the key in groups
126
127 .. versionadded:: 1.1.0
128
129 Returns
130 -------
131 %(klass)sGroupBy
132 Returns a groupby object that contains information about the groups.
133
134 See Also
135 --------
136 resample : Convenience method for frequency conversion and resampling
137 of time series.
138
139 Notes
140 -----
141 See the `user guide
142 <https://pandas.pydata.org/pandas-docs/stable/groupby.html>`_ for more.
143 """
144
145 _shared_docs[
146 "melt"
147 ] = """
148 Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
149
150 This function is useful to massage a DataFrame into a format where one
151 or more columns are identifier variables (`id_vars`), while all other
152 columns, considered measured variables (`value_vars`), are "unpivoted" to
153 the row axis, leaving just two non-identifier columns, 'variable' and
154 'value'.
155
156 Parameters
157 ----------
158 id_vars : tuple, list, or ndarray, optional
159 Column(s) to use as identifier variables.
160 value_vars : tuple, list, or ndarray, optional
161 Column(s) to unpivot. If not specified, uses all columns that
162 are not set as `id_vars`.
163 var_name : scalar
164 Name to use for the 'variable' column. If None it uses
165 ``frame.columns.name`` or 'variable'.
166 value_name : scalar, default 'value'
167 Name to use for the 'value' column.
168 col_level : int or str, optional
169 If columns are a MultiIndex then use this level to melt.
170 ignore_index : bool, default True
171 If True, original index is ignored. If False, the original index is retained.
172 Index labels will be repeated as necessary.
173
174 .. versionadded:: 1.1.0
175
176 Returns
177 -------
178 DataFrame
179 Unpivoted DataFrame.
180
181 See Also
182 --------
183 %(other)s : Identical method.
184 pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
185 DataFrame.pivot : Return reshaped DataFrame organized
186 by given index / column values.
187 DataFrame.explode : Explode a DataFrame from list-like
188 columns to long format.
189
190 Examples
191 --------
192 >>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
193 ... 'B': {0: 1, 1: 3, 2: 5},
194 ... 'C': {0: 2, 1: 4, 2: 6}})
195 >>> df
196 A B C
197 0 a 1 2
198 1 b 3 4
199 2 c 5 6
200
201 >>> %(caller)sid_vars=['A'], value_vars=['B'])
202 A variable value
203 0 a B 1
204 1 b B 3
205 2 c B 5
206
207 >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])
208 A variable value
209 0 a B 1
210 1 b B 3
211 2 c B 5
212 3 a C 2
213 4 b C 4
214 5 c C 6
215
216 The names of 'variable' and 'value' columns can be customized:
217
218 >>> %(caller)sid_vars=['A'], value_vars=['B'],
219 ... var_name='myVarname', value_name='myValname')
220 A myVarname myValname
221 0 a B 1
222 1 b B 3
223 2 c B 5
224
225 Original index values can be kept around:
226
227 >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
228 A variable value
229 0 a B 1
230 1 b B 3
231 2 c B 5
232 0 a C 2
233 1 b C 4
234 2 c C 6
235
236 If you have multi-index columns:
237
238 >>> df.columns = [list('ABC'), list('DEF')]
239 >>> df
240 A B C
241 D E F
242 0 a 1 2
243 1 b 3 4
244 2 c 5 6
245
246 >>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])
247 A variable value
248 0 a B 1
249 1 b B 3
250 2 c B 5
251
252 >>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])
253 (A, D) variable_0 variable_1 value
254 0 a B E 1
255 1 b B E 3
256 2 c B E 5
257 """
258
259 _shared_docs[
260 "transform"
261 ] = """
262 Call ``func`` on self producing a {klass} with transformed values.
263
264 Produced {klass} will have same axis length as self.
265
266 Parameters
267 ----------
268 func : function, str, list-like or dict-like
269 Function to use for transforming the data. If a function, must either
270 work when passed a {klass} or when passed to {klass}.apply. If func
271 is both list-like and dict-like, dict-like behavior takes precedence.
272
273 Accepted combinations are:
274
275 - function
276 - string function name
277 - list-like of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
278 - dict-like of axis labels -> functions, function names or list-like of such.
279 {axis}
280 *args
281 Positional arguments to pass to `func`.
282 **kwargs
283 Keyword arguments to pass to `func`.
284
285 Returns
286 -------
287 {klass}
288 A {klass} that must have the same length as self.
289
290 Raises
291 ------
292 ValueError : If the returned {klass} has a different length than self.
293
294 See Also
295 --------
296 {klass}.agg : Only perform aggregating type operations.
297 {klass}.apply : Invoke function on a {klass}.
298
299 Examples
300 --------
301 >>> df = pd.DataFrame({{'A': range(3), 'B': range(1, 4)}})
302 >>> df
303 A B
304 0 0 1
305 1 1 2
306 2 2 3
307 >>> df.transform(lambda x: x + 1)
308 A B
309 0 1 2
310 1 2 3
311 2 3 4
312
313 Even though the resulting {klass} must have the same length as the
314 input {klass}, it is possible to provide several input functions:
315
316 >>> s = pd.Series(range(3))
317 >>> s
318 0 0
319 1 1
320 2 2
321 dtype: int64
322 >>> s.transform([np.sqrt, np.exp])
323 sqrt exp
324 0 0.000000 1.000000
325 1 1.000000 2.718282
326 2 1.414214 7.389056
327
328 You can call transform on a GroupBy object:
329
330 >>> df = pd.DataFrame({{
331 ... "Date": [
332 ... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05",
333 ... "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"],
334 ... "Data": [5, 8, 6, 1, 50, 100, 60, 120],
335 ... }})
336 >>> df
337 Date Data
338 0 2015-05-08 5
339 1 2015-05-07 8
340 2 2015-05-06 6
341 3 2015-05-05 1
342 4 2015-05-08 50
343 5 2015-05-07 100
344 6 2015-05-06 60
345 7 2015-05-05 120
346 >>> df.groupby('Date')['Data'].transform('sum')
347 0 55
348 1 108
349 2 66
350 3 121
351 4 55
352 5 108
353 6 66
354 7 121
355 Name: Data, dtype: int64
356
357 >>> df = pd.DataFrame({{
358 ... "c": [1, 1, 1, 2, 2, 2, 2],
359 ... "type": ["m", "n", "o", "m", "m", "n", "n"]
360 ... }})
361 >>> df
362 c type
363 0 1 m
364 1 1 n
365 2 1 o
366 3 2 m
367 4 2 m
368 5 2 n
369 6 2 n
370 >>> df['size'] = df.groupby('c')['type'].transform(len)
371 >>> df
372 c type size
373 0 1 m 3
374 1 1 n 3
375 2 1 o 3
376 3 2 m 4
377 4 2 m 4
378 5 2 n 4
379 6 2 n 4
380 """
381
382 _shared_docs[
383 "storage_options"
384 ] = """storage_options : dict, optional
385 Extra options that make sense for a particular storage connection, e.g.
386 host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
387 are forwarded to ``urllib`` as header options. For other URLs (e.g.
388 starting with "s3://", and "gcs://") the key-value pairs are forwarded to
389 ``fsspec``. Please see ``fsspec`` and ``urllib`` for more details."""
390
391 _shared_docs[
392 "replace"
393 ] = """
394 Replace values given in `to_replace` with `value`.
395
396 Values of the {klass} are replaced with other values dynamically.
397 {replace_iloc}
398
399 Parameters
400 ----------
401 to_replace : str, regex, list, dict, Series, int, float, or None
402 How to find the values that will be replaced.
403
404 * numeric, str or regex:
405
406 - numeric: numeric values equal to `to_replace` will be
407 replaced with `value`
408 - str: string exactly matching `to_replace` will be replaced
409 with `value`
410 - regex: regexs matching `to_replace` will be replaced with
411 `value`
412
413 * list of str, regex, or numeric:
414
415 - First, if `to_replace` and `value` are both lists, they
416 **must** be the same length.
417 - Second, if ``regex=True`` then all of the strings in **both**
418 lists will be interpreted as regexs otherwise they will match
419 directly. This doesn't matter much for `value` since there
420 are only a few possible substitution regexes you can use.
421 - str, regex and numeric rules apply as above.
422
423 * dict:
424
425 - Dicts can be used to specify different replacement values
426 for different existing values. For example,
427 ``{{'a': 'b', 'y': 'z'}}`` replaces the value 'a' with 'b' and
428 'y' with 'z'. To use a dict in this way the `value`
429 parameter should be `None`.
430 - For a DataFrame a dict can specify that different values
431 should be replaced in different columns. For example,
432 ``{{'a': 1, 'b': 'z'}}`` looks for the value 1 in column 'a'
433 and the value 'z' in column 'b' and replaces these values
434 with whatever is specified in `value`. The `value` parameter
435 should not be ``None`` in this case. You can treat this as a
436 special case of passing two lists except that you are
437 specifying the column to search in.
438 - For a DataFrame nested dictionaries, e.g.,
439 ``{{'a': {{'b': np.nan}}}}``, are read as follows: look in column
440 'a' for the value 'b' and replace it with NaN. The `value`
441 parameter should be ``None`` to use a nested dict in this
442 way. You can nest regular expressions as well. Note that
443 column names (the top-level dictionary keys in a nested
444 dictionary) **cannot** be regular expressions.
445
446 * None:
447
448 - This means that the `regex` argument must be a string,
449 compiled regular expression, or list, dict, ndarray or
450 Series of such elements. If `value` is also ``None`` then
451 this **must** be a nested dictionary or Series.
452
453 See the examples section for examples of each of these.
454 value : scalar, dict, list, str, regex, default None
455 Value to replace any values matching `to_replace` with.
456 For a DataFrame a dict of values can be used to specify which
457 value to use for each column (columns not in the dict will not be
458 filled). Regular expressions, strings and lists or dicts of such
459 objects are also allowed.
460 {inplace}
461 limit : int, default None
462 Maximum size gap to forward or backward fill.
463 regex : bool or same types as `to_replace`, default False
464 Whether to interpret `to_replace` and/or `value` as regular
465 expressions. If this is ``True`` then `to_replace` *must* be a
466 string. Alternatively, this could be a regular expression or a
467 list, dict, or array of regular expressions in which case
468 `to_replace` must be ``None``.
469 method : {{'pad', 'ffill', 'bfill', `None`}}
470 The method to use when for replacement, when `to_replace` is a
471 scalar, list or tuple and `value` is ``None``.
472
473 .. versionchanged:: 0.23.0
474 Added to DataFrame.
475
476 Returns
477 -------
478 {klass}
479 Object after replacement.
480
481 Raises
482 ------
483 AssertionError
484 * If `regex` is not a ``bool`` and `to_replace` is not
485 ``None``.
486
487 TypeError
488 * If `to_replace` is not a scalar, array-like, ``dict``, or ``None``
489 * If `to_replace` is a ``dict`` and `value` is not a ``list``,
490 ``dict``, ``ndarray``, or ``Series``
491 * If `to_replace` is ``None`` and `regex` is not compilable
492 into a regular expression or is a list, dict, ndarray, or
493 Series.
494 * When replacing multiple ``bool`` or ``datetime64`` objects and
495 the arguments to `to_replace` does not match the type of the
496 value being replaced
497
498 ValueError
499 * If a ``list`` or an ``ndarray`` is passed to `to_replace` and
500 `value` but they are not the same length.
501
502 See Also
503 --------
504 {klass}.fillna : Fill NA values.
505 {klass}.where : Replace values based on boolean condition.
506 Series.str.replace : Simple string replacement.
507
508 Notes
509 -----
510 * Regex substitution is performed under the hood with ``re.sub``. The
511 rules for substitution for ``re.sub`` are the same.
512 * Regular expressions will only substitute on strings, meaning you
513 cannot provide, for example, a regular expression matching floating
514 point numbers and expect the columns in your frame that have a
515 numeric dtype to be matched. However, if those floating point
516 numbers *are* strings, then you can do this.
517 * This method has *a lot* of options. You are encouraged to experiment
518 and play with this method to gain intuition about how it works.
519 * When dict is used as the `to_replace` value, it is like
520 key(s) in the dict are the to_replace part and
521 value(s) in the dict are the value parameter.
522
523 Examples
524 --------
525
526 **Scalar `to_replace` and `value`**
527
528 >>> s = pd.Series([0, 1, 2, 3, 4])
529 >>> s.replace(0, 5)
530 0 5
531 1 1
532 2 2
533 3 3
534 4 4
535 dtype: int64
536
537 >>> df = pd.DataFrame({{'A': [0, 1, 2, 3, 4],
538 ... 'B': [5, 6, 7, 8, 9],
539 ... 'C': ['a', 'b', 'c', 'd', 'e']}})
540 >>> df.replace(0, 5)
541 A B C
542 0 5 5 a
543 1 1 6 b
544 2 2 7 c
545 3 3 8 d
546 4 4 9 e
547
548 **List-like `to_replace`**
549
550 >>> df.replace([0, 1, 2, 3], 4)
551 A B C
552 0 4 5 a
553 1 4 6 b
554 2 4 7 c
555 3 4 8 d
556 4 4 9 e
557
558 >>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
559 A B C
560 0 4 5 a
561 1 3 6 b
562 2 2 7 c
563 3 1 8 d
564 4 4 9 e
565
566 >>> s.replace([1, 2], method='bfill')
567 0 0
568 1 3
569 2 3
570 3 3
571 4 4
572 dtype: int64
573
574 **dict-like `to_replace`**
575
576 >>> df.replace({{0: 10, 1: 100}})
577 A B C
578 0 10 5 a
579 1 100 6 b
580 2 2 7 c
581 3 3 8 d
582 4 4 9 e
583
584 >>> df.replace({{'A': 0, 'B': 5}}, 100)
585 A B C
586 0 100 100 a
587 1 1 6 b
588 2 2 7 c
589 3 3 8 d
590 4 4 9 e
591
592 >>> df.replace({{'A': {{0: 100, 4: 400}}}})
593 A B C
594 0 100 5 a
595 1 1 6 b
596 2 2 7 c
597 3 3 8 d
598 4 400 9 e
599
600 **Regular expression `to_replace`**
601
602 >>> df = pd.DataFrame({{'A': ['bat', 'foo', 'bait'],
603 ... 'B': ['abc', 'bar', 'xyz']}})
604 >>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
605 A B
606 0 new abc
607 1 foo new
608 2 bait xyz
609
610 >>> df.replace({{'A': r'^ba.$'}}, {{'A': 'new'}}, regex=True)
611 A B
612 0 new abc
613 1 foo bar
614 2 bait xyz
615
616 >>> df.replace(regex=r'^ba.$', value='new')
617 A B
618 0 new abc
619 1 foo new
620 2 bait xyz
621
622 >>> df.replace(regex={{r'^ba.$': 'new', 'foo': 'xyz'}})
623 A B
624 0 new abc
625 1 xyz new
626 2 bait xyz
627
628 >>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
629 A B
630 0 new abc
631 1 new new
632 2 bait xyz
633
634 Compare the behavior of ``s.replace({{'a': None}})`` and
635 ``s.replace('a', None)`` to understand the peculiarities
636 of the `to_replace` parameter:
637
638 >>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
639
640 When one uses a dict as the `to_replace` value, it is like the
641 value(s) in the dict are equal to the `value` parameter.
642 ``s.replace({{'a': None}})`` is equivalent to
643 ``s.replace(to_replace={{'a': None}}, value=None, method=None)``:
644
645 >>> s.replace({{'a': None}})
646 0 10
647 1 None
648 2 None
649 3 b
650 4 None
651 dtype: object
652
653 When ``value=None`` and `to_replace` is a scalar, list or
654 tuple, `replace` uses the method parameter (default 'pad') to do the
655 replacement. So this is why the 'a' values are being replaced by 10
656 in rows 1 and 2 and 'b' in row 4 in this case.
657 The command ``s.replace('a', None)`` is actually equivalent to
658 ``s.replace(to_replace='a', value=None, method='pad')``:
659
660 >>> s.replace('a', None)
661 0 10
662 1 10
663 2 10
664 3 b
665 4 b
666 dtype: object
667 """
668
[end of pandas/core/shared_docs.py]
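The entries above are plain string templates; pandas fills the placeholders when attaching the docstrings to `Series` / `DataFrame` methods. A rough illustration of the substitution (note that `pandas.core.shared_docs` is a private module, so the import path and placeholder names may change between versions):

```python
from pandas.core.shared_docs import _shared_docs

series_agg_doc = _shared_docs["aggregate"].format(
    klass="Series",
    axis="axis : {0 or 'index'}\n        Unused, exists for DataFrame compatibility.",
    see_also="",
    examples="",
)
# First line of the rendered docstring: "Aggregate using one or more operations ..."
print(series_agg_doc.strip().splitlines()[0])
```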
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pandas-dev/pandas
|
6918b545a58ac70bd99be2faa71e40ea5fb8e61b
|
DataFrame.apply doesn't handle numpy ops or DataFrame properties
```
s = pd.Series([1, 2])
df = pd.DataFrame([1, 2])
print(s.agg("sqrt"))
print(df.agg("sqrt"))
print(s.apply("sqrt"))
print(df.apply("sqrt"))
print(s.agg("size"))
print(df.agg("size"))
print(s.apply("size"))
print(df.apply("size"))
```
In each of the two blocks, the first three lines all give the expected output and the last one raises.
|
2021-01-11T23:35:03Z
|
<patch>
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -54,6 +54,8 @@ Other enhancements
- Add support for dict-like names in :class:`MultiIndex.set_names` and :class:`MultiIndex.rename` (:issue:`20421`)
- :func:`pandas.read_excel` can now auto detect .xlsb files (:issue:`35416`)
- :meth:`.Rolling.sum`, :meth:`.Expanding.sum`, :meth:`.Rolling.mean`, :meth:`.Expanding.mean`, :meth:`.Rolling.median`, :meth:`.Expanding.median`, :meth:`.Rolling.max`, :meth:`.Expanding.max`, :meth:`.Rolling.min`, and :meth:`.Expanding.min` now support ``Numba`` execution with the ``engine`` keyword (:issue:`38895`)
+- :meth:`DataFrame.apply` can now accept NumPy unary operators as strings, e.g. ``df.apply("sqrt")``, which was already the case for :meth:`Series.apply` (:issue:`39116`)
+- :meth:`DataFrame.apply` can now accept non-callable DataFrame properties as strings, e.g. ``df.apply("size")``, which was already the case for :meth:`Series.apply` (:issue:`39116`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -151,9 +151,11 @@ def agg(self) -> Tuple[Optional[FrameOrSeriesUnion], Optional[bool]]:
if _axis is None:
_axis = getattr(obj, "axis", 0)
- if isinstance(arg, str):
- return obj._try_aggregate_string_function(arg, *args, **kwargs), None
- elif is_dict_like(arg):
+ result = self.maybe_apply_str()
+ if result is not None:
+ return result, None
+
+ if is_dict_like(arg):
arg = cast(AggFuncTypeDict, arg)
return agg_dict_like(obj, arg, _axis), True
elif is_list_like(arg):
@@ -171,6 +173,28 @@ def agg(self) -> Tuple[Optional[FrameOrSeriesUnion], Optional[bool]]:
# caller can react
return result, True
+ def maybe_apply_str(self) -> Optional[FrameOrSeriesUnion]:
+ """
+ Compute apply in case of a string.
+
+ Returns
+ -------
+ result: Series, DataFrame, or None
+ Result when self.f is a string, None otherwise.
+ """
+ f = self.f
+ if not isinstance(f, str):
+ return None
+ # Support for `frame.transform('method')`
+ # Some methods (shift, etc.) require the axis argument, others
+ # don't, so inspect and insert if necessary.
+ func = getattr(self.obj, f, None)
+ if callable(func):
+ sig = inspect.getfullargspec(func)
+ if "axis" in sig.args:
+ self.kwds["axis"] = self.axis
+ return self.obj._try_aggregate_string_function(f, *self.args, **self.kwds)
+
class FrameApply(Apply):
obj: DataFrame
@@ -236,15 +260,9 @@ def apply(self) -> FrameOrSeriesUnion:
return self.apply_empty_result()
# string dispatch
- if isinstance(self.f, str):
- # Support for `frame.transform('method')`
- # Some methods (shift, etc.) require the axis argument, others
- # don't, so inspect and insert if necessary.
- func = getattr(self.obj, self.f)
- sig = inspect.getfullargspec(func)
- if "axis" in sig.args:
- self.kwds["axis"] = self.axis
- return func(*self.args, **self.kwds)
+ result = self.maybe_apply_str()
+ if result is not None:
+ return result
# ufunc
elif isinstance(self.f, np.ufunc):
@@ -581,8 +599,9 @@ def apply(self) -> FrameOrSeriesUnion:
return obj.aggregate(func, *args, **kwds)
# if we are a string, try to dispatch
- if isinstance(func, str):
- return obj._try_aggregate_string_function(func, *args, **kwds)
+ result = self.maybe_apply_str()
+ if result is not None:
+ return result
return self.apply_standard()
</patch>
|
[]
|
[]
| ||||
mesonbuild__meson-1914
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
VS backend doesn't respect warning_level/--warnlevel
All generated VS projects use `/W1` regardless of warning level.
VisualStudioCPPCompiler correctly returns `'/W4'` for `get_warn_args('3')`, so it's clearly the backend that's failing to use it.
I'm currently trying to trace the problem in order to create a pull request for it.
</issue>
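A toy stand-in for `VisualStudioCPPCompiler.get_warn_args`, only to make the expected mapping concrete; the issue confirms the `'3' -> /W4` entry, while the other levels below are assumptions:

```python
WARN_ARGS = {'1': ['/W2'], '2': ['/W3'], '3': ['/W4']}  # only '3' is confirmed by the report


def get_warn_args(warning_level):
    return WARN_ARGS[warning_level]


# The VS backend should emit this per target instead of a hard-coded /W1:
warning_level = '3'  # e.g. from --warnlevel / the warning_level core option
print(get_warn_args(warning_level))  # ['/W4']
```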
<code>
[start of README.md]
1 <p align="center">
2 <img src="http://mesonbuild.com/assets/images/meson_logo.png">
3 </p>
4 Meson® is a project to create the best possible next-generation
5 build system.
6
7 #### Status
8
9 [](https://pypi.python.org/pypi/meson)
10 [](https://travis-ci.org/mesonbuild/meson)
11 [](https://ci.appveyor.com/project/jpakkane/meson)
12 [](https://codecov.io/gh/mesonbuild/meson/branch/master)
13
14 #### Dependencies
15
16 - [Python](http://python.org) (version 3.4 or newer)
17 - [Ninja](https://ninja-build.org) (version 1.5 or newer)
18
19 #### Installing from source
20
21 You can run Meson directly from a revision control checkout or an
22 extracted tarball. If you wish you can install it locally with the
23 standard Python distutils command `python3 setup.py install <your
24 options here>`.
25
26 Meson is also available from
27 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed
28 with `pip3 install meson` (this does not require a source checkout,
29 pip will download the package automatically). The exact command to
30 type to install with pip can vary between systems, be sure to use the
31 Python 3 version of pip.
32
33 #### Running
34
35 Meson requires that you have a source directory and a build directory
36 and that these two are different. In your source root must exist a file
37 called 'meson.build'. To generate the build system run this command:
38
39 `meson <source directory> <build directory>`
40
41 Depending on how you obtained Meson the command might also be called
42 `meson.py` instead of plain `meson`. In the rest of this document we
43 are going to use the latter form.
44
45 You can omit either of the two directories, and Meson will substitute
46 the current directory and autodetect what you mean. This allows you to
47 do things like this:
48
49 `cd source_root; mkdir builddir; cd builddir; meson ..`
50
51 or
52
53 `cd source_root; mkdir builddir; meson builddir`
54
55 To compile, cd into your build directory and type `ninja`. To run unit
56 tests, type `ninja test`.
57
58 Install is the same but it can take an extra argument:
59
60 `DESTDIR=/destdir/path ninja install`
61
62 `DESTDIR` can be omitted. If you are installing to system directories,
63 you may need to run this command with sudo.
64
65
66 #### Contributing
67
68 We love code contributions. See the contributing.txt file for
69 details.
70
71
72 #### IRC
73
74 The irc channel for Meson is `#mesonbuild` over at Freenode.
75
76
77 #### Further info
78
79 More information about the Meson build system can be found at the
80 [project's home page](http://mesonbuild.com).
81
82 Meson is a registered trademark of Jussi Pakkanen
83
[end of README.md]
[start of mesonbuild/backend/backends.py]
1 # Copyright 2012-2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os, pickle, re
16 from .. import build
17 from .. import dependencies
18 from .. import mesonlib
19 from .. import mlog
20 from .. import compilers
21 import json
22 import subprocess
23 from ..mesonlib import MesonException, get_meson_script
24 from ..mesonlib import get_compiler_for_source, classify_unity_sources
25 from ..compilers import CompilerArgs
26 from collections import OrderedDict
27
28 class CleanTrees:
29 '''
30 Directories outputted by custom targets that have to be manually cleaned
31 because on Linux `ninja clean` only deletes empty directories.
32 '''
33 def __init__(self, build_dir, trees):
34 self.build_dir = build_dir
35 self.trees = trees
36
37 class InstallData:
38 def __init__(self, source_dir, build_dir, prefix, strip_bin, mesonintrospect):
39 self.source_dir = source_dir
40 self.build_dir = build_dir
41 self.prefix = prefix
42 self.strip_bin = strip_bin
43 self.targets = []
44 self.headers = []
45 self.man = []
46 self.data = []
47 self.po_package_name = ''
48 self.po = []
49 self.install_scripts = []
50 self.install_subdirs = []
51 self.mesonintrospect = mesonintrospect
52
53 class ExecutableSerialisation:
54 def __init__(self, name, fname, cmd_args, env, is_cross, exe_wrapper,
55 workdir, extra_paths, capture):
56 self.name = name
57 self.fname = fname
58 self.cmd_args = cmd_args
59 self.env = env
60 self.is_cross = is_cross
61 self.exe_runner = exe_wrapper
62 self.workdir = workdir
63 self.extra_paths = extra_paths
64 self.capture = capture
65
66 class TestSerialisation:
67 def __init__(self, name, suite, fname, is_cross, exe_wrapper, is_parallel, cmd_args, env,
68 should_fail, timeout, workdir, extra_paths):
69 self.name = name
70 self.suite = suite
71 self.fname = fname
72 self.is_cross = is_cross
73 self.exe_runner = exe_wrapper
74 self.is_parallel = is_parallel
75 self.cmd_args = cmd_args
76 self.env = env
77 self.should_fail = should_fail
78 self.timeout = timeout
79 self.workdir = workdir
80 self.extra_paths = extra_paths
81
82 class OptionProxy:
83 def __init__(self, name, value):
84 self.name = name
85 self.value = value
86
87 class OptionOverrideProxy:
88 '''Mimic an option list but transparently override
89 selected option values.'''
90 def __init__(self, overrides, options):
91 self.overrides = overrides
92 self.options = options
93
94 def __getitem__(self, option_name):
95 base_opt = self.options[option_name]
96 if option_name in self.overrides:
97 return OptionProxy(base_opt.name, base_opt.validate_value(self.overrides[option_name]))
98 return base_opt
99
100 # This class contains the basic functionality that is needed by all backends.
101 # Feel free to move stuff in and out of it as you see fit.
102 class Backend:
103 def __init__(self, build):
104 self.build = build
105 self.environment = build.environment
106 self.processed_targets = {}
107 self.build_to_src = os.path.relpath(self.environment.get_source_dir(),
108 self.environment.get_build_dir())
109 for t in self.build.targets:
110 priv_dirname = self.get_target_private_dir_abs(t)
111 os.makedirs(priv_dirname, exist_ok=True)
112
113 def get_target_filename(self, t):
114 if isinstance(t, build.CustomTarget):
115 if len(t.get_outputs()) != 1:
116 mlog.warning('custom_target {!r} has more than one output! '
117 'Using the first one.'.format(t.name))
118 filename = t.get_outputs()[0]
119 else:
120 assert(isinstance(t, build.BuildTarget))
121 filename = t.get_filename()
122 return os.path.join(self.get_target_dir(t), filename)
123
124 def get_target_filename_abs(self, target):
125 return os.path.join(self.environment.get_build_dir(), self.get_target_filename(target))
126
127 def get_option_for_target(self, option_name, target):
128 if option_name in target.option_overrides:
129 override = target.option_overrides[option_name]
130 return self.environment.coredata.validate_option_value(option_name, override)
131 return self.environment.coredata.get_builtin_option(option_name)
132
133 def get_target_filename_for_linking(self, target):
134 # On some platforms (msvc for instance), the file that is used for
135 # dynamic linking is not the same as the dynamic library itself. This
136 # file is called an import library, and we want to link against that.
137 # On all other platforms, we link to the library directly.
138 if isinstance(target, build.SharedLibrary):
139 link_lib = target.get_import_filename() or target.get_filename()
140 return os.path.join(self.get_target_dir(target), link_lib)
141 elif isinstance(target, build.StaticLibrary):
142 return os.path.join(self.get_target_dir(target), target.get_filename())
143 raise AssertionError('BUG: Tried to link to something that\'s not a library')
144
145 def get_target_dir(self, target):
146 if self.environment.coredata.get_builtin_option('layout') == 'mirror':
147 dirname = target.get_subdir()
148 else:
149 dirname = 'meson-out'
150 return dirname
151
152 def get_target_source_dir(self, target):
153 dirname = os.path.join(self.build_to_src, self.get_target_dir(target))
154 return dirname
155
156 def get_target_private_dir(self, target):
157 dirname = os.path.join(self.get_target_dir(target), target.get_basename() + target.type_suffix())
158 return dirname
159
160 def get_target_private_dir_abs(self, target):
161 dirname = os.path.join(self.environment.get_build_dir(), self.get_target_private_dir(target))
162 return dirname
163
164 def get_target_generated_dir(self, target, gensrc, src):
165 """
166 Takes a BuildTarget, a generator source (CustomTarget or GeneratedList),
167 and a generated source filename.
168 Returns the full path of the generated source relative to the build root
169 """
170 # CustomTarget generators output to the build dir of the CustomTarget
171 if isinstance(gensrc, build.CustomTarget):
172 return os.path.join(self.get_target_dir(gensrc), src)
173 # GeneratedList generators output to the private build directory of the
174 # target that the GeneratedList is used in
175 return os.path.join(self.get_target_private_dir(target), src)
176
177 def get_unity_source_filename(self, target, suffix):
178 return target.name + '-unity.' + suffix
179
180 def generate_unity_files(self, target, unity_src):
181 abs_files = []
182 result = []
183 compsrcs = classify_unity_sources(target.compilers.values(), unity_src)
184
185 def init_language_file(suffix):
186 unity_src_name = self.get_unity_source_filename(target, suffix)
187 unity_src_subdir = self.get_target_private_dir_abs(target)
188 outfilename = os.path.join(unity_src_subdir,
189 unity_src_name)
190 outfileabs = os.path.join(self.environment.get_build_dir(),
191 outfilename)
192 outfileabs_tmp = outfileabs + '.tmp'
193 abs_files.append(outfileabs)
194 outfileabs_tmp_dir = os.path.dirname(outfileabs_tmp)
195 if not os.path.exists(outfileabs_tmp_dir):
196 os.makedirs(outfileabs_tmp_dir)
197 result.append(mesonlib.File(True, unity_src_subdir, unity_src_name))
198 return open(outfileabs_tmp, 'w')
199
200 # For each language, generate a unity source file and return the list
201 for comp, srcs in compsrcs.items():
202 with init_language_file(comp.get_default_suffix()) as ofile:
203 for src in srcs:
204 ofile.write('#include<%s>\n' % src)
205 [mesonlib.replace_if_different(x, x + '.tmp') for x in abs_files]
206 return result
207
208 def relpath(self, todir, fromdir):
209 return os.path.relpath(os.path.join('dummyprefixdir', todir),
210 os.path.join('dummyprefixdir', fromdir))
211
212 def flatten_object_list(self, target, proj_dir_to_build_root=''):
213 obj_list = []
214 for obj in target.get_objects():
215 if isinstance(obj, str):
216 o = os.path.join(proj_dir_to_build_root,
217 self.build_to_src, target.get_subdir(), obj)
218 obj_list.append(o)
219 elif isinstance(obj, mesonlib.File):
220 obj_list.append(obj.rel_to_builddir(self.build_to_src))
221 elif isinstance(obj, build.ExtractedObjects):
222 obj_list += self.determine_ext_objs(target, obj, proj_dir_to_build_root)
223 else:
224 raise MesonException('Unknown data type in object list.')
225 return obj_list
226
227 def serialize_executable(self, exe, cmd_args, workdir, env={},
228 capture=None):
229 import hashlib
230 # Can't just use exe.name here; it will likely be run more than once
231 if isinstance(exe, (dependencies.ExternalProgram,
232 build.BuildTarget, build.CustomTarget)):
233 basename = exe.name
234 else:
235 basename = os.path.basename(exe)
236 # Take a digest of the cmd args, env, workdir, and capture. This avoids
237 # collisions and also makes the name deterministic over regenerations
238 # which avoids a rebuild by Ninja because the cmdline stays the same.
239 data = bytes(str(sorted(env.items())) + str(cmd_args) + str(workdir) + str(capture),
240 encoding='utf-8')
241 digest = hashlib.sha1(data).hexdigest()
242 scratch_file = 'meson_exe_{0}_{1}.dat'.format(basename, digest)
243 exe_data = os.path.join(self.environment.get_scratch_dir(), scratch_file)
244 with open(exe_data, 'wb') as f:
245 if isinstance(exe, dependencies.ExternalProgram):
246 exe_cmd = exe.get_command()
247 exe_needs_wrapper = False
248 elif isinstance(exe, (build.BuildTarget, build.CustomTarget)):
249 exe_cmd = [self.get_target_filename_abs(exe)]
250 exe_needs_wrapper = exe.is_cross
251 else:
252 exe_cmd = [exe]
253 exe_needs_wrapper = False
254 is_cross = exe_needs_wrapper and \
255 self.environment.is_cross_build() and \
256 self.environment.cross_info.need_cross_compiler() and \
257 self.environment.cross_info.need_exe_wrapper()
258 if is_cross:
259 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
260 else:
261 exe_wrapper = None
262 if mesonlib.is_windows() or mesonlib.is_cygwin():
263 extra_paths = self.determine_windows_extra_paths(exe)
264 else:
265 extra_paths = []
266 es = ExecutableSerialisation(basename, exe_cmd, cmd_args, env,
267 is_cross, exe_wrapper, workdir,
268 extra_paths, capture)
269 pickle.dump(es, f)
270 return exe_data
271
272 def serialize_tests(self):
273 test_data = os.path.join(self.environment.get_scratch_dir(), 'meson_test_setup.dat')
274 with open(test_data, 'wb') as datafile:
275 self.write_test_file(datafile)
276 benchmark_data = os.path.join(self.environment.get_scratch_dir(), 'meson_benchmark_setup.dat')
277 with open(benchmark_data, 'wb') as datafile:
278 self.write_benchmark_file(datafile)
279 return test_data, benchmark_data
280
281 def determine_linker(self, target):
282 '''
283 If we're building a static library, there is only one static linker.
284 Otherwise, we query the target for the dynamic linker.
285 '''
286 if isinstance(target, build.StaticLibrary):
287 if target.is_cross:
288 return self.build.static_cross_linker
289 else:
290 return self.build.static_linker
291 l = target.get_clike_dynamic_linker()
292 if not l:
293 m = "Couldn't determine linker for target {!r}"
294 raise MesonException(m.format(target.name))
295 return l
296
297 def object_filename_from_source(self, target, source, is_unity):
298 if isinstance(source, mesonlib.File):
299 source = source.fname
300 # foo.vala files compile down to foo.c and then foo.c.o, not foo.vala.o
301 if source.endswith('.vala'):
302 if is_unity:
303 return source[:-5] + '.c.' + self.environment.get_object_suffix()
304 source = os.path.join(self.get_target_private_dir(target), source[:-5] + '.c')
305 return source.replace('/', '_').replace('\\', '_') + '.' + self.environment.get_object_suffix()
306
307 def determine_ext_objs(self, target, extobj, proj_dir_to_build_root):
308 result = []
309 targetdir = self.get_target_private_dir(extobj.target)
310 # With unity builds, there's just one object that contains all the
311 # sources, and we only support extracting all the objects in this mode,
312 # so just return that.
313 if self.is_unity(target):
314 comp = get_compiler_for_source(extobj.target.compilers.values(),
315 extobj.srclist[0])
316 # There is a potential conflict here, but it is unlikely that
317 # anyone both enables unity builds and has a file called foo-unity.cpp.
318 osrc = self.get_unity_source_filename(extobj.target,
319 comp.get_default_suffix())
320 osrc = os.path.join(self.get_target_private_dir(extobj.target), osrc)
321 objname = self.object_filename_from_source(extobj.target, osrc, True)
322 objname = objname.replace('/', '_').replace('\\', '_')
323 objpath = os.path.join(proj_dir_to_build_root, targetdir, objname)
324 return [objpath]
325 for osrc in extobj.srclist:
326 objname = self.object_filename_from_source(extobj.target, osrc, False)
327 objpath = os.path.join(proj_dir_to_build_root, targetdir, objname)
328 result.append(objpath)
329 return result
330
331 def get_pch_include_args(self, compiler, target):
332 args = []
333 pchpath = self.get_target_private_dir(target)
334 includeargs = compiler.get_include_args(pchpath, False)
335 for lang in ['c', 'cpp']:
336 p = target.get_pch(lang)
337 if not p:
338 continue
339 if compiler.can_compile(p[-1]):
340 header = p[0]
341 args += compiler.get_pch_use_args(pchpath, header)
342 if len(args) > 0:
343 args = includeargs + args
344 return args
345
346 @staticmethod
347 def escape_extra_args(compiler, args):
348 # No extra escaping/quoting needed when not running on Windows
349 if not mesonlib.is_windows():
350 return args
351 extra_args = []
352 # Compiler-specific escaping is needed for -D args but not for any others
353 if compiler.get_id() == 'msvc':
354 # MSVC needs escaping when a -D argument ends in \ or \"
355 for arg in args:
356 if arg.startswith('-D') or arg.startswith('/D'):
357 # Without extra escaping for these two, the next character
358 # gets eaten
359 if arg.endswith('\\'):
360 arg += '\\'
361 elif arg.endswith('\\"'):
362 arg = arg[:-2] + '\\\\"'
363 extra_args.append(arg)
364 else:
365 # MinGW GCC needs all backslashes in defines to be doubly-escaped
366 # FIXME: Not sure about Cygwin or Clang
367 for arg in args:
368 if arg.startswith('-D') or arg.startswith('/D'):
369 arg = arg.replace('\\', '\\\\')
370 extra_args.append(arg)
371 return extra_args
372
373 def generate_basic_compiler_args(self, target, compiler, no_warn_args=False):
374 # Create an empty commands list, and start adding arguments from
375 # various sources in the order in which they must override each other
376 # starting from hard-coded defaults followed by build options and so on.
377 commands = CompilerArgs(compiler)
378
379 copt_proxy = OptionOverrideProxy(target.option_overrides, self.environment.coredata.compiler_options)
380 # First, the trivial ones that are impossible to override.
381 #
382         # Add -nostdinc/-nostdinc++ if needed; can't be overridden
383 commands += self.get_cross_stdlib_args(target, compiler)
384         # Add things like /NOLOGO or -pipe; usually can't be overridden
385 commands += compiler.get_always_args()
386 # Only add warning-flags by default if the buildtype enables it, and if
387 # we weren't explicitly asked to not emit warnings (for Vala, f.ex)
388 if no_warn_args:
389 commands += compiler.get_no_warn_args()
390 elif self.get_option_for_target('buildtype', target) != 'plain':
391 commands += compiler.get_warn_args(self.get_option_for_target('warning_level', target))
392 # Add -Werror if werror=true is set in the build options set on the
393 # command-line or default_options inside project(). This only sets the
394 # action to be done for warnings if/when they are emitted, so it's ok
395 # to set it after get_no_warn_args() or get_warn_args().
396 if self.get_option_for_target('werror', target):
397 commands += compiler.get_werror_args()
398 # Add compile args for c_* or cpp_* build options set on the
399 # command-line or default_options inside project().
400 commands += compiler.get_option_compile_args(copt_proxy)
401 # Add buildtype args: optimization level, debugging, etc.
402 commands += compiler.get_buildtype_args(self.get_option_for_target('buildtype', target))
403 # Add compile args added using add_project_arguments()
404 commands += self.build.get_project_args(compiler, target.subproject)
405 # Add compile args added using add_global_arguments()
406 # These override per-project arguments
407 commands += self.build.get_global_args(compiler)
408 if not target.is_cross:
409 # Compile args added from the env: CFLAGS/CXXFLAGS, etc. We want these
410 # to override all the defaults, but not the per-target compile args.
411 commands += self.environment.coredata.external_args[compiler.get_language()]
412 # Always set -fPIC for shared libraries
413 if isinstance(target, build.SharedLibrary):
414 commands += compiler.get_pic_args()
415 # Set -fPIC for static libraries by default unless explicitly disabled
416 if isinstance(target, build.StaticLibrary) and target.pic:
417 commands += compiler.get_pic_args()
418 # Add compile args needed to find external dependencies. Link args are
419 # added while generating the link command.
420 # NOTE: We must preserve the order in which external deps are
421 # specified, so we reverse the list before iterating over it.
422 for dep in reversed(target.get_external_deps()):
423 if compiler.language == 'vala':
424 if isinstance(dep, dependencies.PkgConfigDependency):
425 if dep.name == 'glib-2.0' and dep.version_reqs is not None:
426 for req in dep.version_reqs:
427 if req.startswith(('>=', '==')):
428 commands += ['--target-glib', req[2:]]
429 break
430 commands += ['--pkg', dep.name]
431 elif isinstance(dep, dependencies.ExternalLibrary):
432 commands += dep.get_lang_args('vala')
433 else:
434 commands += dep.get_compile_args()
435 # Qt needs -fPIC for executables
436 # XXX: We should move to -fPIC for all executables
437 if isinstance(target, build.Executable):
438 commands += dep.get_exe_args(compiler)
439 # For 'automagic' deps: Boost and GTest. Also dependency('threads').
440 # pkg-config puts the thread flags itself via `Cflags:`
441 if dep.need_threads():
442 commands += compiler.thread_flags()
443 # Fortran requires extra include directives.
444 if compiler.language == 'fortran':
445 for lt in target.link_targets:
446 priv_dir = os.path.join(self.get_target_dir(lt), lt.get_basename() + lt.type_suffix())
447 incflag = compiler.get_include_args(priv_dir, False)
448 commands += incflag
449 return commands
450
451 def build_target_link_arguments(self, compiler, deps):
452 args = []
453 for d in deps:
454 if not isinstance(d, (build.StaticLibrary, build.SharedLibrary)):
455 raise RuntimeError('Tried to link with a non-library target "%s".' % d.get_basename())
456 if isinstance(compiler, (compilers.LLVMDCompiler, compilers.DmdDCompiler)):
457 d_arg = '-L' + self.get_target_filename_for_linking(d)
458 else:
459 d_arg = self.get_target_filename_for_linking(d)
460 args.append(d_arg)
461 return args
462
463 def determine_windows_extra_paths(self, target):
464 '''On Windows there is no such thing as an rpath.
465 We must determine all locations of DLLs that this exe
466 links to and return them so they can be used in unit
467 tests.'''
468 if not isinstance(target, build.Executable):
469 return []
470 prospectives = target.get_transitive_link_deps()
471 result = []
472 for ld in prospectives:
473 if ld == '' or ld == '.':
474 continue
475 dirseg = os.path.join(self.environment.get_build_dir(), self.get_target_dir(ld))
476 if dirseg not in result:
477 result.append(dirseg)
478 return result
479
480 def write_benchmark_file(self, datafile):
481 self.write_test_serialisation(self.build.get_benchmarks(), datafile)
482
483 def write_test_file(self, datafile):
484 self.write_test_serialisation(self.build.get_tests(), datafile)
485
486 def write_test_serialisation(self, tests, datafile):
487 arr = []
488 for t in tests:
489 exe = t.get_exe()
490 if isinstance(exe, dependencies.ExternalProgram):
491 cmd = exe.get_command()
492 else:
493 cmd = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(t.get_exe()))]
494 is_cross = self.environment.is_cross_build() and \
495 self.environment.cross_info.need_cross_compiler() and \
496 self.environment.cross_info.need_exe_wrapper()
497 if is_cross:
498 exe_wrapper = self.environment.cross_info.config['binaries'].get('exe_wrapper', None)
499 else:
500 exe_wrapper = None
501 if mesonlib.is_windows() or mesonlib.is_cygwin():
502 extra_paths = self.determine_windows_extra_paths(exe)
503 else:
504 extra_paths = []
505 cmd_args = []
506 for a in t.cmd_args:
507 if hasattr(a, 'held_object'):
508 a = a.held_object
509 if isinstance(a, mesonlib.File):
510 a = os.path.join(self.environment.get_build_dir(), a.rel_to_builddir(self.build_to_src))
511 cmd_args.append(a)
512 elif isinstance(a, str):
513 cmd_args.append(a)
514 elif isinstance(a, build.Target):
515 cmd_args.append(self.get_target_filename(a))
516 else:
517 raise MesonException('Bad object in test command.')
518 ts = TestSerialisation(t.get_name(), t.suite, cmd, is_cross, exe_wrapper,
519 t.is_parallel, cmd_args, t.env, t.should_fail,
520 t.timeout, t.workdir, extra_paths)
521 arr.append(ts)
522 pickle.dump(arr, datafile)
523
524 def generate_depmf_install(self, d):
525 if self.build.dep_manifest_name is None:
526 return
527 ifilename = os.path.join(self.environment.get_build_dir(), 'depmf.json')
528 ofilename = os.path.join(self.environment.get_prefix(), self.build.dep_manifest_name)
529 mfobj = {'type': 'dependency manifest', 'version': '1.0', 'projects': self.build.dep_manifest}
530 with open(ifilename, 'w') as f:
531 f.write(json.dumps(mfobj))
532 # Copy file from, to, and with mode unchanged
533 d.data.append([ifilename, ofilename, None])
534
535 def get_regen_filelist(self):
536 '''List of all files whose alteration means that the build
537 definition needs to be regenerated.'''
538 deps = [os.path.join(self.build_to_src, df)
539 for df in self.interpreter.get_build_def_files()]
540 if self.environment.is_cross_build():
541 deps.append(os.path.join(self.build_to_src,
542 self.environment.coredata.cross_file))
543 deps.append('meson-private/coredata.dat')
544 if os.path.exists(os.path.join(self.environment.get_source_dir(), 'meson_options.txt')):
545 deps.append(os.path.join(self.build_to_src, 'meson_options.txt'))
546 for sp in self.build.subprojects.keys():
547 fname = os.path.join(self.environment.get_source_dir(), sp, 'meson_options.txt')
548 if os.path.isfile(fname):
549 deps.append(os.path.join(self.build_to_src, sp, 'meson_options.txt'))
550 return deps
551
552 def exe_object_to_cmd_array(self, exe):
553 if self.environment.is_cross_build() and \
554 self.environment.cross_info.need_exe_wrapper() and \
555 isinstance(exe, build.BuildTarget) and exe.is_cross:
556 if 'exe_wrapper' not in self.environment.cross_info.config['binaries']:
557 s = 'Can not use target %s as a generator because it is cross-built\n'
558 s += 'and no exe wrapper is defined. You might want to set it to native instead.'
559 s = s % exe.name
560 raise MesonException(s)
561 if isinstance(exe, build.BuildTarget):
562 exe_arr = [os.path.join(self.environment.get_build_dir(), self.get_target_filename(exe))]
563 else:
564 exe_arr = exe.get_command()
565 return exe_arr
566
567 def replace_extra_args(self, args, genlist):
568 final_args = []
569 for a in args:
570 if a == '@EXTRA_ARGS@':
571 final_args += genlist.get_extra_args()
572 else:
573 final_args.append(a)
574 return final_args
575
576 def replace_outputs(self, args, private_dir, output_list):
577 newargs = []
578         regex = re.compile(r'@OUTPUT(\d+)@')
579 for arg in args:
580 m = regex.search(arg)
581 while m is not None:
582 index = int(m.group(1))
583 src = '@OUTPUT%d@' % index
584 arg = arg.replace(src, os.path.join(private_dir, output_list[index]))
585 m = regex.search(arg)
586 newargs.append(arg)
587 return newargs
588
589 def get_build_by_default_targets(self):
590 result = OrderedDict()
591 # Get all build and custom targets that must be built by default
592 for name, t in self.build.get_targets().items():
593 if t.build_by_default or t.install or t.build_always:
594 result[name] = t
595 # Get all targets used as test executables and arguments. These must
596 # also be built by default. XXX: Sometime in the future these should be
597 # built only before running tests.
598 for t in self.build.get_tests():
599 exe = t.exe
600 if hasattr(exe, 'held_object'):
601 exe = exe.held_object
602 if isinstance(exe, (build.CustomTarget, build.BuildTarget)):
603 result[exe.get_id()] = exe
604 for arg in t.cmd_args:
605 if hasattr(arg, 'held_object'):
606 arg = arg.held_object
607 if not isinstance(arg, (build.CustomTarget, build.BuildTarget)):
608 continue
609 result[arg.get_id()] = arg
610 return result
611
612 def get_custom_target_provided_libraries(self, target):
613 libs = []
614 for t in target.get_generated_sources():
615 if not isinstance(t, build.CustomTarget):
616 continue
617 for f in t.get_outputs():
618 if self.environment.is_library(f):
619 libs.append(os.path.join(self.get_target_dir(t), f))
620 return libs
621
622 def is_unity(self, target):
623 optval = self.get_option_for_target('unity', target)
624 if optval == 'on' or (optval == 'subprojects' and target.subproject != ''):
625 return True
626 return False
627
628 def get_custom_target_sources(self, target):
629 '''
630 Custom target sources can be of various object types; strings, File,
631 BuildTarget, even other CustomTargets.
632 Returns the path to them relative to the build root directory.
633 '''
634 srcs = []
635 for i in target.get_sources():
636 if hasattr(i, 'held_object'):
637 i = i.held_object
638 if isinstance(i, str):
639 fname = [os.path.join(self.build_to_src, target.subdir, i)]
640 elif isinstance(i, build.BuildTarget):
641 fname = [self.get_target_filename(i)]
642 elif isinstance(i, build.CustomTarget):
643 fname = [os.path.join(self.get_target_dir(i), p) for p in i.get_outputs()]
644 elif isinstance(i, build.GeneratedList):
645 fname = [os.path.join(self.get_target_private_dir(target), p) for p in i.get_outputs()]
646 else:
647 fname = [i.rel_to_builddir(self.build_to_src)]
648 if target.absolute_paths:
649 fname = [os.path.join(self.environment.get_build_dir(), f) for f in fname]
650 srcs += fname
651 return srcs
652
653 def get_custom_target_depend_files(self, target, absolute_paths=False):
654 deps = []
655 for i in target.depend_files:
656 if isinstance(i, mesonlib.File):
657 if absolute_paths:
658 deps.append(i.absolute_path(self.environment.get_source_dir(),
659 self.environment.get_build_dir()))
660 else:
661 deps.append(i.rel_to_builddir(self.build_to_src))
662 else:
663 if absolute_paths:
664 deps.append(os.path.join(self.environment.get_build_dir(), i))
665 else:
666 deps.append(os.path.join(self.build_to_src, i))
667 return deps
668
669 def eval_custom_target_command(self, target, absolute_outputs=False):
670 # We want the outputs to be absolute only when using the VS backend
671 # XXX: Maybe allow the vs backend to use relative paths too?
672 source_root = self.build_to_src
673 build_root = '.'
674 outdir = self.get_target_dir(target)
675 if absolute_outputs:
676 source_root = self.environment.get_source_dir()
677             build_root = self.environment.get_build_dir()
678 outdir = os.path.join(self.environment.get_build_dir(), outdir)
679 outputs = []
680 for i in target.get_outputs():
681 outputs.append(os.path.join(outdir, i))
682 inputs = self.get_custom_target_sources(target)
683 # Evaluate the command list
684 cmd = []
685 for i in target.command:
686 if isinstance(i, build.Executable):
687 cmd += self.exe_object_to_cmd_array(i)
688 continue
689 elif isinstance(i, build.CustomTarget):
690 # GIR scanner will attempt to execute this binary but
691 # it assumes that it is in path, so always give it a full path.
692 tmp = i.get_outputs()[0]
693 i = os.path.join(self.get_target_dir(i), tmp)
694 elif isinstance(i, mesonlib.File):
695 i = i.rel_to_builddir(self.build_to_src)
696 if target.absolute_paths:
697 i = os.path.join(self.environment.get_build_dir(), i)
698 # FIXME: str types are blindly added ignoring 'target.absolute_paths'
699 # because we can't know if they refer to a file or just a string
700 elif not isinstance(i, str):
701 err_msg = 'Argument {0} is of unknown type {1}'
702 raise RuntimeError(err_msg.format(str(i), str(type(i))))
703 elif '@SOURCE_ROOT@' in i:
704 i = i.replace('@SOURCE_ROOT@', source_root)
705 elif '@BUILD_ROOT@' in i:
706 i = i.replace('@BUILD_ROOT@', build_root)
707 elif '@DEPFILE@' in i:
708 if target.depfile is None:
709 msg = 'Custom target {!r} has @DEPFILE@ but no depfile ' \
710 'keyword argument.'.format(target.name)
711 raise MesonException(msg)
712 dfilename = os.path.join(outdir, target.depfile)
713 i = i.replace('@DEPFILE@', dfilename)
714 elif '@PRIVATE_OUTDIR_' in i:
715                 match = re.search(r'@PRIVATE_OUTDIR_(ABS_)?([^/\s*]*)@', i)
716 if not match:
717 msg = 'Custom target {!r} has an invalid argument {!r}' \
718 ''.format(target.name, i)
719 raise MesonException(msg)
720 source = match.group(0)
721 if match.group(1) is None and not target.absolute_paths:
722 lead_dir = ''
723 else:
724 lead_dir = self.environment.get_build_dir()
725 i = i.replace(source, os.path.join(lead_dir, outdir))
726 cmd.append(i)
727 # Substitute the rest of the template strings
728 values = mesonlib.get_filenames_templates_dict(inputs, outputs)
729 cmd = mesonlib.substitute_values(cmd, values)
730 # This should not be necessary but removing it breaks
731 # building GStreamer on Windows. The underlying issue
732 # is problems with quoting backslashes on Windows
733 # which is the seventh circle of hell. The downside is
734 # that this breaks custom targets whose command lines
735 # have backslashes. If you try to fix this be sure to
736 # check that it does not break GST.
737 #
738 # The bug causes file paths such as c:\foo to get escaped
739 # into c:\\foo.
740 #
741 # Unfortunately we have not been able to come up with an
742 # isolated test case for this so unless you manage to come up
743 # with one, the only way is to test the building with Gst's
744 # setup. Note this in your MR or ping us and we will get it
745 # fixed.
746 #
747 # https://github.com/mesonbuild/meson/pull/737
748 cmd = [i.replace('\\', '/') for i in cmd]
749 return inputs, outputs, cmd
750
751 def run_postconf_scripts(self):
752 env = {'MESON_SOURCE_ROOT': self.environment.get_source_dir(),
753 'MESON_BUILD_ROOT': self.environment.get_build_dir(),
754 'MESONINTROSPECT': get_meson_script(self.environment, 'mesonintrospect')}
755 child_env = os.environ.copy()
756 child_env.update(env)
757
758 for s in self.build.postconf_scripts:
759 cmd = s['exe'] + s['args']
760 subprocess.check_call(cmd, env=child_env)
761
[end of mesonbuild/backend/backends.py]
[start of mesonbuild/wrap/wrap.py]
1 # Copyright 2015 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from .. import mlog
16 import contextlib
17 import urllib.request, os, hashlib, shutil
18 import subprocess
19 import sys
20 from pathlib import Path
21 from . import WrapMode
22
23 try:
24 import ssl
25 has_ssl = True
26 API_ROOT = 'https://wrapdb.mesonbuild.com/v1/'
27 except ImportError:
28 has_ssl = False
29 API_ROOT = 'http://wrapdb.mesonbuild.com/v1/'
30
31 ssl_warning_printed = False
32
33 def build_ssl_context():
34 ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
35 ctx.options |= ssl.OP_NO_SSLv2
36 ctx.options |= ssl.OP_NO_SSLv3
37 ctx.verify_mode = ssl.CERT_REQUIRED
38 ctx.load_default_certs()
39 return ctx
40
41 def quiet_git(cmd, workingdir):
42 pc = subprocess.Popen(['git', '-C', workingdir] + cmd,
43 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
44 out, err = pc.communicate()
45 if pc.returncode != 0:
46 return False, err
47 return True, out
48
49 def open_wrapdburl(urlstring):
50 global ssl_warning_printed
51 if has_ssl:
52 try:
53 return urllib.request.urlopen(urlstring)# , context=build_ssl_context())
54 except urllib.error.URLError:
55 if not ssl_warning_printed:
56 print('SSL connection failed. Falling back to unencrypted connections.')
57 ssl_warning_printed = True
58 if not ssl_warning_printed:
59 print('Warning: SSL not available, traffic not authenticated.',
60 file=sys.stderr)
61 ssl_warning_printed = True
62 # Trying to open SSL connection to wrapdb fails because the
63 # certificate is not known.
64 if urlstring.startswith('https'):
65 urlstring = 'http' + urlstring[5:]
66 return urllib.request.urlopen(urlstring)
67
68
69 class PackageDefinition:
70 def __init__(self, fname):
71 self.values = {}
72 with open(fname) as ifile:
73 first = ifile.readline().strip()
74
75 if first == '[wrap-file]':
76 self.type = 'file'
77 elif first == '[wrap-git]':
78 self.type = 'git'
79 elif first == '[wrap-hg]':
80 self.type = 'hg'
81 else:
82 raise RuntimeError('Invalid format of package file')
83 for line in ifile:
84 line = line.strip()
85 if line == '':
86 continue
87 (k, v) = line.split('=', 1)
88 k = k.strip()
89 v = v.strip()
90 self.values[k] = v
91
92 def get(self, key):
93 return self.values[key]
94
95 def has_patch(self):
96 return 'patch_url' in self.values
97
98 class Resolver:
99 def __init__(self, subdir_root, wrap_mode=WrapMode(1)):
100 self.wrap_mode = wrap_mode
101 self.subdir_root = subdir_root
102 self.cachedir = os.path.join(self.subdir_root, 'packagecache')
103
104 def resolve(self, packagename):
105 # Check if the directory is already resolved
106 dirname = Path(os.path.join(self.subdir_root, packagename))
107 subprojdir = os.path.join(*dirname.parts[-2:])
108 if dirname.is_dir():
109 if (dirname / 'meson.build').is_file():
110 # The directory is there and has meson.build? Great, use it.
111 return packagename
112 # Is the dir not empty and also not a git submodule dir that is
113             # not checked out properly? Can't do anything, exception!
114 elif next(dirname.iterdir(), None) and not (dirname / '.git').is_file():
115 m = '{!r} is not empty and has no meson.build files'
116 raise RuntimeError(m.format(subprojdir))
117 elif dirname.exists():
118 m = '{!r} already exists and is not a dir; cannot use as subproject'
119 raise RuntimeError(m.format(subprojdir))
120
121 dirname = str(dirname)
122 # Check if the subproject is a git submodule
123 if self.resolve_git_submodule(dirname):
124 return packagename
125
126 # Don't download subproject data based on wrap file if requested.
127 # Git submodules are ok (see above)!
128 if self.wrap_mode is WrapMode.nodownload:
129 m = 'Automatic wrap-based subproject downloading is disabled'
130 raise RuntimeError(m)
131
132 # Check if there's a .wrap file for this subproject
133 fname = os.path.join(self.subdir_root, packagename + '.wrap')
134 if not os.path.isfile(fname):
135 # No wrap file with this name? Give up.
136 m = 'No {}.wrap found for {!r}'
137 raise RuntimeError(m.format(packagename, subprojdir))
138 p = PackageDefinition(fname)
139 if p.type == 'file':
140 if not os.path.isdir(self.cachedir):
141 os.mkdir(self.cachedir)
142 self.download(p, packagename)
143 self.extract_package(p)
144 elif p.type == 'git':
145 self.get_git(p)
146 elif p.type == "hg":
147 self.get_hg(p)
148 else:
149 raise AssertionError('Unreachable code.')
150 return p.get('directory')
151
152 def resolve_git_submodule(self, dirname):
153 # Are we in a git repository?
154 ret, out = quiet_git(['rev-parse'], self.subdir_root)
155 if not ret:
156 return False
157 # Is `dirname` a submodule?
158 ret, out = quiet_git(['submodule', 'status', dirname], self.subdir_root)
159 if not ret:
160 return False
161 # Submodule has not been added, add it
162 if out.startswith(b'-'):
163 if subprocess.call(['git', '-C', self.subdir_root, 'submodule', 'update', '--init', dirname]) != 0:
164 return False
165 # Submodule was added already, but it wasn't populated. Do a checkout.
166 elif out.startswith(b' '):
167 if subprocess.call(['git', 'checkout', '.'], cwd=dirname):
168 return True
169 else:
170 m = 'Unknown git submodule output: {!r}'
171 raise AssertionError(m.format(out))
172 return True
173
174 def get_git(self, p):
175 checkoutdir = os.path.join(self.subdir_root, p.get('directory'))
176 revno = p.get('revision')
177 is_there = os.path.isdir(checkoutdir)
178 if is_there:
179 try:
180 subprocess.check_call(['git', 'rev-parse'], cwd=checkoutdir)
181 except subprocess.CalledProcessError:
182 raise RuntimeError('%s is not empty but is not a valid '
183 'git repository, we can not work with it'
184 ' as a subproject directory.' % (
185 checkoutdir))
186
187 if revno.lower() == 'head':
188 # Failure to do pull is not a fatal error,
189 # because otherwise you can't develop without
190 # a working net connection.
191 subprocess.call(['git', 'pull'], cwd=checkoutdir)
192 else:
193 if subprocess.call(['git', 'checkout', revno], cwd=checkoutdir) != 0:
194 subprocess.check_call(['git', 'fetch'], cwd=checkoutdir)
195 subprocess.check_call(['git', 'checkout', revno],
196 cwd=checkoutdir)
197 else:
198 subprocess.check_call(['git', 'clone', p.get('url'),
199 p.get('directory')], cwd=self.subdir_root)
200 if revno.lower() != 'head':
201 subprocess.check_call(['git', 'checkout', revno],
202 cwd=checkoutdir)
203 push_url = p.values.get('push-url')
204 if push_url:
205 subprocess.check_call(['git', 'remote', 'set-url',
206 '--push', 'origin', push_url],
207 cwd=checkoutdir)
208
209 def get_hg(self, p):
210 checkoutdir = os.path.join(self.subdir_root, p.get('directory'))
211 revno = p.get('revision')
212 is_there = os.path.isdir(checkoutdir)
213 if is_there:
214 if revno.lower() == 'tip':
215 # Failure to do pull is not a fatal error,
216 # because otherwise you can't develop without
217 # a working net connection.
218 subprocess.call(['hg', 'pull'], cwd=checkoutdir)
219 else:
220 if subprocess.call(['hg', 'checkout', revno], cwd=checkoutdir) != 0:
221 subprocess.check_call(['hg', 'pull'], cwd=checkoutdir)
222 subprocess.check_call(['hg', 'checkout', revno],
223 cwd=checkoutdir)
224 else:
225 subprocess.check_call(['hg', 'clone', p.get('url'),
226 p.get('directory')], cwd=self.subdir_root)
227 if revno.lower() != 'tip':
228 subprocess.check_call(['hg', 'checkout', revno],
229 cwd=checkoutdir)
230
231 def get_data(self, url):
232 blocksize = 10 * 1024
233 if url.startswith('https://wrapdb.mesonbuild.com'):
234 resp = open_wrapdburl(url)
235 else:
236 resp = urllib.request.urlopen(url)
237 with contextlib.closing(resp) as resp:
238 try:
239 dlsize = int(resp.info()['Content-Length'])
240 except TypeError:
241 dlsize = None
242 if dlsize is None:
243 print('Downloading file of unknown size.')
244 return resp.read()
245 print('Download size:', dlsize)
246 print('Downloading: ', end='')
247 sys.stdout.flush()
248 printed_dots = 0
249 blocks = []
250 downloaded = 0
251 while True:
252 block = resp.read(blocksize)
253 if block == b'':
254 break
255 downloaded += len(block)
256 blocks.append(block)
257 ratio = int(downloaded / dlsize * 10)
258 while printed_dots < ratio:
259 print('.', end='')
260 sys.stdout.flush()
261 printed_dots += 1
262 print('')
263 return b''.join(blocks)
264
265 def get_hash(self, data):
266 h = hashlib.sha256()
267 h.update(data)
268 hashvalue = h.hexdigest()
269 return hashvalue
270
271 def download(self, p, packagename):
272 ofname = os.path.join(self.cachedir, p.get('source_filename'))
273 if os.path.exists(ofname):
274 mlog.log('Using', mlog.bold(packagename), 'from cache.')
275 return
276 srcurl = p.get('source_url')
277 mlog.log('Downloading', mlog.bold(packagename), 'from', mlog.bold(srcurl))
278 srcdata = self.get_data(srcurl)
279 dhash = self.get_hash(srcdata)
280 expected = p.get('source_hash')
281 if dhash != expected:
282 raise RuntimeError('Incorrect hash for source %s:\n %s expected\n %s actual.' % (packagename, expected, dhash))
283 with open(ofname, 'wb') as f:
284 f.write(srcdata)
285 if p.has_patch():
286 purl = p.get('patch_url')
287 mlog.log('Downloading patch from', mlog.bold(purl))
288 pdata = self.get_data(purl)
289 phash = self.get_hash(pdata)
290 expected = p.get('patch_hash')
291 if phash != expected:
292 raise RuntimeError('Incorrect hash for patch %s:\n %s expected\n %s actual' % (packagename, expected, phash))
293 filename = os.path.join(self.cachedir, p.get('patch_filename'))
294 with open(filename, 'wb') as f:
295 f.write(pdata)
296 else:
297 mlog.log('Package does not require patch.')
298
299 def extract_package(self, package):
300 if sys.version_info < (3, 5):
301 try:
302 import lzma
303 del lzma
304 except ImportError:
305 pass
306 else:
307 try:
308 shutil.register_unpack_format('xztar', ['.tar.xz', '.txz'], shutil._unpack_tarfile, [], "xz'ed tar-file")
309 except shutil.RegistryError:
310 pass
311 target_dir = os.path.join(self.subdir_root, package.get('directory'))
312 if os.path.isdir(target_dir):
313 return
314 extract_dir = self.subdir_root
315 # Some upstreams ship packages that do not have a leading directory.
316 # Create one for them.
317 try:
318 package.get('lead_directory_missing')
319 os.mkdir(target_dir)
320 extract_dir = target_dir
321 except KeyError:
322 pass
323 shutil.unpack_archive(os.path.join(self.cachedir, package.get('source_filename')), extract_dir)
324 if package.has_patch():
325 shutil.unpack_archive(os.path.join(self.cachedir, package.get('patch_filename')), self.subdir_root)
326
[end of mesonbuild/wrap/wrap.py]
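As a point of reference, here is a minimal sketch of how the `PackageDefinition` parser above could be exercised. The file path and all values are hypothetical, chosen only to illustrate the `[wrap-git]` keys that `Resolver.get_git` reads:

```python
# Sketch only: 'subprojects/zlib.wrap' and its contents are made up.
# PackageDefinition reads the first line to pick the wrap type ('file',
# 'git' or 'hg') and then stores the remaining 'key = value' pairs, which
# Resolver.get_git later fetches via get('directory'), get('url'), get('revision').
#
# subprojects/zlib.wrap:
#   [wrap-git]
#   directory = zlib
#   url = https://example.invalid/zlib.git
#   revision = head
from mesonbuild.wrap.wrap import PackageDefinition

p = PackageDefinition('subprojects/zlib.wrap')
assert p.type == 'git'
print(p.get('directory'), p.get('url'), p.get('revision'))
```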
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
Repository: mesonbuild/meson
Base commit: 3ced9775477b49110c881e847129b28997153bb0

Problem statement:
VS backend doesn't respect warning_level/--warnlevel
All generated VS projects use `/W1` regardless of warning level.
VisualStudioCPPCompiler correctly returns `'/W4'` for `get_warn_args('3')`, so it's clearly the backend that's failing to use it.
I'm currently trying to trace the problem in order to create a pull request for it.
Follow-up discussion:
OK, two discoveries:
VS 2010 through 2015 expect `<WarningLevel>` to go inside `<ClCompile>`. The backend is generating the element, just in the wrong place. **This may be the case for other elements as well.**
Additionally, vs2010backend.py simply appends the warning_level (which is `"1"`, `"2"`, or `"3"`) to `"Level"`, which results in a warning level one lower than the one specified in compilers.py (`/W2`, `/W3`, or `/W4`). I'd assume that, between the two, we want the higher.
I will submit a PR presently.
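As a small illustration of the mapping described above (a sketch for clarity, not code from the repository): Meson's `warning_level` option takes the values '1', '2' and '3', and `VisualStudioCPPCompiler.get_warn_args` maps those to `/W2`, `/W3` and `/W4`, so the `<WarningLevel>` element written into the `.vcxproj` should be one level higher than the raw option value and should live inside `<ClCompile>`:

```python
# Sketch of the off-by-one correction described above; not repository code.
def msvc_warning_level_element(warning_level: str) -> str:
    # warning_level '1'/'2'/'3' corresponds to /W2, /W3, /W4 respectively.
    return 'Level' + str(1 + int(warning_level))

assert msvc_warning_level_element('3') == 'Level4'  # i.e. /W4
```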
Created at: 2017-06-08T05:17:16Z
<patch>
diff --git a/mesonbuild/backend/vs2010backend.py b/mesonbuild/backend/vs2010backend.py
--- a/mesonbuild/backend/vs2010backend.py
+++ b/mesonbuild/backend/vs2010backend.py
@@ -706,9 +706,6 @@ def gen_vcxproj(self, target, ofname, guid):
ET.SubElement(type_config, 'Optimization').text = 'MinSpace'
elif '/Od' in o_flags:
ET.SubElement(type_config, 'Optimization').text = 'Disabled'
- # Warning level
- warning_level = self.get_option_for_target('warning_level', target)
- ET.SubElement(type_config, 'WarningLevel').text = 'Level' + warning_level
# End configuration
ET.SubElement(root, 'Import', Project='$(VCTargetsPath)\Microsoft.Cpp.props')
generated_files, custom_target_output_files, generated_files_include_dirs = self.generate_custom_generator_commands(target, root)
@@ -862,6 +859,9 @@ def gen_vcxproj(self, target, ofname, guid):
ET.SubElement(clconf, 'MinimalRebuild').text = 'true'
ET.SubElement(clconf, 'FunctionLevelLinking').text = 'true'
pch_node = ET.SubElement(clconf, 'PrecompiledHeader')
+ # Warning level
+ warning_level = self.get_option_for_target('warning_level', target)
+ ET.SubElement(clconf, 'WarningLevel').text = 'Level' + str(1 + int(warning_level))
if self.get_option_for_target('werror', target):
ET.SubElement(clconf, 'TreatWarningAsError').text = 'true'
# Note: SuppressStartupBanner is /NOLOGO and is 'true' by default
</patch>
Instance: pantsbuild__pants-18447
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot create a new lockfile if unmatched_build_file_globs = "error"
In https://github.com/pantsbuild/pants/pull/17097 we introduced a synthesized `_lockfile` target to wrap a generated lockfile. This target is synthesized for each user resolve, even if the corresponding lockfile hasn't been generated yet.
But then running `pants generate-lockfiles --resolve=my-new-resolve` fails because the synthetic target's sources glob doesn't match anything, and the command that would generate the lockfile and solve the problem is exactly and frustratingly the one that fails...
</issue>
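To make the failure mode concrete (a sketch under assumptions, not the actual upstream fix): the synthetic `_lockfiles` target produced by `python_lockfile_synthetic_targets` in `lockfile.py` below points its `sources` at the configured lockfile path, so for a brand-new resolve the glob matches nothing and `unmatched_build_file_globs = "error"` turns that into a hard error. One conceivable mitigation would be to synthesize targets only for lockfiles that already exist, for example:

```python
# Hypothetical sketch, not the actual change: filter the configured resolves
# down to lockfiles present on disk before synthesizing `_lockfiles` targets.
# `PathGlobs`/`Paths` are the engine's filesystem types; this helper would
# have to be awaited from inside a rule.
from pants.engine.fs import PathGlobs, Paths
from pants.engine.rules import Get


async def resolves_with_existing_lockfiles(python_setup) -> dict:
    paths = await Get(Paths, PathGlobs(python_setup.resolves.values()))
    found = set(paths.files)
    return {name: lf for name, lf in python_setup.resolves.items() if lf in found}
```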
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 # Getting started
20
21 See the [getting started](https://www.pantsbuild.org/docs/getting-started) documentation.
22
23 # Credits
24
25 We release to [PyPI](https://pypi.org/pypi)
26
27 [](https://pypi.org/pypi/pantsbuild.pants)
28 [](https://pypi.org/pypi/pantsbuild.pants)
29
30 <img width="150" height="61" src="https://uploads-ssl.webflow.com/5ac3c046c82724970fc60918/5c019d917bba312af7553b49_MacStadium-developerlogo.png">
31
[end of README.md]
[start of src/python/pants/backend/python/goals/lockfile.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import itertools
7 import os.path
8 from collections import defaultdict
9 from dataclasses import dataclass
10 from operator import itemgetter
11 from typing import Iterable
12
13 from pants.backend.python.pip_requirement import PipRequirement
14 from pants.backend.python.subsystems.python_tool_base import PythonToolRequirementsBase
15 from pants.backend.python.subsystems.setup import PythonSetup
16 from pants.backend.python.target_types import PythonRequirementResolveField, PythonRequirementsField
17 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints
18 from pants.backend.python.util_rules.lockfile_diff import _generate_python_lockfile_diff
19 from pants.backend.python.util_rules.lockfile_metadata import PythonLockfileMetadata
20 from pants.backend.python.util_rules.pex_cli import PexCliProcess
21 from pants.backend.python.util_rules.pex_requirements import ( # noqa: F401
22 GeneratePythonToolLockfileSentinel as GeneratePythonToolLockfileSentinel,
23 )
24 from pants.backend.python.util_rules.pex_requirements import (
25 PexRequirements,
26 ResolvePexConfig,
27 ResolvePexConfigRequest,
28 )
29 from pants.core.goals.generate_lockfiles import (
30 GenerateLockfile,
31 GenerateLockfileResult,
32 GenerateLockfilesSubsystem,
33 KnownUserResolveNames,
34 KnownUserResolveNamesRequest,
35 RequestedUserResolveNames,
36 UserGenerateLockfiles,
37 WrappedGenerateLockfile,
38 )
39 from pants.core.util_rules.lockfile_metadata import calculate_invalidation_digest
40 from pants.engine.fs import CreateDigest, Digest, DigestContents, FileContent, MergeDigests
41 from pants.engine.internals.synthetic_targets import SyntheticAddressMaps, SyntheticTargetsRequest
42 from pants.engine.internals.target_adaptor import TargetAdaptor
43 from pants.engine.process import ProcessCacheScope, ProcessResult
44 from pants.engine.rules import Get, collect_rules, rule
45 from pants.engine.target import AllTargets
46 from pants.engine.unions import UnionRule
47 from pants.util.docutil import bin_name
48 from pants.util.logging import LogLevel
49 from pants.util.ordered_set import FrozenOrderedSet
50
51
52 @dataclass(frozen=True)
53 class GeneratePythonLockfile(GenerateLockfile):
54 requirements: FrozenOrderedSet[str]
55 interpreter_constraints: InterpreterConstraints
56
57 @classmethod
58 def from_tool(
59 cls,
60 subsystem: PythonToolRequirementsBase,
61 interpreter_constraints: InterpreterConstraints | None = None,
62 extra_requirements: Iterable[str] = (),
63 ) -> GeneratePythonLockfile:
64 """Create a request for a dedicated lockfile for the tool.
65
66 If the tool determines its interpreter constraints by using the constraints of user code,
67 rather than the option `--interpreter-constraints`, you must pass the arg
68 `interpreter_constraints`.
69 """
70 if not subsystem.uses_custom_lockfile:
71 return cls(
72 requirements=FrozenOrderedSet(),
73 interpreter_constraints=InterpreterConstraints(),
74 resolve_name=subsystem.options_scope,
75 lockfile_dest=subsystem.lockfile,
76 diff=False,
77 )
78 return cls(
79 requirements=FrozenOrderedSet((*subsystem.all_requirements, *extra_requirements)),
80 interpreter_constraints=(
81 interpreter_constraints
82 if interpreter_constraints is not None
83 else subsystem.interpreter_constraints
84 ),
85 resolve_name=subsystem.options_scope,
86 lockfile_dest=subsystem.lockfile,
87 diff=False,
88 )
89
90 @property
91 def requirements_hex_digest(self) -> str:
92 """Produces a hex digest of the requirements input for this lockfile."""
93 return calculate_invalidation_digest(self.requirements)
94
95
96 @rule
97 def wrap_python_lockfile_request(request: GeneratePythonLockfile) -> WrappedGenerateLockfile:
98 return WrappedGenerateLockfile(request)
99
100
101 @dataclass(frozen=True)
102 class _PipArgsAndConstraintsSetup:
103 resolve_config: ResolvePexConfig
104 args: tuple[str, ...]
105 digest: Digest
106
107
108 async def _setup_pip_args_and_constraints_file(resolve_name: str) -> _PipArgsAndConstraintsSetup:
109 resolve_config = await Get(ResolvePexConfig, ResolvePexConfigRequest(resolve_name))
110
111 args = list(resolve_config.pex_args())
112 digests = []
113
114 if resolve_config.no_binary or resolve_config.only_binary:
115 pip_args_file = "__pip_args.txt"
116 args.extend(["-r", pip_args_file])
117 pip_args_file_content = "\n".join(
118 [f"--no-binary {pkg}" for pkg in resolve_config.no_binary]
119 + [f"--only-binary {pkg}" for pkg in resolve_config.only_binary]
120 )
121 pip_args_digest = await Get(
122 Digest, CreateDigest([FileContent(pip_args_file, pip_args_file_content.encode())])
123 )
124 digests.append(pip_args_digest)
125
126 if resolve_config.constraints_file:
127 args.append(f"--constraints={resolve_config.constraints_file.path}")
128 digests.append(resolve_config.constraints_file.digest)
129
130 input_digest = await Get(Digest, MergeDigests(digests))
131 return _PipArgsAndConstraintsSetup(resolve_config, tuple(args), input_digest)
132
133
134 @rule(desc="Generate Python lockfile", level=LogLevel.DEBUG)
135 async def generate_lockfile(
136 req: GeneratePythonLockfile,
137 generate_lockfiles_subsystem: GenerateLockfilesSubsystem,
138 python_setup: PythonSetup,
139 ) -> GenerateLockfileResult:
140 pip_args_setup = await _setup_pip_args_and_constraints_file(req.resolve_name)
141
142 header_delimiter = "//"
143 result = await Get(
144 ProcessResult,
145 PexCliProcess(
146 subcommand=("lock", "create"),
147 extra_args=(
148 "--output=lock.json",
149 "--no-emit-warnings",
150 # See https://github.com/pantsbuild/pants/issues/12458. For now, we always
151 # generate universal locks because they have the best compatibility. We may
152 # want to let users change this, as `style=strict` is safer.
153 "--style=universal",
154 "--pip-version",
155 python_setup.pip_version.value,
156 "--resolver-version",
157 "pip-2020-resolver",
158 # PEX files currently only run on Linux and Mac machines; so we hard code this
159                 # limit on lock universality to avoid issues locking due to irrelevant
160 # Windows-only dependency issues. See this Pex issue that originated from a
161 # Pants user issue presented in Slack:
162 # https://github.com/pantsbuild/pex/issues/1821
163 #
164 # At some point it will probably make sense to expose `--target-system` for
165 # configuration.
166 "--target-system",
167 "linux",
168 "--target-system",
169 "mac",
170 # This makes diffs more readable when lockfiles change.
171 "--indent=2",
172 *pip_args_setup.args,
173 *req.interpreter_constraints.generate_pex_arg_list(),
174 *req.requirements,
175 ),
176 additional_input_digest=pip_args_setup.digest,
177 output_files=("lock.json",),
178 description=f"Generate lockfile for {req.resolve_name}",
179 # Instead of caching lockfile generation with LMDB, we instead use the invalidation
180 # scheme from `lockfile_metadata.py` to check for stale/invalid lockfiles. This is
181 # necessary so that our invalidation is resilient to deleting LMDB or running on a
182 # new machine.
183 #
184 # We disable caching with LMDB so that when you generate a lockfile, you always get
185 # the most up-to-date snapshot of the world. This is generally desirable and also
186 # necessary to avoid an awkward edge case where different developers generate
187 # different lockfiles even when generating at the same time. See
188 # https://github.com/pantsbuild/pants/issues/12591.
189 cache_scope=ProcessCacheScope.PER_SESSION,
190 ),
191 )
192
193 initial_lockfile_digest_contents = await Get(DigestContents, Digest, result.output_digest)
194 metadata = PythonLockfileMetadata.new(
195 valid_for_interpreter_constraints=req.interpreter_constraints,
196 requirements={
197 PipRequirement.parse(
198 i,
199 description_of_origin=f"the lockfile {req.lockfile_dest} for the resolve {req.resolve_name}",
200 )
201 for i in req.requirements
202 },
203 manylinux=pip_args_setup.resolve_config.manylinux,
204 requirement_constraints=(
205 set(pip_args_setup.resolve_config.constraints_file.constraints)
206 if pip_args_setup.resolve_config.constraints_file
207 else set()
208 ),
209 only_binary=set(pip_args_setup.resolve_config.only_binary),
210 no_binary=set(pip_args_setup.resolve_config.no_binary),
211 )
212 lockfile_with_header = metadata.add_header_to_lockfile(
213 initial_lockfile_digest_contents[0].content,
214 regenerate_command=(
215 generate_lockfiles_subsystem.custom_command
216 or f"{bin_name()} generate-lockfiles --resolve={req.resolve_name}"
217 ),
218 delimeter=header_delimiter,
219 )
220 final_lockfile_digest = await Get(
221 Digest, CreateDigest([FileContent(req.lockfile_dest, lockfile_with_header)])
222 )
223
224 if req.diff:
225 diff = await _generate_python_lockfile_diff(
226 final_lockfile_digest, req.resolve_name, req.lockfile_dest
227 )
228 else:
229 diff = None
230
231 return GenerateLockfileResult(final_lockfile_digest, req.resolve_name, req.lockfile_dest, diff)
232
233
234 class RequestedPythonUserResolveNames(RequestedUserResolveNames):
235 pass
236
237
238 class KnownPythonUserResolveNamesRequest(KnownUserResolveNamesRequest):
239 pass
240
241
242 @rule
243 def determine_python_user_resolves(
244 _: KnownPythonUserResolveNamesRequest, python_setup: PythonSetup
245 ) -> KnownUserResolveNames:
246 return KnownUserResolveNames(
247 names=tuple(python_setup.resolves.keys()),
248 option_name="[python].resolves",
249 requested_resolve_names_cls=RequestedPythonUserResolveNames,
250 )
251
252
253 @rule
254 async def setup_user_lockfile_requests(
255 requested: RequestedPythonUserResolveNames, all_targets: AllTargets, python_setup: PythonSetup
256 ) -> UserGenerateLockfiles:
257 if not (python_setup.enable_resolves and python_setup.resolves_generate_lockfiles):
258 return UserGenerateLockfiles()
259
260 resolve_to_requirements_fields = defaultdict(set)
261 for tgt in all_targets:
262 if not tgt.has_fields((PythonRequirementResolveField, PythonRequirementsField)):
263 continue
264 resolve = tgt[PythonRequirementResolveField].normalized_value(python_setup)
265 resolve_to_requirements_fields[resolve].add(tgt[PythonRequirementsField])
266
267 return UserGenerateLockfiles(
268 GeneratePythonLockfile(
269 requirements=PexRequirements.req_strings_from_requirement_fields(
270 resolve_to_requirements_fields[resolve]
271 ),
272 interpreter_constraints=InterpreterConstraints(
273 python_setup.resolves_to_interpreter_constraints.get(
274 resolve, python_setup.interpreter_constraints
275 )
276 ),
277 resolve_name=resolve,
278 lockfile_dest=python_setup.resolves[resolve],
279 diff=False,
280 )
281 for resolve in requested
282 )
283
284
285 @dataclass(frozen=True)
286 class PythonSyntheticLockfileTargetsRequest(SyntheticTargetsRequest):
287 """Register the type used to create synthetic targets for Python lockfiles.
288
289 As the paths for all lockfiles are known up-front, we set the `path` field to
290 `SyntheticTargetsRequest.SINGLE_REQUEST_FOR_ALL_TARGETS` so that we get a single request for all
291 our synthetic targets rather than one request per directory.
292 """
293
294 path: str = SyntheticTargetsRequest.SINGLE_REQUEST_FOR_ALL_TARGETS
295
296
297 @rule
298 async def python_lockfile_synthetic_targets(
299 request: PythonSyntheticLockfileTargetsRequest,
300 python_setup: PythonSetup,
301 ) -> SyntheticAddressMaps:
302 if not python_setup.enable_synthetic_lockfiles:
303 return SyntheticAddressMaps()
304
305 resolves = [
306 (os.path.dirname(lockfile), os.path.basename(lockfile), name)
307 for name, lockfile in python_setup.resolves.items()
308 ]
309 return SyntheticAddressMaps.for_targets_request(
310 request,
311 [
312 (
313 os.path.join(spec_path, "BUILD.python-lockfiles"),
314 tuple(
315 TargetAdaptor("_lockfiles", name=name, sources=[lockfile])
316 for _, lockfile, name in lockfiles
317 ),
318 )
319 for spec_path, lockfiles in itertools.groupby(sorted(resolves), key=itemgetter(0))
320 ],
321 )
322
323
324 def rules():
325 return (
326 *collect_rules(),
327 UnionRule(GenerateLockfile, GeneratePythonLockfile),
328 UnionRule(KnownUserResolveNamesRequest, KnownPythonUserResolveNamesRequest),
329 UnionRule(RequestedUserResolveNames, RequestedPythonUserResolveNames),
330 UnionRule(SyntheticTargetsRequest, PythonSyntheticLockfileTargetsRequest),
331 )
332
[end of src/python/pants/backend/python/goals/lockfile.py]
[start of src/python/pants/backend/python/subsystems/setup.py]
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import enum
7 import logging
8 import os
9 from typing import Iterable, List, Optional, TypeVar, cast
10
11 from packaging.utils import canonicalize_name
12
13 from pants.base.deprecated import warn_or_error
14 from pants.core.goals.generate_lockfiles import UnrecognizedResolveNamesError
15 from pants.option.option_types import (
16 BoolOption,
17 DictOption,
18 EnumOption,
19 FileOption,
20 StrListOption,
21 StrOption,
22 )
23 from pants.option.subsystem import Subsystem
24 from pants.util.docutil import bin_name, doc_url
25 from pants.util.memo import memoized_method, memoized_property
26 from pants.util.strutil import softwrap
27
28 logger = logging.getLogger(__name__)
29
30
31 @enum.unique
32 class PipVersion(enum.Enum):
33 V20_3_4 = "20.3.4-patched"
34 V22_2_2 = "22.2.2"
35 V22_3 = "22.3"
36 V22_3_1 = "22.3.1"
37 V23_0 = "23.0"
38 V23_0_1 = "23.0.1"
39
40
41 @enum.unique
42 class InvalidLockfileBehavior(enum.Enum):
43 error = "error"
44 ignore = "ignore"
45 warn = "warn"
46
47
48 @enum.unique
49 class LockfileGenerator(enum.Enum):
50 PEX = "pex"
51 POETRY = "poetry"
52
53
54 RESOLVE_OPTION_KEY__DEFAULT = "__default__"
55
56 _T = TypeVar("_T")
57
58
59 class PythonSetup(Subsystem):
60 options_scope = "python"
61 help = "Options for Pants's Python backend."
62
63 default_interpreter_constraints = ["CPython>=3.7,<4"]
64 default_interpreter_universe = ["2.7", "3.5", "3.6", "3.7", "3.8", "3.9", "3.10", "3.11"]
65
66 _interpreter_constraints = StrListOption(
67 default=default_interpreter_constraints,
68 help=softwrap(
69 """
70 The Python interpreters your codebase is compatible with.
71
72 These constraints are used as the default value for the `interpreter_constraints`
73 field of Python targets.
74
75 Specify with requirement syntax, e.g. 'CPython>=2.7,<3' (A CPython interpreter with
76 version >=2.7 AND version <3) or 'PyPy' (A pypy interpreter of any version). Multiple
77 constraint strings will be ORed together.
78 """
79 ),
80 advanced=True,
81 metavar="<requirement>",
82 )
83
84 @memoized_property
85 def interpreter_constraints(self) -> tuple[str, ...]:
86 # TODO: In 2.17.0.dev0 we should set the default above to None and tweak the message here
87 # appropriately.
88 if self.options.is_default("interpreter_constraints"):
89 warn_or_error(
90 "2.17.0.dev0",
91 "the factory default interpreter constraints value",
92 softwrap(
93 f"""\
94 You're relying on the default interpreter constraints that ship with Pants
95 ({self._interpreter_constraints}). This default is deprecated, in favor of
96 explicitly specifying the interpreter versions your code is actually intended to
97 run against.
98
99 You specify interpreter constraints using the `interpreter_constraints` option in
100 the `[python]` section of pants.toml. We recommend constraining to a single interpreter
101 minor version if you can, e.g., `interpreter_constraints = ['==3.11.*']`, or at
102 least a small number of interpreter minor versions, e.g., `interpreter_constraints
103 = ['>=3.10,<3.12']`. See {doc_url("python-interpreter-compatibility")} for details.
104
105 Set explicit interpreter constraints now to get rid of this warning.
106 """
107 ),
108 )
109 return self._interpreter_constraints
110
111 interpreter_versions_universe = StrListOption(
112 default=default_interpreter_universe,
113 help=softwrap(
114 f"""
115 All known Python major/minor interpreter versions that may be used by either
116 your code or tools used by your code.
117
118 This is used by Pants to robustly handle interpreter constraints, such as knowing
119 when generating lockfiles which Python versions to check if your code is using.
120
121 This does not control which interpreter your code will use. Instead, to set your
122 interpreter constraints, update `[python].interpreter_constraints`, the
123 `interpreter_constraints` field, and relevant tool options like
124 `[isort].interpreter_constraints` to tell Pants which interpreters your code
125 actually uses. See {doc_url('python-interpreter-compatibility')}.
126
127 All elements must be the minor and major Python version, e.g. '2.7' or '3.10'. Do
128 not include the patch version.
129 """
130 ),
131 advanced=True,
132 )
133 enable_resolves = BoolOption(
134 default=False,
135 help=softwrap(
136 """
137 Set to true to enable lockfiles for user code. See `[python].resolves` for an
138 explanation of this feature.
139
140 This option is mutually exclusive with `[python].requirement_constraints`. We strongly
141 recommend using this option because it:
142
143 1. Uses `--hash` to validate that all downloaded files are expected, which reduces\
144 the risk of supply chain attacks.
145 2. Enforces that all transitive dependencies are in the lockfile, whereas\
146 constraints allow you to leave off dependencies. This ensures your build is more\
147 stable and reduces the risk of supply chain attacks.
148 3. Allows you to have multiple lockfiles in your repository.
149 """
150 ),
151 advanced=True,
152 mutually_exclusive_group="lockfile",
153 )
154 resolves = DictOption[str](
155 default={"python-default": "3rdparty/python/default.lock"},
156 help=softwrap(
157 f"""
158 A mapping of logical names to lockfile paths used in your project.
159
160 Many organizations only need a single resolve for their whole project, which is
161 a good default and often the simplest thing to do. However, you may need multiple
162 resolves, such as if you use two conflicting versions of a requirement in
163 your repository.
164
165 If you only need a single resolve, run `{bin_name()} generate-lockfiles` to
166 generate the lockfile.
167
168 If you need multiple resolves:
169
170 1. Via this option, define multiple resolve names and their lockfile paths.\
171 The names should be meaningful to your repository, such as `data-science` or\
172 `pants-plugins`.
173 2. Set the default with `[python].default_resolve`.
174 3. Update your `python_requirement` targets with the `resolve` field to declare which\
175 resolve they should be available in. They default to `[python].default_resolve`,\
176 so you only need to update targets that you want in non-default resolves.\
177 (Often you'll set this via the `python_requirements` or `poetry_requirements`\
178 target generators)
179 4. Run `{bin_name()} generate-lockfiles` to generate the lockfiles. If the results\
180 aren't what you'd expect, adjust the prior step.
181 5. Update any targets like `python_source` / `python_sources`,\
182 `python_test` / `python_tests`, and `pex_binary` which need to set a non-default\
183 resolve with the `resolve` field.
184
185 If a target can work with multiple resolves, you can either use the `parametrize`
186 mechanism or manually create a distinct target per resolve. See {doc_url("targets")}
187 for information about `parametrize`.
188
189 For example:
190
191 python_sources(
192 resolve=parametrize("data-science", "web-app"),
193 )
194
195 You can name the lockfile paths what you would like; Pants does not expect a
196 certain file extension or location.
197
198 Only applies if `[python].enable_resolves` is true.
199 """
200 ),
201 advanced=True,
202 )
203 default_resolve = StrOption(
204 default="python-default",
205 help=softwrap(
206 """
207 The default value used for the `resolve` field.
208
209 The name must be defined as a resolve in `[python].resolves`.
210 """
211 ),
212 advanced=True,
213 )
214 default_run_goal_use_sandbox = BoolOption(
215 default=True,
216 help=softwrap(
217 """
218 The default value used for the `run_goal_use_sandbox` field of Python targets. See the
219 relevant field for more details.
220 """
221 ),
222 )
223 pip_version = EnumOption(
224 default=PipVersion.V20_3_4,
225 help="Use this version of Pip for resolving requirements and generating lockfiles.",
226 advanced=True,
227 )
228 _resolves_to_interpreter_constraints = DictOption["list[str]"](
229 help=softwrap(
230 """
231 Override the interpreter constraints to use when generating a resolve's lockfile
232 with the `generate-lockfiles` goal.
233
234 By default, each resolve from `[python].resolves` will use your
235 global interpreter constraints set in `[python].interpreter_constraints`. With
236 this option, you can override each resolve to use certain interpreter
237 constraints, such as `{'data-science': ['==3.8.*']}`.
238
239 Warning: this does NOT impact the interpreter constraints used by targets within the
240                 resolve, which is instead set by the option `[python].interpreter_constraints` and the
241 `interpreter_constraints` field. It only impacts how the lockfile is generated.
242
243 Pants will validate that the interpreter constraints of your code using a
244 resolve are compatible with that resolve's own constraints. For example, if your
245 code is set to use ['==3.9.*'] via the `interpreter_constraints` field, but it's
246 using a resolve whose interpreter constraints are set to ['==3.7.*'], then
247 Pants will error explaining the incompatibility.
248
249 The keys must be defined as resolves in `[python].resolves`. To change the interpreter
250 constraints for tool lockfiles, change `[tool].interpreter_constraints`, e.g.
251 `[black].interpreter_constraints`; if the tool does not have that option, it determines
252 its interpreter constraints from your user code.
253 """
254 ),
255 advanced=True,
256 )
257 _resolves_to_constraints_file = DictOption[str](
258 help=softwrap(
259 f"""
260 When generating a resolve's lockfile, use a constraints file to pin the version of
261 certain requirements. This is particularly useful to pin the versions of transitive
262 dependencies of your direct requirements.
263
264 See https://pip.pypa.io/en/stable/user_guide/#constraints-files for more information on
265 the format of constraint files and how constraints are applied in Pex and pip.
266
267 Expects a dictionary of resolve names from `[python].resolves` and Python tools (e.g.
268 `black` and `pytest`) to file paths for
269 constraints files. For example,
270 `{{'data-science': '3rdparty/data-science-constraints.txt'}}`.
271 If a resolve is not set in the dictionary, it will not use a constraints file.
272
273 You can use the key `{RESOLVE_OPTION_KEY__DEFAULT}` to set a default value for all
274 resolves.
275 """
276 ),
277 advanced=True,
278 )
279 _resolves_to_no_binary = DictOption[List[str]](
280 help=softwrap(
281 f"""
282 When generating a resolve's lockfile, do not use binary packages (i.e. wheels) for
283 these 3rdparty project names.
284
285 Expects a dictionary of resolve names from `[python].resolves` and Python tools (e.g.
286 `black` and `pytest`) to lists of project names. For example,
287 `{{'data-science': ['requests', 'numpy']}}`. If a resolve is not set in the dictionary,
288 it will have no restrictions on binary packages.
289
290 You can use the key `{RESOLVE_OPTION_KEY__DEFAULT}` to set a default value for all
291 resolves.
292
293 For each resolve, you can also use the value `:all:` to disable all binary packages:
294 `{{'data-science': [':all:']}}`.
295
296 Note that some packages are tricky to compile and may fail to install when this option
297 is used on them. See https://pip.pypa.io/en/stable/cli/pip_install/#install-no-binary
298 for details.
299 """
300 ),
301 advanced=True,
302 )
303 _resolves_to_only_binary = DictOption[List[str]](
304 help=softwrap(
305 f"""
306 When generating a resolve's lockfile, do not use source packages (i.e. sdists) for
307 these 3rdparty project names, e.g `['django', 'requests']`.
308
309 Expects a dictionary of resolve names from `[python].resolves` and Python tools (e.g.
310 `black` and `pytest`) to lists of project names. For example,
311 `{{'data-science': ['requests', 'numpy']}}`. If a resolve is not set in the dictionary,
312 it will have no restrictions on source packages.
313
314 You can use the key `{RESOLVE_OPTION_KEY__DEFAULT}` to set a default value for all
315 resolves.
316
317 For each resolve you can use the value `:all:` to disable all source packages:
318 `{{'data-science': [':all:']}}`.
319
320 Packages without binary distributions will fail to install when this option is used on
321 them. See https://pip.pypa.io/en/stable/cli/pip_install/#install-only-binary for
322 details.
323 """
324 ),
325 advanced=True,
326 )
327 invalid_lockfile_behavior = EnumOption(
328 default=InvalidLockfileBehavior.error,
329 help=softwrap(
330 """
331 The behavior when a lockfile has requirements or interpreter constraints that are
332 not compatible with what the current build is using.
333
334 We recommend keeping the default of `error` for CI builds.
335
336             Note that `warn` will still expect a Pants lockfile header; it just won't error if
337 the lockfile is stale and should be regenerated.
338
339 Use `ignore` to avoid needing a lockfile header at all, e.g. if you are manually
340 managing lockfiles rather than using the `generate-lockfiles` goal.
341 """
342 ),
343 advanced=True,
344 )
345 resolves_generate_lockfiles = BoolOption(
346 default=True,
347 help=softwrap(
348 """
349 If False, Pants will not attempt to generate lockfiles for `[python].resolves` when
350 running the `generate-lockfiles` goal.
351
352 This is intended to allow you to manually generate lockfiles for your own code,
353 rather than using Pex lockfiles. For example, when adopting Pants in a project already
354 using Poetry, you can use `poetry export --dev` to create a requirements.txt-style
355 lockfile understood by Pants, then point `[python].resolves` to the file.
356
357 If you set this to False, Pants will not attempt to validate the metadata headers
358 for your user lockfiles. This is useful so that you can keep
359 `[python].invalid_lockfile_behavior` to `error` or `warn` if you'd like so that tool
360 lockfiles continue to be validated, while user lockfiles are skipped.
361
362 Warning: it will likely be slower to install manually generated user lockfiles than Pex
363 ones because Pants cannot as efficiently extract the subset of requirements used for a
364 particular task. See the option `[python].run_against_entire_lockfile`.
365 """
366 ),
367 advanced=True,
368 )
369 run_against_entire_lockfile = BoolOption(
370 default=False,
371 help=softwrap(
372 """
373 If enabled, when running binaries, tests, and repls, Pants will use the entire
374 lockfile file instead of just the relevant subset.
375
376 If you are using Pex lockfiles, we generally do not recommend this. You will already
377 get similar performance benefits to this option, without the downsides.
378
379 Otherwise, this option can improve
380 performance and reduce cache size. But it has two consequences: 1) All cached test
381 results will be invalidated if any requirement in the lockfile changes, rather
382 than just those that depend on the changed requirement. 2) Requirements unneeded
383 by a test/run/repl will be present on the sys.path, which might in rare cases
384 cause their behavior to change.
385
386 This option does not affect packaging deployable artifacts, such as
387 PEX files, wheels and cloud functions, which will still use just the exact
388 subset of requirements needed.
389 """
390 ),
391 advanced=True,
392 )
393
394 __constraints_deprecation_msg = softwrap(
395 f"""
396 We encourage instead migrating to `[python].enable_resolves` and `[python].resolves`,
397 which is an improvement over this option. The `[python].resolves` feature ensures that
398 your lockfiles are fully comprehensive, i.e. include all transitive dependencies;
399 uses hashes for better supply chain security; and supports advanced features like VCS
400 and local requirements, along with options `[python].resolves_to_only_binary`.
401
402 To migrate, stop setting `[python].requirement_constraints` and
403 `[python].resolve_all_constraints`, and instead set `[python].enable_resolves` to
404 `true`. Then, run `{bin_name()} generate-lockfiles`.
405 """
406 )
407 requirement_constraints = FileOption(
408 default=None,
409 help=softwrap(
410 """
411 When resolving third-party requirements for your own code (vs. tools you run),
412 use this constraints file to determine which versions to use.
413
414 Mutually exclusive with `[python].enable_resolves`, which we generally recommend as an
415 improvement over constraints file.
416
417 See https://pip.pypa.io/en/stable/user_guide/#constraints-files for more
418 information on the format of constraint files and how constraints are applied in
419 Pex and pip.
420
421 This only applies when resolving user requirements, rather than tools you run
422 like Black and Pytest. To constrain tools, set `[tool].lockfile`, e.g.
423 `[black].lockfile`.
424 """
425 ),
426 advanced=True,
427 mutually_exclusive_group="lockfile",
428 removal_version="3.0.0.dev0",
429 removal_hint=__constraints_deprecation_msg,
430 )
431 _resolve_all_constraints = BoolOption(
432 default=True,
433 help=softwrap(
434 """
435             (Only relevant when using `[python].requirement_constraints`.) If enabled, when
436 resolving requirements, Pants will first resolve your entire
437 constraints file as a single global resolve. Then, if the code uses a subset of
438 your constraints file, Pants will extract the relevant requirements from that
439 global resolve so that only what's actually needed gets used. If disabled, Pants
440 will not use a global resolve and will resolve each subset of your requirements
441 independently.
442
443 Usually this option should be enabled because it can result in far fewer resolves.
444 """
445 ),
446 advanced=True,
447 removal_version="3.0.0.dev0",
448 removal_hint=__constraints_deprecation_msg,
449 )
450 resolver_manylinux = StrOption(
451 default="manylinux2014",
452 help=softwrap(
453 """
454 Whether to allow resolution of manylinux wheels when resolving requirements for
455 foreign linux platforms. The value should be a manylinux platform upper bound,
456 e.g.: 'manylinux2010', or else the string 'no' to disallow.
457 """
458 ),
459 advanced=True,
460 )
461
462 tailor_source_targets = BoolOption(
463 default=True,
464 help=softwrap(
465 """
466 If true, add `python_sources`, `python_tests`, and `python_test_utils` targets with
467 the `tailor` goal."""
468 ),
469 advanced=True,
470 )
471 tailor_ignore_empty_init_files = BoolOption(
472 "--tailor-ignore-empty-init-files",
473 default=True,
474 help=softwrap(
475 """
476 If true, don't add `python_sources` targets for `__init__.py` files that are both empty
477 and where there are no other Python files in the directory.
478
479 Empty and solitary `__init__.py` files usually exist as import scaffolding rather than
480 true library code, so it can be noisy to add BUILD files.
481
482 Even if this option is set to true, Pants will still ensure the empty `__init__.py`
483 files are included in the sandbox when running processes.
484
485 If you set to false, you may also want to set `[python-infer].init_files = "always"`.
486 """
487 ),
488 advanced=True,
489 )
490 tailor_requirements_targets = BoolOption(
491 default=True,
492 help=softwrap(
493 """
494 If true, add `python_requirements`, `poetry_requirements`, and `pipenv_requirements`
495 target generators with the `tailor` goal.
496
497 `python_requirements` targets are added for any file that matches the pattern
498 `*requirements*.txt`. You will need to manually add `python_requirements` for different
499 file names like `reqs.txt`.
500
501 `poetry_requirements` targets are added for `pyproject.toml` files with `[tool.poetry`
502 in them.
503 """
504 ),
505 advanced=True,
506 )
507 tailor_pex_binary_targets = BoolOption(
508 default=False,
509 help=softwrap(
510 """
511 If true, add `pex_binary` targets for Python files named `__main__.py` or with a
512 `__main__` clause with the `tailor` goal.
513 """
514 ),
515 advanced=True,
516 )
517 tailor_py_typed_targets = BoolOption(
518 default=True,
519 help=softwrap(
520 """
521 If true, add `resource` targets for marker files named `py.typed` with the `tailor` goal.
522 """
523 ),
524 advanced=True,
525 )
526 macos_big_sur_compatibility = BoolOption(
527 default=False,
528 help=softwrap(
529 """
530 If set, and if running on MacOS Big Sur, use macosx_10_16 as the platform
531 when building wheels. Otherwise, the default of macosx_11_0 will be used.
532 This may be required for pip to be able to install the resulting distribution
533 on Big Sur.
534 """
535 ),
536 advanced=True,
537 )
538 enable_lockfile_targets = BoolOption(
539 default=True,
540 help=softwrap(
541 """
542 Create targets for all Python lockfiles defined in `[python].resolves`.
543
544 The lockfile targets will then be used as dependencies to the `python_requirement`
545 targets that use them, invalidating source targets per resolve when the lockfile
546 changes.
547
548             If another target's address is in conflict with the created lockfile target, it will
549 shadow the lockfile target and it will not be available as a dependency for any
550 `python_requirement` targets.
551 """
552 ),
553 advanced=True,
554 )
555 repl_history = BoolOption(
556 default=True,
557 help="Whether to use the standard Python command history file when running a repl.",
558 )
559
560 @property
561 def enable_synthetic_lockfiles(self) -> bool:
562 return self.enable_resolves and self.enable_lockfile_targets
563
564 @memoized_property
565 def resolves_to_interpreter_constraints(self) -> dict[str, tuple[str, ...]]:
566 result = {}
567 unrecognized_resolves = []
568 for resolve, ics in self._resolves_to_interpreter_constraints.items():
569 if resolve not in self.resolves:
570 unrecognized_resolves.append(resolve)
571 result[resolve] = tuple(ics)
572 if unrecognized_resolves:
573 raise UnrecognizedResolveNamesError(
574 unrecognized_resolves,
575 self.resolves.keys(),
576 description_of_origin="the option `[python].resolves_to_interpreter_constraints`",
577 )
578 return result
579
580 def _resolves_to_option_helper(
581 self,
582 option_value: dict[str, _T],
583 option_name: str,
584 all_python_tool_resolve_names: tuple[str, ...],
585 ) -> dict[str, _T]:
586 all_valid_resolves = {*self.resolves, *all_python_tool_resolve_names}
587 unrecognized_resolves = set(option_value.keys()) - {
588 RESOLVE_OPTION_KEY__DEFAULT,
589 *all_valid_resolves,
590 }
591 if unrecognized_resolves:
592 raise UnrecognizedResolveNamesError(
593 sorted(unrecognized_resolves),
594 {*all_valid_resolves, RESOLVE_OPTION_KEY__DEFAULT},
595 description_of_origin=f"the option `[python].{option_name}`",
596 )
597 default_val = option_value.get(RESOLVE_OPTION_KEY__DEFAULT)
598 if not default_val:
599 return option_value
600 return {resolve: option_value.get(resolve, default_val) for resolve in all_valid_resolves}
601
602 @memoized_method
603 def resolves_to_constraints_file(
604 self, all_python_tool_resolve_names: tuple[str, ...]
605 ) -> dict[str, str]:
606 return self._resolves_to_option_helper(
607 self._resolves_to_constraints_file,
608 "resolves_to_constraints_file",
609 all_python_tool_resolve_names,
610 )
611
612 @memoized_method
613 def resolves_to_no_binary(
614 self, all_python_tool_resolve_names: tuple[str, ...]
615 ) -> dict[str, list[str]]:
616 return {
617 resolve: [canonicalize_name(v) for v in vals]
618 for resolve, vals in self._resolves_to_option_helper(
619 self._resolves_to_no_binary,
620 "resolves_to_no_binary",
621 all_python_tool_resolve_names,
622 ).items()
623 }
624
625 @memoized_method
626 def resolves_to_only_binary(
627 self, all_python_tool_resolve_names: tuple[str, ...]
628 ) -> dict[str, list[str]]:
629 return {
630 resolve: sorted([canonicalize_name(v) for v in vals])
631 for resolve, vals in self._resolves_to_option_helper(
632 self._resolves_to_only_binary,
633 "resolves_to_only_binary",
634 all_python_tool_resolve_names,
635 ).items()
636 }
637
638 @property
639 def manylinux(self) -> str | None:
640 manylinux = cast(Optional[str], self.resolver_manylinux)
641 if manylinux is None or manylinux.lower() in ("false", "no", "none"):
642 return None
643 return manylinux
644
645 @property
646 def resolve_all_constraints(self) -> bool:
647 if (
648 self._resolve_all_constraints
649 and not self.options.is_default("resolve_all_constraints")
650 and not self.requirement_constraints
651 ):
652 raise ValueError(
653 softwrap(
654 """
655 `[python].resolve_all_constraints` is enabled, so
656 `[python].requirement_constraints` must also be set.
657 """
658 )
659 )
660 return self._resolve_all_constraints
661
662 @property
663 def scratch_dir(self):
664 return os.path.join(self.options.pants_workdir, *self.options_scope.split("."))
665
666 def compatibility_or_constraints(self, compatibility: Iterable[str] | None) -> tuple[str, ...]:
667 """Return either the given `compatibility` field or the global interpreter constraints.
668
669 If interpreter constraints are supplied by the CLI flag, return those only.
670 """
671 if self.options.is_flagged("interpreter_constraints"):
672 return self.interpreter_constraints
673 return tuple(compatibility or self.interpreter_constraints)
674
675 def compatibilities_or_constraints(
676 self, compatibilities: Iterable[Iterable[str] | None]
677 ) -> tuple[str, ...]:
678 return tuple(
679 constraint
680 for compatibility in compatibilities
681 for constraint in self.compatibility_or_constraints(compatibility)
682 )
683
[end of src/python/pants/backend/python/subsystems/setup.py]
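Several of the `[python].resolves_to_*` options above share the `__default__` key handling implemented in `_resolves_to_option_helper`. Here is a standalone sketch of that expansion (no Pants imports; the resolve names below are invented for illustration):

```python
RESOLVE_OPTION_KEY__DEFAULT = "__default__"

def expand_default(option_value, all_valid_resolves):
    """Approximate the __default__ handling in PythonSetup._resolves_to_option_helper."""
    unrecognized = set(option_value) - {RESOLVE_OPTION_KEY__DEFAULT, *all_valid_resolves}
    if unrecognized:
        raise ValueError(f"Unrecognized resolve names: {sorted(unrecognized)}")
    default_val = option_value.get(RESOLVE_OPTION_KEY__DEFAULT)
    if not default_val:
        # No default given: only explicitly listed resolves carry a value.
        return option_value
    # A default is given: every known resolve falls back to it unless overridden.
    return {resolve: option_value.get(resolve, default_val) for resolve in all_valid_resolves}

# One explicit constraints file plus a default for everything else.
print(expand_default(
    {"__default__": "3rdparty/constraints.txt", "data-science": "3rdparty/ds-constraints.txt"},
    {"python-default", "data-science", "pytest"},
))
# data-science keeps its own file; python-default and pytest fall back to the default.
```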
[start of src/python/pants/backend/python/util_rules/lockfile_diff.py]
1 # Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import itertools
7 import json
8 import logging
9 from dataclasses import dataclass
10 from typing import TYPE_CHECKING, Any, Mapping
11
12 from packaging.version import parse
13
14 if TYPE_CHECKING:
15 # We seem to get a version of `packaging` that doesn't have `LegacyVersion` when running
16     # pytest.
17 from packaging.version import LegacyVersion, Version
18
19 from pants.backend.python.util_rules.pex_requirements import (
20 LoadedLockfile,
21 LoadedLockfileRequest,
22 Lockfile,
23 strip_comments_from_pex_json_lockfile,
24 )
25 from pants.base.exceptions import EngineError
26 from pants.core.goals.generate_lockfiles import LockfileDiff, LockfilePackages, PackageName
27 from pants.engine.fs import Digest, DigestContents
28 from pants.engine.rules import Get
29 from pants.util.frozendict import FrozenDict
30
31 logger = logging.getLogger(__name__)
32
33
34 @dataclass(frozen=True, order=True)
35 class PythonRequirementVersion:
36 _parsed: LegacyVersion | Version
37
38 @classmethod
39 def parse(cls, version: str) -> PythonRequirementVersion:
40 return cls(parse(version))
41
42 def __str__(self) -> str:
43 return str(self._parsed)
44
45 def __getattr__(self, key: str) -> Any:
46 return getattr(self._parsed, key)
47
48
49 def _pex_lockfile_requirements(
50 lockfile_data: Mapping[str, Any] | None, path: str | None = None
51 ) -> LockfilePackages:
52 if not lockfile_data:
53 return LockfilePackages({})
54
55 try:
56 # Setup generators
57 locked_resolves = (
58 (
59 (PackageName(r["project_name"]), PythonRequirementVersion.parse(r["version"]))
60 for r in resolve["locked_requirements"]
61 )
62 for resolve in lockfile_data["locked_resolves"]
63 )
64 requirements = dict(itertools.chain.from_iterable(locked_resolves))
65 except KeyError as e:
66 if path:
67 logger.warning(f"{path}: Failed to parse lockfile: {e}")
68
69 requirements = {}
70
71 return LockfilePackages(requirements)
72
73
74 async def _parse_lockfile(lockfile: Lockfile) -> FrozenDict[str, Any] | None:
75 try:
76 loaded = await Get(
77 LoadedLockfile,
78 LoadedLockfileRequest(lockfile),
79 )
80 fc = await Get(DigestContents, Digest, loaded.lockfile_digest)
81 parsed = await _parse_lockfile_content(next(iter(fc)).content, lockfile.url)
82 return parsed
83 except EngineError:
84 # May fail in case the file doesn't exist, which is expected when parsing the "old" lockfile
85 # the first time a new lockfile is generated.
86 return None
87
88
89 async def _parse_lockfile_content(content: bytes, url: str) -> FrozenDict[str, Any] | None:
90 try:
91 parsed_lockfile = json.loads(content)
92 return FrozenDict.deep_freeze(parsed_lockfile)
93 except json.JSONDecodeError as e:
94 logger.debug(f"{url}: Failed to parse lockfile contents: {e}")
95 return None
96
97
98 async def _generate_python_lockfile_diff(
99 digest: Digest, resolve_name: str, path: str
100 ) -> LockfileDiff:
101 new_digest_contents = await Get(DigestContents, Digest, digest)
102 new_content = next(c for c in new_digest_contents if c.path == path).content
103 new_content = strip_comments_from_pex_json_lockfile(new_content)
104 new = await _parse_lockfile_content(new_content, path)
105 old = await _parse_lockfile(
106 Lockfile(
107 url=path,
108 url_description_of_origin="existing lockfile",
109 resolve_name=resolve_name,
110 )
111 )
112 return LockfileDiff.create(
113 path=path,
114 resolve_name=resolve_name,
115 old=_pex_lockfile_requirements(old),
116 new=_pex_lockfile_requirements(new, path),
117 )
118
[end of src/python/pants/backend/python/util_rules/lockfile_diff.py]
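For a sense of what `_pex_lockfile_requirements` pulls out of a Pex JSON lockfile, here is a rough, dependency-light approximation (the lockfile dict below is a heavily trimmed fake, and `packaging` is the only third-party requirement):

```python
import itertools
from packaging.version import parse  # third-party; install `packaging` if missing

# Heavily trimmed fake of a Pex JSON lockfile: only the keys the diff code reads.
fake_lockfile = {
    "locked_resolves": [
        {
            "locked_requirements": [
                {"project_name": "requests", "version": "2.28.2"},
                {"project_name": "urllib3", "version": "1.26.14"},
            ]
        }
    ]
}

# Mirror the generator structure above: flatten every locked resolve into a
# {project name: parsed version} mapping, which is what the lockfile diff compares.
locked = (
    ((r["project_name"], parse(r["version"])) for r in resolve["locked_requirements"])
    for resolve in fake_lockfile["locked_resolves"]
)
requirements = dict(itertools.chain.from_iterable(locked))
print(requirements)  # {'requests': <Version('2.28.2')>, 'urllib3': <Version('1.26.14')>}
```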
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
ddecc6ee144d5abdd430d1f90ba4a8c4cedb6310
|
Cannot create a new lockfile if unmatched_build_file_globs = "error"
In https://github.com/pantsbuild/pants/pull/17097 we introduced a synthesized `_lockfile` target to wrap a generated lockfile. This target is synthesized for each user resolve, even if the corresponding lockfile hasn't been generated yet.
But then running `pants generate-lockfiles --resolve=my-new-resolve` fails because the synthetic target's sources glob doesn't match anything, and the command that would generate the lockfile and solve the problem is exactly and frustratingly the one that fails...
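A toy, stdlib-only sketch of that chicken-and-egg (this is not Pants' actual glob machinery, just an illustration of the policy): when a missing lockfile's sources glob is treated as an error, the command that would create the lockfile fails before it can run, whereas an "ignore" policy for the synthesized target lets it proceed.

```python
import glob

def expand_sources(pattern, error_behavior):
    """Toy stand-in for BUILD-file glob expansion under an unmatched-globs policy."""
    matches = glob.glob(pattern)
    if not matches and error_behavior == "error":
        raise FileNotFoundError(f"Unmatched glob: {pattern!r}")
    return matches

# Before the first `generate-lockfiles` run, the lockfile does not exist yet.
new_lockfile = "3rdparty/python/my-new-resolve.lock"  # hypothetical path

try:
    expand_sources(new_lockfile, error_behavior="error")  # what the report runs into
except FileNotFoundError as e:
    print("fails before the lockfile can be generated:", e)

print(expand_sources(new_lockfile, error_behavior="ignore"))  # [] -- lets the goal proceed
```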
|
2023-03-08T01:33:42Z
|
<patch>
diff --git a/src/python/pants/backend/python/dependency_inference/rules.py b/src/python/pants/backend/python/dependency_inference/rules.py
--- a/src/python/pants/backend/python/dependency_inference/rules.py
+++ b/src/python/pants/backend/python/dependency_inference/rules.py
@@ -42,6 +42,7 @@
from pants.backend.python.util_rules import ancestor_files, pex
from pants.backend.python.util_rules.ancestor_files import AncestorFiles, AncestorFilesRequest
from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints
+from pants.base.glob_match_error_behavior import GlobMatchErrorBehavior
from pants.core import target_types
from pants.core.target_types import AllAssetTargets, AllAssetTargetsByPath, AllAssetTargetsRequest
from pants.core.util_rules import stripped_source_files
@@ -57,7 +58,6 @@
Targets,
)
from pants.engine.unions import UnionRule
-from pants.option.global_options import OwnersNotFoundBehavior
from pants.source.source_root import SourceRoot, SourceRootRequest
from pants.util.docutil import doc_url
from pants.util.strutil import bullet_list, softwrap
@@ -564,7 +564,7 @@ async def infer_python_conftest_dependencies(
owners = await MultiGet(
# NB: Because conftest.py files effectively always have content, we require an
# owning target.
- Get(Owners, OwnersRequest((f,), OwnersNotFoundBehavior.error))
+ Get(Owners, OwnersRequest((f,), GlobMatchErrorBehavior.error))
for f in conftest_files.snapshot.files
)
diff --git a/src/python/pants/core/target_types.py b/src/python/pants/core/target_types.py
--- a/src/python/pants/core/target_types.py
+++ b/src/python/pants/core/target_types.py
@@ -30,6 +30,7 @@
FileDigest,
FileEntry,
MergeDigests,
+ PathGlobs,
RemovePrefix,
Snapshot,
)
@@ -48,6 +49,7 @@
HydrateSourcesRequest,
InvalidFieldTypeException,
MultipleSourcesField,
+ OptionalSingleSourceField,
OverridesField,
SingleSourceField,
SourcesField,
@@ -60,6 +62,7 @@
generate_multiple_sources_field_help_message,
)
from pants.engine.unions import UnionRule
+from pants.option.global_options import UnmatchedBuildFileGlobs
from pants.util.docutil import bin_name
from pants.util.frozendict import FrozenDict
from pants.util.logging import LogLevel
@@ -858,9 +861,19 @@ async def package_archive_target(field_set: ArchiveFieldSet) -> BuiltPackage:
# -----------------------------------------------------------------------------------------------
-class LockfileSourceField(SingleSourceField):
+class LockfileSourceField(OptionalSingleSourceField):
+ """Source field for synthesized `_lockfile` targets.
+
+ It is special in that it always ignores any missing files, regardless of the global
+ `--unmatched-build-file-globs` option.
+ """
+
uses_source_roots = False
required = True
+ value: str
+
+ def path_globs(self, unmatched_build_file_globs: UnmatchedBuildFileGlobs) -> PathGlobs: # type: ignore[misc]
+ return super().path_globs(UnmatchedBuildFileGlobs.ignore())
class LockfileDependenciesField(Dependencies):
diff --git a/src/python/pants/engine/internals/graph.py b/src/python/pants/engine/internals/graph.py
--- a/src/python/pants/engine/internals/graph.py
+++ b/src/python/pants/engine/internals/graph.py
@@ -88,11 +88,7 @@
_generate_file_level_targets,
)
from pants.engine.unions import UnionMembership, UnionRule
-from pants.option.global_options import (
- GlobalOptions,
- OwnersNotFoundBehavior,
- UnmatchedBuildFileGlobs,
-)
+from pants.option.global_options import GlobalOptions, UnmatchedBuildFileGlobs
from pants.util.docutil import bin_name, doc_url
from pants.util.frozendict import FrozenDict
from pants.util.logging import LogLevel
@@ -741,7 +737,7 @@ async def coarsened_targets(
def _log_or_raise_unmatched_owners(
file_paths: Sequence[PurePath],
- owners_not_found_behavior: OwnersNotFoundBehavior,
+ owners_not_found_behavior: GlobMatchErrorBehavior,
ignore_option: str | None = None,
) -> None:
option_msg = (
@@ -767,7 +763,7 @@ def _log_or_raise_unmatched_owners(
f"{doc_url('create-initial-build-files')}.{option_msg}"
)
- if owners_not_found_behavior == OwnersNotFoundBehavior.warn:
+ if owners_not_found_behavior == GlobMatchErrorBehavior.warn:
logger.warning(msg)
else:
raise ResolveError(msg)
@@ -782,7 +778,7 @@ class OwnersRequest:
"""
sources: tuple[str, ...]
- owners_not_found_behavior: OwnersNotFoundBehavior = OwnersNotFoundBehavior.ignore
+ owners_not_found_behavior: GlobMatchErrorBehavior = GlobMatchErrorBehavior.ignore
filter_by_global_options: bool = False
match_if_owning_build_file_included_in_sources: bool = False
@@ -890,7 +886,7 @@ def create_live_and_deleted_gets(
if (
unmatched_sources
- and owners_request.owners_not_found_behavior != OwnersNotFoundBehavior.ignore
+ and owners_request.owners_not_found_behavior != GlobMatchErrorBehavior.ignore
):
_log_or_raise_unmatched_owners(
[PurePath(path) for path in unmatched_sources], owners_request.owners_not_found_behavior
@@ -908,7 +904,7 @@ def create_live_and_deleted_gets(
def extract_unmatched_build_file_globs(
global_options: GlobalOptions,
) -> UnmatchedBuildFileGlobs:
- return global_options.unmatched_build_file_globs
+ return UnmatchedBuildFileGlobs(global_options.unmatched_build_file_globs)
class AmbiguousCodegenImplementationsException(Exception):
diff --git a/src/python/pants/engine/target.py b/src/python/pants/engine/target.py
--- a/src/python/pants/engine/target.py
+++ b/src/python/pants/engine/target.py
@@ -2073,7 +2073,7 @@ def path_globs(self, unmatched_build_file_globs: UnmatchedBuildFileGlobs) -> Pat
# Use fields default error behavior if defined, if we use default globs else the provided
# error behavior.
error_behavior = (
- unmatched_build_file_globs.to_glob_match_error_behavior()
+ unmatched_build_file_globs.error_behavior
if conjunction == GlobExpansionConjunction.all_match
or self.default_glob_match_error_behavior is None
else self.default_glob_match_error_behavior
@@ -2888,7 +2888,7 @@ def relativize_glob(glob: str) -> str:
return tuple(
PathGlobs(
[relativize_glob(glob)],
- glob_match_error_behavior=unmatched_build_file_globs.to_glob_match_error_behavior(),
+ glob_match_error_behavior=unmatched_build_file_globs.error_behavior,
description_of_origin=f"the `overrides` field for {address}",
)
for glob in overrides_keys
diff --git a/src/python/pants/init/specs_calculator.py b/src/python/pants/init/specs_calculator.py
--- a/src/python/pants/init/specs_calculator.py
+++ b/src/python/pants/init/specs_calculator.py
@@ -32,7 +32,7 @@ def calculate_specs(
) -> Specs:
"""Determine the specs for a given Pants run."""
global_options = options.for_global_scope()
- unmatched_cli_globs = global_options.unmatched_cli_globs.to_glob_match_error_behavior()
+ unmatched_cli_globs = global_options.unmatched_cli_globs
specs = SpecsParser().parse_specs(
options.specs,
description_of_origin="CLI arguments",
diff --git a/src/python/pants/option/global_options.py b/src/python/pants/option/global_options.py
--- a/src/python/pants/option/global_options.py
+++ b/src/python/pants/option/global_options.py
@@ -13,7 +13,7 @@
from datetime import datetime
from enum import Enum
from pathlib import Path, PurePath
-from typing import Any, Callable, Type, cast
+from typing import Any, Callable, Type, TypeVar, cast
from pants.base.build_environment import (
get_buildroot,
@@ -73,36 +73,40 @@ class DynamicUIRenderer(Enum):
experimental_prodash = "experimental-prodash"
-class UnmatchedBuildFileGlobs(Enum):
- """What to do when globs do not match in BUILD files."""
+_G = TypeVar("_G", bound="_GlobMatchErrorBehaviorOptionBase")
- warn = "warn"
- error = "error"
- def to_glob_match_error_behavior(self) -> GlobMatchErrorBehavior:
- return GlobMatchErrorBehavior(self.value)
+@dataclass(frozen=True)
+class _GlobMatchErrorBehaviorOptionBase:
+ """This class exists to have dedicated types per global option of the `GlobMatchErrorBehavior`
+ so we can extract the relevant option in a rule to limit the scope of downstream rules to avoid
+ depending on the entire global options data."""
+ error_behavior: GlobMatchErrorBehavior
-class UnmatchedCliGlobs(Enum):
- """What to do when globs do not match in CLI args."""
+ @classmethod
+ def ignore(cls: type[_G]) -> _G:
+ return cls(GlobMatchErrorBehavior.ignore)
- ignore = "ignore"
- warn = "warn"
- error = "error"
+ @classmethod
+ def warn(cls: type[_G]) -> _G:
+ return cls(GlobMatchErrorBehavior.warn)
+
+ @classmethod
+ def error(cls: type[_G]) -> _G:
+ return cls(GlobMatchErrorBehavior.error)
- def to_glob_match_error_behavior(self) -> GlobMatchErrorBehavior:
- return GlobMatchErrorBehavior(self.value)
+class UnmatchedBuildFileGlobs(_GlobMatchErrorBehaviorOptionBase):
+ """What to do when globs do not match in BUILD files."""
-class OwnersNotFoundBehavior(Enum):
- """What to do when a file argument cannot be mapped to an owning target."""
- ignore = "ignore"
- warn = "warn"
- error = "error"
+class UnmatchedCliGlobs(_GlobMatchErrorBehaviorOptionBase):
+ """What to do when globs do not match in CLI args."""
+
- def to_glob_match_error_behavior(self) -> GlobMatchErrorBehavior:
- return GlobMatchErrorBehavior(self.value)
+class OwnersNotFoundBehavior(_GlobMatchErrorBehaviorOptionBase):
+ """What to do when a file argument cannot be mapped to an owning target."""
@enum.unique
@@ -1577,7 +1581,7 @@ class GlobalOptions(BootstrapOptions, Subsystem):
)
unmatched_build_file_globs = EnumOption(
- default=UnmatchedBuildFileGlobs.warn,
+ default=GlobMatchErrorBehavior.warn,
help=softwrap(
"""
What to do when files and globs specified in BUILD files, such as in the
@@ -1591,7 +1595,7 @@ class GlobalOptions(BootstrapOptions, Subsystem):
advanced=True,
)
unmatched_cli_globs = EnumOption(
- default=UnmatchedCliGlobs.error,
+ default=GlobMatchErrorBehavior.error,
help=softwrap(
"""
What to do when command line arguments, e.g. files and globs like `dir::`, cannot be
</patch>
|
[]
|
[]
| ||||
pypa__pip-8678
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'pip wheel --no-deps' doesn't work with extras on the new resolver
**What did you want to do?**
`pip wheel --use-feature=2020-resolver --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/` should download `splitio_client-8.2.0-py2.py3-none-any.whl` like `pip wheel --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/` does.
On its own this is an artificial case, but I broke it out of a 150-line requirements file that was failing, in order to isolate the issue. :)
**Output**
#### With --use-feature=2020-resolver
```console
$ pip wheel --use-feature=2020-resolver --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/
```
(no output)
#### Without --use-feature=2020-resolver
```console
$ pip wheel --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/
Processing /Users/andy/Library/Caches/pip/wheels/5a/9a/1e/de8c54b6448f1a2615e76d5d2e7395342f9d1370865d2c0566/splitio_client-8.2.0-py2.py3-none-any.whl
Saved ./wheels/splitio_client-8.2.0-py2.py3-none-any.whl
Skipping splitio-client, due to already being wheel.
```
</issue>
<code>
[start of README.rst]
1 pip - The Python Package Installer
2 ==================================
3
4 .. image:: https://img.shields.io/pypi/v/pip.svg
5 :target: https://pypi.org/project/pip/
6
7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest
8 :target: https://pip.pypa.io/en/latest
9
10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes.
11
12 Please take a look at our documentation for how to install and use pip:
13
14 * `Installation`_
15 * `Usage`_
16
17 We release updates regularly, with a new version every 3 months. Find more details in our documentation:
18
19 * `Release notes`_
20 * `Release process`_
21
22 In 2020, we're working on improvements to the heart of pip. Please `learn more and take our survey`_ to help us do it right.
23
24 If you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms:
25
26 * `Issue tracking`_
27 * `Discourse channel`_
28 * `User IRC`_
29
30 If you want to get involved, head over to GitHub to get the source code, look at our development documentation, and feel free to jump on the developer mailing lists and chat rooms:
31
32 * `GitHub page`_
33 * `Development documentation`_
34 * `Development mailing list`_
35 * `Development IRC`_
36
37 Code of Conduct
38 ---------------
39
40 Everyone interacting in the pip project's codebases, issue trackers, chat
41 rooms, and mailing lists is expected to follow the `PyPA Code of Conduct`_.
42
43 .. _package installer: https://packaging.python.org/guides/tool-recommendations/
44 .. _Python Package Index: https://pypi.org
45 .. _Installation: https://pip.pypa.io/en/stable/installing.html
46 .. _Usage: https://pip.pypa.io/en/stable/
47 .. _Release notes: https://pip.pypa.io/en/stable/news.html
48 .. _Release process: https://pip.pypa.io/en/latest/development/release-process/
49 .. _GitHub page: https://github.com/pypa/pip
50 .. _Development documentation: https://pip.pypa.io/en/latest/development
51 .. _learn more and take our survey: https://pyfound.blogspot.com/2020/03/new-pip-resolver-to-roll-out-this-year.html
52 .. _Issue tracking: https://github.com/pypa/pip/issues
53 .. _Discourse channel: https://discuss.python.org/c/packaging
54 .. _Development mailing list: https://mail.python.org/mailman3/lists/distutils-sig.python.org/
55 .. _User IRC: https://webchat.freenode.net/?channels=%23pypa
56 .. _Development IRC: https://webchat.freenode.net/?channels=%23pypa-dev
57 .. _PyPA Code of Conduct: https://www.pypa.io/en/latest/code-of-conduct/
58
[end of README.rst]
[start of noxfile.py]
1 """Automation using nox.
2 """
3
4 # The following comment should be removed at some point in the future.
5 # mypy: disallow-untyped-defs=False
6
7 import glob
8 import os
9 import shutil
10 import sys
11
12 import nox
13
14 sys.path.append(".")
15 from tools.automation import release # isort:skip # noqa
16 sys.path.pop()
17
18 nox.options.reuse_existing_virtualenvs = True
19 nox.options.sessions = ["lint"]
20
21 LOCATIONS = {
22 "common-wheels": "tests/data/common_wheels",
23 "protected-pip": "tools/tox_pip.py",
24 }
25 REQUIREMENTS = {
26 "docs": "tools/requirements/docs.txt",
27 "tests": "tools/requirements/tests.txt",
28 "common-wheels": "tools/requirements/tests-common_wheels.txt",
29 }
30
31 AUTHORS_FILE = "AUTHORS.txt"
32 VERSION_FILE = "src/pip/__init__.py"
33
34
35 def run_with_protected_pip(session, *arguments):
36 """Do a session.run("pip", *arguments), using a "protected" pip.
37
38 This invokes a wrapper script, that forwards calls to original virtualenv
39 (stable) version, and not the code being tested. This ensures pip being
40 used is not the code being tested.
41 """
42 env = {"VIRTUAL_ENV": session.virtualenv.location}
43
44 command = ("python", LOCATIONS["protected-pip"]) + arguments
45 kwargs = {"env": env, "silent": True}
46 session.run(*command, **kwargs)
47
48
49 def should_update_common_wheels():
50 # If the cache hasn't been created, create it.
51 if not os.path.exists(LOCATIONS["common-wheels"]):
52 return True
53
54 # If the requirements was updated after cache, we'll repopulate it.
55 cache_last_populated_at = os.path.getmtime(LOCATIONS["common-wheels"])
56 requirements_updated_at = os.path.getmtime(REQUIREMENTS["common-wheels"])
57 need_to_repopulate = requirements_updated_at > cache_last_populated_at
58
59 # Clear the stale cache.
60 if need_to_repopulate:
61 shutil.rmtree(LOCATIONS["common-wheels"], ignore_errors=True)
62
63 return need_to_repopulate
64
65
66 # -----------------------------------------------------------------------------
67 # Development Commands
68 # These are currently prototypes to evaluate whether we want to switch over
69 # completely to nox for all our automation. Contributors should prefer using
70 # `tox -e ...` until this note is removed.
71 # -----------------------------------------------------------------------------
72 @nox.session(python=["2.7", "3.5", "3.6", "3.7", "3.8", "pypy", "pypy3"])
73 def test(session):
74 # Get the common wheels.
75 if should_update_common_wheels():
76 run_with_protected_pip(
77 session,
78 "wheel",
79 "-w", LOCATIONS["common-wheels"],
80 "-r", REQUIREMENTS["common-wheels"],
81 )
82 else:
83 msg = (
84 "Re-using existing common-wheels at {}."
85 .format(LOCATIONS["common-wheels"])
86 )
87 session.log(msg)
88
89 # Build source distribution
90 sdist_dir = os.path.join(session.virtualenv.location, "sdist")
91 if os.path.exists(sdist_dir):
92 shutil.rmtree(sdist_dir, ignore_errors=True)
93 session.run(
94 "python", "setup.py", "sdist",
95 "--formats=zip", "--dist-dir", sdist_dir,
96 silent=True,
97 )
98 generated_files = os.listdir(sdist_dir)
99 assert len(generated_files) == 1
100 generated_sdist = os.path.join(sdist_dir, generated_files[0])
101
102 # Install source distribution
103 run_with_protected_pip(session, "install", generated_sdist)
104
105 # Install test dependencies
106 run_with_protected_pip(session, "install", "-r", REQUIREMENTS["tests"])
107
108 # Parallelize tests as much as possible, by default.
109 arguments = session.posargs or ["-n", "auto"]
110
111 # Run the tests
112 # LC_CTYPE is set to get UTF-8 output inside of the subprocesses that our
113 # tests use.
114 session.run("pytest", *arguments, env={"LC_CTYPE": "en_US.UTF-8"})
115
116
117 @nox.session
118 def docs(session):
119 session.install("-e", ".")
120 session.install("-r", REQUIREMENTS["docs"])
121
122 def get_sphinx_build_command(kind):
123 # Having the conf.py in the docs/html is weird but needed because we
124 # can not use a different configuration directory vs source directory
125 # on RTD currently. So, we'll pass "-c docs/html" here.
126 # See https://github.com/rtfd/readthedocs.org/issues/1543.
127 return [
128 "sphinx-build",
129 "-W",
130 "-c", "docs/html", # see note above
131 "-d", "docs/build/doctrees/" + kind,
132 "-b", kind,
133 "docs/" + kind,
134 "docs/build/" + kind,
135 ]
136
137 session.run(*get_sphinx_build_command("html"))
138 session.run(*get_sphinx_build_command("man"))
139
140
141 @nox.session
142 def lint(session):
143 session.install("pre-commit")
144
145 if session.posargs:
146 args = session.posargs + ["--all-files"]
147 else:
148 args = ["--all-files", "--show-diff-on-failure"]
149
150 session.run("pre-commit", "run", *args)
151
152
153 @nox.session
154 def vendoring(session):
155 session.install("vendoring")
156
157 session.run("vendoring", "sync", ".", "-v")
158
159
160 # -----------------------------------------------------------------------------
161 # Release Commands
162 # -----------------------------------------------------------------------------
163 @nox.session(name="prepare-release")
164 def prepare_release(session):
165 version = release.get_version_from_arguments(session)
166 if not version:
167 session.error("Usage: nox -s prepare-release -- <version>")
168
169 session.log("# Ensure nothing is staged")
170 if release.modified_files_in_git("--staged"):
171 session.error("There are files staged in git")
172
173 session.log(f"# Updating {AUTHORS_FILE}")
174 release.generate_authors(AUTHORS_FILE)
175 if release.modified_files_in_git():
176 release.commit_file(
177 session, AUTHORS_FILE, message=f"Update {AUTHORS_FILE}",
178 )
179 else:
180 session.log(f"# No changes to {AUTHORS_FILE}")
181
182 session.log("# Generating NEWS")
183 release.generate_news(session, version)
184
185 session.log(f"# Bumping for release {version}")
186 release.update_version_file(version, VERSION_FILE)
187 release.commit_file(session, VERSION_FILE, message="Bump for release")
188
189 session.log("# Tagging release")
190 release.create_git_tag(session, version, message=f"Release {version}")
191
192 session.log("# Bumping for development")
193 next_dev_version = release.get_next_development_version(version)
194 release.update_version_file(next_dev_version, VERSION_FILE)
195 release.commit_file(session, VERSION_FILE, message="Bump for development")
196
197
198 @nox.session(name="build-release")
199 def build_release(session):
200 version = release.get_version_from_arguments(session)
201 if not version:
202 session.error("Usage: nox -s build-release -- YY.N[.P]")
203
204 session.log("# Ensure no files in dist/")
205 if release.have_files_in_folder("dist"):
206 session.error(
207 "There are files in dist/. Remove them and try again. "
208 "You can use `git clean -fxdi -- dist` command to do this"
209 )
210
211 session.log("# Install dependencies")
212 session.install("setuptools", "wheel", "twine")
213
214 with release.isolated_temporary_checkout(session, version) as build_dir:
215 session.log(
216 "# Start the build in an isolated, "
217 f"temporary Git checkout at {build_dir!s}",
218 )
219 with release.workdir(session, build_dir):
220 tmp_dists = build_dists(session)
221
222 tmp_dist_paths = (build_dir / p for p in tmp_dists)
223 session.log(f"# Copying dists from {build_dir}")
224 os.makedirs('dist', exist_ok=True)
225 for dist, final in zip(tmp_dist_paths, tmp_dists):
226 session.log(f"# Copying {dist} to {final}")
227 shutil.copy(dist, final)
228
229
230 def build_dists(session):
231 """Return dists with valid metadata."""
232 session.log(
233 "# Check if there's any Git-untracked files before building the wheel",
234 )
235
236 has_forbidden_git_untracked_files = any(
237 # Don't report the environment this session is running in
238 not untracked_file.startswith('.nox/build-release/')
239 for untracked_file in release.get_git_untracked_files()
240 )
241 if has_forbidden_git_untracked_files:
242 session.error(
243 "There are untracked files in the working directory. "
244 "Remove them and try again",
245 )
246
247 session.log("# Build distributions")
248 session.run("python", "setup.py", "sdist", "bdist_wheel", silent=True)
249 produced_dists = glob.glob("dist/*")
250
251 session.log(f"# Verify distributions: {', '.join(produced_dists)}")
252 session.run("twine", "check", *produced_dists, silent=True)
253
254 return produced_dists
255
256
257 @nox.session(name="upload-release")
258 def upload_release(session):
259 version = release.get_version_from_arguments(session)
260 if not version:
261 session.error("Usage: nox -s upload-release -- YY.N[.P]")
262
263 session.log("# Install dependencies")
264 session.install("twine")
265
266 distribution_files = glob.glob("dist/*")
267 session.log(f"# Distribution files: {distribution_files}")
268
269 # Sanity check: Make sure there's 2 distribution files.
270 count = len(distribution_files)
271 if count != 2:
272 session.error(
273 f"Expected 2 distribution files for upload, got {count}. "
274 f"Remove dist/ and run 'nox -s build-release -- {version}'"
275 )
276 # Sanity check: Make sure the files are correctly named.
277 distfile_names = map(os.path.basename, distribution_files)
278 expected_distribution_files = [
279 f"pip-{version}-py2.py3-none-any.whl",
280 f"pip-{version}.tar.gz",
281 ]
282 if sorted(distfile_names) != sorted(expected_distribution_files):
283 session.error(
284 f"Distribution files do not seem to be for {version} release."
285 )
286
287 session.log("# Upload distributions")
288 session.run("twine", "upload", *distribution_files)
289
[end of noxfile.py]
[start of src/pip/_internal/cli/cmdoptions.py]
1 """
2 shared options and groups
3
4 The principle here is to define options once, but *not* instantiate them
5 globally. One reason being that options with action='append' can carry state
6 between parses. pip parses general options twice internally, and shouldn't
7 pass on state. To be consistent, all options will follow this design.
8 """
9
10 # The following comment should be removed at some point in the future.
11 # mypy: strict-optional=False
12
13 from __future__ import absolute_import
14
15 import os
16 import textwrap
17 import warnings
18 from distutils.util import strtobool
19 from functools import partial
20 from optparse import SUPPRESS_HELP, Option, OptionGroup
21 from textwrap import dedent
22
23 from pip._internal.cli.progress_bars import BAR_TYPES
24 from pip._internal.exceptions import CommandError
25 from pip._internal.locations import USER_CACHE_DIR, get_src_prefix
26 from pip._internal.models.format_control import FormatControl
27 from pip._internal.models.index import PyPI
28 from pip._internal.models.target_python import TargetPython
29 from pip._internal.utils.hashes import STRONG_HASHES
30 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
31
32 if MYPY_CHECK_RUNNING:
33 from typing import Any, Callable, Dict, Optional, Tuple
34 from optparse import OptionParser, Values
35 from pip._internal.cli.parser import ConfigOptionParser
36
37
38 def raise_option_error(parser, option, msg):
39 # type: (OptionParser, Option, str) -> None
40 """
41 Raise an option parsing error using parser.error().
42
43 Args:
44 parser: an OptionParser instance.
45 option: an Option instance.
46 msg: the error text.
47 """
48 msg = '{} error: {}'.format(option, msg)
49 msg = textwrap.fill(' '.join(msg.split()))
50 parser.error(msg)
51
52
53 def make_option_group(group, parser):
54 # type: (Dict[str, Any], ConfigOptionParser) -> OptionGroup
55 """
56 Return an OptionGroup object
57 group -- assumed to be dict with 'name' and 'options' keys
58 parser -- an optparse Parser
59 """
60 option_group = OptionGroup(parser, group['name'])
61 for option in group['options']:
62 option_group.add_option(option())
63 return option_group
64
65
66 def check_install_build_global(options, check_options=None):
67 # type: (Values, Optional[Values]) -> None
68 """Disable wheels if per-setup.py call options are set.
69
70 :param options: The OptionParser options to update.
71 :param check_options: The options to check, if not supplied defaults to
72 options.
73 """
74 if check_options is None:
75 check_options = options
76
77 def getname(n):
78 # type: (str) -> Optional[Any]
79 return getattr(check_options, n, None)
80 names = ["build_options", "global_options", "install_options"]
81 if any(map(getname, names)):
82 control = options.format_control
83 control.disallow_binaries()
84 warnings.warn(
85 'Disabling all use of wheels due to the use of --build-option '
86 '/ --global-option / --install-option.', stacklevel=2,
87 )
88
89
90 def check_dist_restriction(options, check_target=False):
91 # type: (Values, bool) -> None
92 """Function for determining if custom platform options are allowed.
93
94 :param options: The OptionParser options.
95 :param check_target: Whether or not to check if --target is being used.
96 """
97 dist_restriction_set = any([
98 options.python_version,
99 options.platform,
100 options.abi,
101 options.implementation,
102 ])
103
104 binary_only = FormatControl(set(), {':all:'})
105 sdist_dependencies_allowed = (
106 options.format_control != binary_only and
107 not options.ignore_dependencies
108 )
109
110 # Installations or downloads using dist restrictions must not combine
111 # source distributions and dist-specific wheels, as they are not
112 # guaranteed to be locally compatible.
113 if dist_restriction_set and sdist_dependencies_allowed:
114 raise CommandError(
115 "When restricting platform and interpreter constraints using "
116 "--python-version, --platform, --abi, or --implementation, "
117 "either --no-deps must be set, or --only-binary=:all: must be "
118 "set and --no-binary must not be set (or must be set to "
119 ":none:)."
120 )
121
122 if check_target:
123 if dist_restriction_set and not options.target_dir:
124 raise CommandError(
125 "Can not use any platform or abi specific options unless "
126 "installing via '--target'"
127 )
128
129
130 def _path_option_check(option, opt, value):
131 # type: (Option, str, str) -> str
132 return os.path.expanduser(value)
133
134
135 class PipOption(Option):
136 TYPES = Option.TYPES + ("path",)
137 TYPE_CHECKER = Option.TYPE_CHECKER.copy()
138 TYPE_CHECKER["path"] = _path_option_check
139
140
141 ###########
142 # options #
143 ###########
144
145 help_ = partial(
146 Option,
147 '-h', '--help',
148 dest='help',
149 action='help',
150 help='Show help.',
151 ) # type: Callable[..., Option]
152
153 isolated_mode = partial(
154 Option,
155 "--isolated",
156 dest="isolated_mode",
157 action="store_true",
158 default=False,
159 help=(
160 "Run pip in an isolated mode, ignoring environment variables and user "
161 "configuration."
162 ),
163 ) # type: Callable[..., Option]
164
165 require_virtualenv = partial(
166 Option,
167 # Run only if inside a virtualenv, bail if not.
168 '--require-virtualenv', '--require-venv',
169 dest='require_venv',
170 action='store_true',
171 default=False,
172 help=SUPPRESS_HELP
173 ) # type: Callable[..., Option]
174
175 verbose = partial(
176 Option,
177 '-v', '--verbose',
178 dest='verbose',
179 action='count',
180 default=0,
181 help='Give more output. Option is additive, and can be used up to 3 times.'
182 ) # type: Callable[..., Option]
183
184 no_color = partial(
185 Option,
186 '--no-color',
187 dest='no_color',
188 action='store_true',
189 default=False,
190 help="Suppress colored output",
191 ) # type: Callable[..., Option]
192
193 version = partial(
194 Option,
195 '-V', '--version',
196 dest='version',
197 action='store_true',
198 help='Show version and exit.',
199 ) # type: Callable[..., Option]
200
201 quiet = partial(
202 Option,
203 '-q', '--quiet',
204 dest='quiet',
205 action='count',
206 default=0,
207 help=(
208 'Give less output. Option is additive, and can be used up to 3'
209 ' times (corresponding to WARNING, ERROR, and CRITICAL logging'
210 ' levels).'
211 ),
212 ) # type: Callable[..., Option]
213
214 progress_bar = partial(
215 Option,
216 '--progress-bar',
217 dest='progress_bar',
218 type='choice',
219 choices=list(BAR_TYPES.keys()),
220 default='on',
221 help=(
222 'Specify type of progress to be displayed [' +
223 '|'.join(BAR_TYPES.keys()) + '] (default: %default)'
224 ),
225 ) # type: Callable[..., Option]
226
227 log = partial(
228 PipOption,
229 "--log", "--log-file", "--local-log",
230 dest="log",
231 metavar="path",
232 type="path",
233 help="Path to a verbose appending log."
234 ) # type: Callable[..., Option]
235
236 no_input = partial(
237 Option,
238 # Don't ask for input
239 '--no-input',
240 dest='no_input',
241 action='store_true',
242 default=False,
243 help="Disable prompting for input."
244 ) # type: Callable[..., Option]
245
246 proxy = partial(
247 Option,
248 '--proxy',
249 dest='proxy',
250 type='str',
251 default='',
252 help="Specify a proxy in the form [user:passwd@]proxy.server:port."
253 ) # type: Callable[..., Option]
254
255 retries = partial(
256 Option,
257 '--retries',
258 dest='retries',
259 type='int',
260 default=5,
261 help="Maximum number of retries each connection should attempt "
262 "(default %default times).",
263 ) # type: Callable[..., Option]
264
265 timeout = partial(
266 Option,
267 '--timeout', '--default-timeout',
268 metavar='sec',
269 dest='timeout',
270 type='float',
271 default=15,
272 help='Set the socket timeout (default %default seconds).',
273 ) # type: Callable[..., Option]
274
275
276 def exists_action():
277 # type: () -> Option
278 return Option(
279 # Option when path already exist
280 '--exists-action',
281 dest='exists_action',
282 type='choice',
283 choices=['s', 'i', 'w', 'b', 'a'],
284 default=[],
285 action='append',
286 metavar='action',
287 help="Default action when a path already exists: "
288 "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.",
289 )
290
291
292 cert = partial(
293 PipOption,
294 '--cert',
295 dest='cert',
296 type='path',
297 metavar='path',
298 help="Path to alternate CA bundle.",
299 ) # type: Callable[..., Option]
300
301 client_cert = partial(
302 PipOption,
303 '--client-cert',
304 dest='client_cert',
305 type='path',
306 default=None,
307 metavar='path',
308 help="Path to SSL client certificate, a single file containing the "
309 "private key and the certificate in PEM format.",
310 ) # type: Callable[..., Option]
311
312 index_url = partial(
313 Option,
314 '-i', '--index-url', '--pypi-url',
315 dest='index_url',
316 metavar='URL',
317 default=PyPI.simple_url,
318 help="Base URL of the Python Package Index (default %default). "
319 "This should point to a repository compliant with PEP 503 "
320 "(the simple repository API) or a local directory laid out "
321 "in the same format.",
322 ) # type: Callable[..., Option]
323
324
325 def extra_index_url():
326 # type: () -> Option
327 return Option(
328 '--extra-index-url',
329 dest='extra_index_urls',
330 metavar='URL',
331 action='append',
332 default=[],
333 help="Extra URLs of package indexes to use in addition to "
334 "--index-url. Should follow the same rules as "
335 "--index-url.",
336 )
337
338
339 no_index = partial(
340 Option,
341 '--no-index',
342 dest='no_index',
343 action='store_true',
344 default=False,
345 help='Ignore package index (only looking at --find-links URLs instead).',
346 ) # type: Callable[..., Option]
347
348
349 def find_links():
350 # type: () -> Option
351 return Option(
352 '-f', '--find-links',
353 dest='find_links',
354 action='append',
355 default=[],
356 metavar='url',
357 help="If a URL or path to an html file, then parse for links to "
358 "archives such as sdist (.tar.gz) or wheel (.whl) files. "
359 "If a local path or file:// URL that's a directory, "
360 "then look for archives in the directory listing. "
361 "Links to VCS project URLs are not supported.",
362 )
363
364
365 def trusted_host():
366 # type: () -> Option
367 return Option(
368 "--trusted-host",
369 dest="trusted_hosts",
370 action="append",
371 metavar="HOSTNAME",
372 default=[],
373 help="Mark this host or host:port pair as trusted, even though it "
374 "does not have valid or any HTTPS.",
375 )
376
377
378 def constraints():
379 # type: () -> Option
380 return Option(
381 '-c', '--constraint',
382 dest='constraints',
383 action='append',
384 default=[],
385 metavar='file',
386 help='Constrain versions using the given constraints file. '
387 'This option can be used multiple times.'
388 )
389
390
391 def requirements():
392 # type: () -> Option
393 return Option(
394 '-r', '--requirement',
395 dest='requirements',
396 action='append',
397 default=[],
398 metavar='file',
399 help='Install from the given requirements file. '
400 'This option can be used multiple times.'
401 )
402
403
404 def editable():
405 # type: () -> Option
406 return Option(
407 '-e', '--editable',
408 dest='editables',
409 action='append',
410 default=[],
411 metavar='path/url',
412 help=('Install a project in editable mode (i.e. setuptools '
413 '"develop mode") from a local project path or a VCS url.'),
414 )
415
416
417 def _handle_src(option, opt_str, value, parser):
418 # type: (Option, str, str, OptionParser) -> None
419 value = os.path.abspath(value)
420 setattr(parser.values, option.dest, value)
421
422
423 src = partial(
424 PipOption,
425 '--src', '--source', '--source-dir', '--source-directory',
426 dest='src_dir',
427 type='path',
428 metavar='dir',
429 default=get_src_prefix(),
430 action='callback',
431 callback=_handle_src,
432 help='Directory to check out editable projects into. '
433 'The default in a virtualenv is "<venv path>/src". '
434 'The default for global installs is "<current dir>/src".'
435 ) # type: Callable[..., Option]
436
437
438 def _get_format_control(values, option):
439 # type: (Values, Option) -> Any
440 """Get a format_control object."""
441 return getattr(values, option.dest)
442
443
444 def _handle_no_binary(option, opt_str, value, parser):
445 # type: (Option, str, str, OptionParser) -> None
446 existing = _get_format_control(parser.values, option)
447 FormatControl.handle_mutual_excludes(
448 value, existing.no_binary, existing.only_binary,
449 )
450
451
452 def _handle_only_binary(option, opt_str, value, parser):
453 # type: (Option, str, str, OptionParser) -> None
454 existing = _get_format_control(parser.values, option)
455 FormatControl.handle_mutual_excludes(
456 value, existing.only_binary, existing.no_binary,
457 )
458
459
460 def no_binary():
461 # type: () -> Option
462 format_control = FormatControl(set(), set())
463 return Option(
464 "--no-binary", dest="format_control", action="callback",
465 callback=_handle_no_binary, type="str",
466 default=format_control,
467 help='Do not use binary packages. Can be supplied multiple times, and '
468 'each time adds to the existing value. Accepts either ":all:" to '
469 'disable all binary packages, ":none:" to empty the set (notice '
470 'the colons), or one or more package names with commas between '
471 'them (no colons). Note that some packages are tricky to compile '
472 'and may fail to install when this option is used on them.',
473 )
474
475
476 def only_binary():
477 # type: () -> Option
478 format_control = FormatControl(set(), set())
479 return Option(
480 "--only-binary", dest="format_control", action="callback",
481 callback=_handle_only_binary, type="str",
482 default=format_control,
483 help='Do not use source packages. Can be supplied multiple times, and '
484 'each time adds to the existing value. Accepts either ":all:" to '
485 'disable all source packages, ":none:" to empty the set, or one '
486 'or more package names with commas between them. Packages '
487 'without binary distributions will fail to install when this '
488 'option is used on them.',
489 )
490
491
492 platform = partial(
493 Option,
494 '--platform',
495 dest='platform',
496 metavar='platform',
497 default=None,
498 help=("Only use wheels compatible with <platform>. "
499 "Defaults to the platform of the running system."),
500 ) # type: Callable[..., Option]
501
502
503 # This was made a separate function for unit-testing purposes.
504 def _convert_python_version(value):
505 # type: (str) -> Tuple[Tuple[int, ...], Optional[str]]
506 """
507 Convert a version string like "3", "37", or "3.7.3" into a tuple of ints.
508
509 :return: A 2-tuple (version_info, error_msg), where `error_msg` is
510 non-None if and only if there was a parsing error.
511 """
512 if not value:
513 # The empty string is the same as not providing a value.
514 return (None, None)
515
516 parts = value.split('.')
517 if len(parts) > 3:
518 return ((), 'at most three version parts are allowed')
519
520 if len(parts) == 1:
521 # Then we are in the case of "3" or "37".
522 value = parts[0]
523 if len(value) > 1:
524 parts = [value[0], value[1:]]
525
526 try:
527 version_info = tuple(int(part) for part in parts)
528 except ValueError:
529 return ((), 'each version part must be an integer')
530
531 return (version_info, None)
532
533
534 def _handle_python_version(option, opt_str, value, parser):
535 # type: (Option, str, str, OptionParser) -> None
536 """
537 Handle a provided --python-version value.
538 """
539 version_info, error_msg = _convert_python_version(value)
540 if error_msg is not None:
541 msg = (
542 'invalid --python-version value: {!r}: {}'.format(
543 value, error_msg,
544 )
545 )
546 raise_option_error(parser, option=option, msg=msg)
547
548 parser.values.python_version = version_info
549
550
551 python_version = partial(
552 Option,
553 '--python-version',
554 dest='python_version',
555 metavar='python_version',
556 action='callback',
557 callback=_handle_python_version, type='str',
558 default=None,
559 help=dedent("""\
560 The Python interpreter version to use for wheel and "Requires-Python"
561 compatibility checks. Defaults to a version derived from the running
562 interpreter. The version can be specified using up to three dot-separated
563 integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor
564 version can also be given as a string without dots (e.g. "37" for 3.7.0).
565 """),
566 ) # type: Callable[..., Option]
567
568
569 implementation = partial(
570 Option,
571 '--implementation',
572 dest='implementation',
573 metavar='implementation',
574 default=None,
575 help=("Only use wheels compatible with Python "
576 "implementation <implementation>, e.g. 'pp', 'jy', 'cp', "
577 " or 'ip'. If not specified, then the current "
578 "interpreter implementation is used. Use 'py' to force "
579 "implementation-agnostic wheels."),
580 ) # type: Callable[..., Option]
581
582
583 abi = partial(
584 Option,
585 '--abi',
586 dest='abi',
587 metavar='abi',
588 default=None,
589 help=("Only use wheels compatible with Python "
590 "abi <abi>, e.g. 'pypy_41'. If not specified, then the "
591 "current interpreter abi tag is used. Generally "
592 "you will need to specify --implementation, "
593 "--platform, and --python-version when using "
594 "this option."),
595 ) # type: Callable[..., Option]
596
597
598 def add_target_python_options(cmd_opts):
599 # type: (OptionGroup) -> None
600 cmd_opts.add_option(platform())
601 cmd_opts.add_option(python_version())
602 cmd_opts.add_option(implementation())
603 cmd_opts.add_option(abi())
604
605
606 def make_target_python(options):
607 # type: (Values) -> TargetPython
608 target_python = TargetPython(
609 platform=options.platform,
610 py_version_info=options.python_version,
611 abi=options.abi,
612 implementation=options.implementation,
613 )
614
615 return target_python
616
617
618 def prefer_binary():
619 # type: () -> Option
620 return Option(
621 "--prefer-binary",
622 dest="prefer_binary",
623 action="store_true",
624 default=False,
625 help="Prefer older binary packages over newer source packages."
626 )
627
628
629 cache_dir = partial(
630 PipOption,
631 "--cache-dir",
632 dest="cache_dir",
633 default=USER_CACHE_DIR,
634 metavar="dir",
635 type='path',
636 help="Store the cache data in <dir>."
637 ) # type: Callable[..., Option]
638
639
640 def _handle_no_cache_dir(option, opt, value, parser):
641 # type: (Option, str, str, OptionParser) -> None
642 """
643 Process a value provided for the --no-cache-dir option.
644
645 This is an optparse.Option callback for the --no-cache-dir option.
646 """
647 # The value argument will be None if --no-cache-dir is passed via the
648 # command-line, since the option doesn't accept arguments. However,
649 # the value can be non-None if the option is triggered e.g. by an
650 # environment variable, like PIP_NO_CACHE_DIR=true.
651 if value is not None:
652 # Then parse the string value to get argument error-checking.
653 try:
654 strtobool(value)
655 except ValueError as exc:
656 raise_option_error(parser, option=option, msg=str(exc))
657
658 # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool()
659 # converted to 0 (like "false" or "no") caused cache_dir to be disabled
660 # rather than enabled (logic would say the latter). Thus, we disable
661 # the cache directory not just on values that parse to True, but (for
662 # backwards compatibility reasons) also on values that parse to False.
663 # In other words, always set it to False if the option is provided in
664 # some (valid) form.
665 parser.values.cache_dir = False
666
667
668 no_cache = partial(
669 Option,
670 "--no-cache-dir",
671 dest="cache_dir",
672 action="callback",
673 callback=_handle_no_cache_dir,
674 help="Disable the cache.",
675 ) # type: Callable[..., Option]
676
677 no_deps = partial(
678 Option,
679 '--no-deps', '--no-dependencies',
680 dest='ignore_dependencies',
681 action='store_true',
682 default=False,
683 help="Don't install package dependencies.",
684 ) # type: Callable[..., Option]
685
686
687 def _handle_build_dir(option, opt, value, parser):
688 # type: (Option, str, str, OptionParser) -> None
689 if value:
690 value = os.path.abspath(value)
691 setattr(parser.values, option.dest, value)
692
693
694 build_dir = partial(
695 PipOption,
696 '-b', '--build', '--build-dir', '--build-directory',
697 dest='build_dir',
698 type='path',
699 metavar='dir',
700 action='callback',
701 callback=_handle_build_dir,
702 help='(DEPRECATED) '
703 'Directory to unpack packages into and build in. Note that '
704 'an initial build still takes place in a temporary directory. '
705 'The location of temporary directories can be controlled by setting '
706 'the TMPDIR environment variable (TEMP on Windows) appropriately. '
707 'When passed, build directories are not cleaned in case of failures.'
708 ) # type: Callable[..., Option]
709
710 ignore_requires_python = partial(
711 Option,
712 '--ignore-requires-python',
713 dest='ignore_requires_python',
714 action='store_true',
715 help='Ignore the Requires-Python information.'
716 ) # type: Callable[..., Option]
717
718 no_build_isolation = partial(
719 Option,
720 '--no-build-isolation',
721 dest='build_isolation',
722 action='store_false',
723 default=True,
724 help='Disable isolation when building a modern source distribution. '
725 'Build dependencies specified by PEP 518 must be already installed '
726 'if this option is used.'
727 ) # type: Callable[..., Option]
728
729
730 def _handle_no_use_pep517(option, opt, value, parser):
731 # type: (Option, str, str, OptionParser) -> None
732 """
733 Process a value provided for the --no-use-pep517 option.
734
735 This is an optparse.Option callback for the no_use_pep517 option.
736 """
737 # Since --no-use-pep517 doesn't accept arguments, the value argument
738 # will be None if --no-use-pep517 is passed via the command-line.
739 # However, the value can be non-None if the option is triggered e.g.
740 # by an environment variable, for example "PIP_NO_USE_PEP517=true".
741 if value is not None:
742 msg = """A value was passed for --no-use-pep517,
743 probably using either the PIP_NO_USE_PEP517 environment variable
744 or the "no-use-pep517" config file option. Use an appropriate value
745 of the PIP_USE_PEP517 environment variable or the "use-pep517"
746 config file option instead.
747 """
748 raise_option_error(parser, option=option, msg=msg)
749
750 # Otherwise, --no-use-pep517 was passed via the command-line.
751 parser.values.use_pep517 = False
752
753
754 use_pep517 = partial(
755 Option,
756 '--use-pep517',
757 dest='use_pep517',
758 action='store_true',
759 default=None,
760 help='Use PEP 517 for building source distributions '
761 '(use --no-use-pep517 to force legacy behaviour).'
762 ) # type: Any
763
764 no_use_pep517 = partial(
765 Option,
766 '--no-use-pep517',
767 dest='use_pep517',
768 action='callback',
769 callback=_handle_no_use_pep517,
770 default=None,
771 help=SUPPRESS_HELP
772 ) # type: Any
773
774 install_options = partial(
775 Option,
776 '--install-option',
777 dest='install_options',
778 action='append',
779 metavar='options',
780 help="Extra arguments to be supplied to the setup.py install "
781 "command (use like --install-option=\"--install-scripts=/usr/local/"
782 "bin\"). Use multiple --install-option options to pass multiple "
783 "options to setup.py install. If you are using an option with a "
784 "directory path, be sure to use absolute path.",
785 ) # type: Callable[..., Option]
786
787 global_options = partial(
788 Option,
789 '--global-option',
790 dest='global_options',
791 action='append',
792 metavar='options',
793 help="Extra global options to be supplied to the setup.py "
794 "call before the install command.",
795 ) # type: Callable[..., Option]
796
797 no_clean = partial(
798 Option,
799 '--no-clean',
800 action='store_true',
801 default=False,
802 help="Don't clean up build directories."
803 ) # type: Callable[..., Option]
804
805 pre = partial(
806 Option,
807 '--pre',
808 action='store_true',
809 default=False,
810 help="Include pre-release and development versions. By default, "
811 "pip only finds stable versions.",
812 ) # type: Callable[..., Option]
813
814 disable_pip_version_check = partial(
815 Option,
816 "--disable-pip-version-check",
817 dest="disable_pip_version_check",
818 action="store_true",
819 default=False,
820 help="Don't periodically check PyPI to determine whether a new version "
821 "of pip is available for download. Implied with --no-index.",
822 ) # type: Callable[..., Option]
823
824
825 def _handle_merge_hash(option, opt_str, value, parser):
826 # type: (Option, str, str, OptionParser) -> None
827 """Given a value spelled "algo:digest", append the digest to a list
828 pointed to in a dict by the algo name."""
829 if not parser.values.hashes:
830 parser.values.hashes = {}
831 try:
832 algo, digest = value.split(':', 1)
833 except ValueError:
834 parser.error('Arguments to {} must be a hash name ' # noqa
835 'followed by a value, like --hash=sha256:'
836 'abcde...'.format(opt_str))
837 if algo not in STRONG_HASHES:
838 parser.error('Allowed hash algorithms for {} are {}.'.format( # noqa
839 opt_str, ', '.join(STRONG_HASHES)))
840 parser.values.hashes.setdefault(algo, []).append(digest)
841
842
843 hash = partial(
844 Option,
845 '--hash',
846 # Hash values eventually end up in InstallRequirement.hashes due to
847 # __dict__ copying in process_line().
848 dest='hashes',
849 action='callback',
850 callback=_handle_merge_hash,
851 type='string',
852 help="Verify that the package's archive matches this "
853 'hash before installing. Example: --hash=sha256:abcdef...',
854 ) # type: Callable[..., Option]
855
856
857 require_hashes = partial(
858 Option,
859 '--require-hashes',
860 dest='require_hashes',
861 action='store_true',
862 default=False,
863 help='Require a hash to check each requirement against, for '
864 'repeatable installs. This option is implied when any package in a '
865 'requirements file has a --hash option.',
866 ) # type: Callable[..., Option]
867
868
869 list_path = partial(
870 PipOption,
871 '--path',
872 dest='path',
873 type='path',
874 action='append',
875 help='Restrict to the specified installation path for listing '
876 'packages (can be used multiple times).'
877 ) # type: Callable[..., Option]
878
879
880 def check_list_path_option(options):
881 # type: (Values) -> None
882 if options.path and (options.user or options.local):
883 raise CommandError(
884 "Cannot combine '--path' with '--user' or '--local'"
885 )
886
887
888 no_python_version_warning = partial(
889 Option,
890 '--no-python-version-warning',
891 dest='no_python_version_warning',
892 action='store_true',
893 default=False,
894 help='Silence deprecation warnings for upcoming unsupported Pythons.',
895 ) # type: Callable[..., Option]
896
897
898 unstable_feature = partial(
899 Option,
900 '--unstable-feature',
901 dest='unstable_features',
902 metavar='feature',
903 action='append',
904 default=[],
905 choices=['resolver'],
906 help=SUPPRESS_HELP, # TODO: drop this in pip 20.3
907 ) # type: Callable[..., Option]
908
909 use_new_feature = partial(
910 Option,
911 '--use-feature',
912 dest='features_enabled',
913 metavar='feature',
914 action='append',
915 default=[],
916 choices=['2020-resolver', 'fast-deps'],
917 help='Enable new functionality, that may be backward incompatible.',
918 ) # type: Callable[..., Option]
919
920 use_deprecated_feature = partial(
921 Option,
922 '--use-deprecated',
923 dest='deprecated_features_enabled',
924 metavar='feature',
925 action='append',
926 default=[],
927 choices=[],
928 help=(
929 'Enable deprecated functionality, that will be removed in the future.'
930 ),
931 ) # type: Callable[..., Option]
932
933
934 ##########
935 # groups #
936 ##########
937
938 general_group = {
939 'name': 'General Options',
940 'options': [
941 help_,
942 isolated_mode,
943 require_virtualenv,
944 verbose,
945 version,
946 quiet,
947 log,
948 no_input,
949 proxy,
950 retries,
951 timeout,
952 exists_action,
953 trusted_host,
954 cert,
955 client_cert,
956 cache_dir,
957 no_cache,
958 disable_pip_version_check,
959 no_color,
960 no_python_version_warning,
961 unstable_feature,
962 use_new_feature,
963 use_deprecated_feature,
964 ]
965 } # type: Dict[str, Any]
966
967 index_group = {
968 'name': 'Package Index Options',
969 'options': [
970 index_url,
971 extra_index_url,
972 no_index,
973 find_links,
974 ]
975 } # type: Dict[str, Any]
976
[end of src/pip/_internal/cli/cmdoptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pypa/pip
base_commit: e51a027964b524cccd7b3fe530dd59f87edc7d8f
problem_statement:
'pip wheel --no-deps' doesn't work with extras on the new resolver
**What did you want to do?**
`pip wheel --use-feature=2020-resolver --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/` should download `splitio_client-8.2.0-py2.py3-none-any.whl` like `pip wheel --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/` does.
On its own this is an artificial case, but I broke it out of a 150-line requirements file that was failing, in order to isolate the issue. :)
**Output**
#### With --use-feature=2020-resolver
```console
$ pip wheel --use-feature=2020-resolver --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/
```
(no output)
#### Without --use-feature=2020-resolver
```console
$ pip wheel --no-deps splitio_client[cpphash,redis]==8.2.0 -w wheels/
Processing /Users/andy/Library/Caches/pip/wheels/5a/9a/1e/de8c54b6448f1a2615e76d5d2e7395342f9d1370865d2c0566/splitio_client-8.2.0-py2.py3-none-any.whl
Saved ./wheels/splitio_client-8.2.0-py2.py3-none-any.whl
Skipping splitio-client, due to already being wheel.
```
hints_text:
Can reproduce with master.
Ah, I think I know the problem. An extra-ed package is implemented as a virtual package with a dependency on its non-extra-ed self, but `--no-deps` makes the new resolver skip all dependencies, including that.
Note that `wheel` has nothing to do with the problem. It can be reproduced with `install` as well. The base package won’t be installed if extras are specified in conjunction with `--no-deps`.
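To make the mechanism described above concrete, here is a minimal, self-contained sketch (hypothetical class names, not pip's actual resolver code) of why the extras candidate's link to its base package must survive `--no-deps` while ordinary requirements are skipped:

```python
# Hypothetical sketch of the "extras as virtual package" idea; the names do
# not correspond to pip's real classes.
class BaseCandidate:
    def __init__(self, name):
        self.name = name

    def iter_dependencies(self, with_requires):
        # A real candidate would yield its install_requires here when
        # with_requires is True; this stub has none.
        return iter(())


class ExtrasCandidate:
    """Virtual candidate standing in for e.g. ``splitio_client[redis]``."""

    def __init__(self, base, extra_requirements):
        self.base = base
        self.extra_requirements = extra_requirements

    def iter_dependencies(self, with_requires):
        # The dependency on the base candidate is structural: if it is
        # dropped under --no-deps, the plain distribution is never collected,
        # which is exactly the bug reported above.
        yield self.base
        if not with_requires:  # --no-deps
            return
        yield from self.extra_requirements
```

This is also the shape of the fix in the patch below: `iter_dependencies` takes a `with_requires` flag and always yields the base dependency before returning early.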
created_at: 2020-08-02T00:25:43Z
patch:
<patch>
diff --git a/src/pip/_internal/resolution/resolvelib/base.py b/src/pip/_internal/resolution/resolvelib/base.py
--- a/src/pip/_internal/resolution/resolvelib/base.py
+++ b/src/pip/_internal/resolution/resolvelib/base.py
@@ -69,8 +69,8 @@ def source_link(self):
# type: () -> Optional[Link]
raise NotImplementedError("Override in subclass")
- def iter_dependencies(self):
- # type: () -> Iterable[Optional[Requirement]]
+ def iter_dependencies(self, with_requires):
+ # type: (bool) -> Iterable[Optional[Requirement]]
raise NotImplementedError("Override in subclass")
def get_install_requirement(self):
diff --git a/src/pip/_internal/resolution/resolvelib/candidates.py b/src/pip/_internal/resolution/resolvelib/candidates.py
--- a/src/pip/_internal/resolution/resolvelib/candidates.py
+++ b/src/pip/_internal/resolution/resolvelib/candidates.py
@@ -275,8 +275,10 @@ def _get_requires_python_specifier(self):
return None
return spec
- def iter_dependencies(self):
- # type: () -> Iterable[Optional[Requirement]]
+ def iter_dependencies(self, with_requires):
+ # type: (bool) -> Iterable[Optional[Requirement]]
+ if not with_requires:
+ return
for r in self.dist.requires():
yield self._factory.make_requirement_from_spec(str(r), self._ireq)
python_dep = self._factory.make_requires_python_requirement(
@@ -420,8 +422,10 @@ def format_for_error(self):
# type: () -> str
return "{} {} (Installed)".format(self.name, self.version)
- def iter_dependencies(self):
- # type: () -> Iterable[Optional[Requirement]]
+ def iter_dependencies(self, with_requires):
+ # type: (bool) -> Iterable[Optional[Requirement]]
+ if not with_requires:
+ return
for r in self.dist.requires():
yield self._factory.make_requirement_from_spec(str(r), self._ireq)
@@ -519,10 +523,16 @@ def source_link(self):
# type: () -> Optional[Link]
return self.base.source_link
- def iter_dependencies(self):
- # type: () -> Iterable[Optional[Requirement]]
+ def iter_dependencies(self, with_requires):
+ # type: (bool) -> Iterable[Optional[Requirement]]
factory = self.base._factory
+ # Add a dependency on the exact base
+ # (See note 2b in the class docstring)
+ yield factory.make_requirement_from_candidate(self.base)
+ if not with_requires:
+ return
+
# The user may have specified extras that the candidate doesn't
# support. We ignore any unsupported extras here.
valid_extras = self.extras.intersection(self.base.dist.extras)
@@ -535,10 +545,6 @@ def iter_dependencies(self):
extra
)
- # Add a dependency on the exact base
- # (See note 2b in the class docstring)
- yield factory.make_requirement_from_candidate(self.base)
-
for r in self.base.dist.requires(valid_extras):
requirement = factory.make_requirement_from_spec(
str(r), self.base._ireq, valid_extras,
@@ -585,8 +591,8 @@ def format_for_error(self):
# type: () -> str
return "Python {}".format(self.version)
- def iter_dependencies(self):
- # type: () -> Iterable[Optional[Requirement]]
+ def iter_dependencies(self, with_requires):
+ # type: (bool) -> Iterable[Optional[Requirement]]
return ()
def get_install_requirement(self):
diff --git a/src/pip/_internal/resolution/resolvelib/provider.py b/src/pip/_internal/resolution/resolvelib/provider.py
--- a/src/pip/_internal/resolution/resolvelib/provider.py
+++ b/src/pip/_internal/resolution/resolvelib/provider.py
@@ -145,6 +145,9 @@ def is_satisfied_by(self, requirement, candidate):
def get_dependencies(self, candidate):
# type: (Candidate) -> Sequence[Requirement]
- if self._ignore_dependencies:
- return []
- return [r for r in candidate.iter_dependencies() if r is not None]
+ with_requires = not self._ignore_dependencies
+ return [
+ r
+ for r in candidate.iter_dependencies(with_requires)
+ if r is not None
+ ]
</patch>
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: pandas-dev__pandas-37676
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: setitem with boolean mask and series as value is broken for Series with EA type
Consider the following example (on master, 0.25.0dev) where the value being assigned to `s[boolean_mask] = value` is a Series itself (of the correct length + matching index):
```
In [1]: df = pd.DataFrame({'a': [0, 0, np.nan, np.nan], 'b': pd.array(range(4), dtype='Int64')})
In [2]: s = df['b'].copy()
In [3]: s[df['a'].isna()] = df.loc[df['a'].isna(), 'b']
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-04e8761e7ad9> in <module>
----> 1 s[df['a'].isna()] = df.loc[df['a'].isna(), 'b']
~/scipy/pandas/pandas/core/series.py in __setitem__(self, key, value)
1055 # do the setitem
1056 cacher_needs_updating = self._check_is_chained_assignment_possible()
-> 1057 setitem(key, value)
1058 if cacher_needs_updating:
1059 self._maybe_update_cacher()
~/scipy/pandas/pandas/core/series.py in setitem(key, value)
1046 key = check_bool_indexer(self.index, key)
1047 try:
-> 1048 self._where(~key, value, inplace=True)
1049 return
1050 except InvalidIndexError:
~/scipy/pandas/pandas/core/generic.py in _where(self, cond, other, inplace, axis, level, errors, try_cast)
8819 new_data = self._data.putmask(mask=cond, new=other, align=align,
8820 inplace=True, axis=block_axis,
-> 8821 transpose=self._AXIS_REVERSED)
8822 self._update_inplace(new_data)
8823
~/scipy/pandas/pandas/core/internals/managers.py in putmask(self, **kwargs)
511
512 def putmask(self, **kwargs):
--> 513 return self.apply('putmask', **kwargs)
514
515 def diff(self, **kwargs):
~/scipy/pandas/pandas/core/internals/managers.py in apply(self, f, axes, filter, do_integrity_check, consolidate, **kwargs)
393 copy=align_copy)
394
--> 395 applied = getattr(b, f)(**kwargs)
396 result_blocks = _extend_blocks(applied, result_blocks)
397
~/scipy/pandas/pandas/core/internals/blocks.py in putmask(self, mask, new, align, inplace, axis, transpose)
1593 mask = _safe_reshape(mask, new_values.shape)
1594
-> 1595 new_values[mask] = new
1596 new_values = self._try_coerce_result(new_values)
1597 return [self.make_block(values=new_values)]
~/scipy/pandas/pandas/core/arrays/integer.py in __setitem__(self, key, value)
399 mask = mask[0]
400
--> 401 self._data[key] = value
402 self._mask[key] = mask
403
ValueError: NumPy boolean array indexing assignment cannot assign 4 input values to the 2 output values where the mask is true
```
This fails because, under the hood, the assigned value is first aligned with the Series' index, turning it into a length-4 Series, which is then assigned into the backing array through a boolean mask that selects only 2 positions.
</issue>
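The size mismatch in the last traceback frame can be reproduced with plain NumPy, independent of pandas internals. The sketch below is illustrative only: it mimics the aligned, length-4 value being pushed through a mask that selects just two positions, and shows the indexing that would succeed.

```python
import numpy as np

data = np.arange(4)                            # stand-in for the Int64 backing array
mask = np.array([False, False, True, True])    # df['a'].isna()
aligned_value = np.arange(4)                   # the value after alignment to the full index

try:
    # What the ExtensionArray setitem effectively attempts: 4 values into 2 masked slots.
    data[mask] = aligned_value
except ValueError as exc:
    print(exc)  # "...cannot assign 4 input values to the 2 output values where the mask is true"

# Restricting the aligned value to the masked positions is consistent:
data[mask] = aligned_value[mask]
```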
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [](https://pypi.org/project/pandas/)
9 [](https://anaconda.org/anaconda/pandas/)
10 [](https://doi.org/10.5281/zenodo.3509134)
11 [](https://pypi.org/project/pandas/)
12 [](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [](https://travis-ci.org/pandas-dev/pandas)
14 [](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [](https://codecov.io/gh/pandas-dev/pandas)
16 [](https://pandas.pydata.org)
17 [](https://gitter.im/pydata/pandas)
18 [](https://numfocus.org)
19 [](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: pandas-dev/pandas
base_commit: f34fe6244e941c8701f9c0d243277ff075c58f05
problem_statement:
BUG: setitem with boolean mask and series as value is broken for Series with EA type
Consider the following example (on master, 0.25.0dev) where the value being assigned to `s[boolean_mask] = value` is a Series itself (of the correct length + matching index):
```
In [1]: df = pd.DataFrame({'a': [0, 0, np.nan, np.nan], 'b': pd.array(range(4), dtype='Int64')})
In [2]: s = df['b'].copy()
In [3]: s[df['a'].isna()] = df.loc[df['a'].isna(), 'b']
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-04e8761e7ad9> in <module>
----> 1 s[df['a'].isna()] = df.loc[df['a'].isna(), 'b']
~/scipy/pandas/pandas/core/series.py in __setitem__(self, key, value)
1055 # do the setitem
1056 cacher_needs_updating = self._check_is_chained_assignment_possible()
-> 1057 setitem(key, value)
1058 if cacher_needs_updating:
1059 self._maybe_update_cacher()
~/scipy/pandas/pandas/core/series.py in setitem(key, value)
1046 key = check_bool_indexer(self.index, key)
1047 try:
-> 1048 self._where(~key, value, inplace=True)
1049 return
1050 except InvalidIndexError:
~/scipy/pandas/pandas/core/generic.py in _where(self, cond, other, inplace, axis, level, errors, try_cast)
8819 new_data = self._data.putmask(mask=cond, new=other, align=align,
8820 inplace=True, axis=block_axis,
-> 8821 transpose=self._AXIS_REVERSED)
8822 self._update_inplace(new_data)
8823
~/scipy/pandas/pandas/core/internals/managers.py in putmask(self, **kwargs)
511
512 def putmask(self, **kwargs):
--> 513 return self.apply('putmask', **kwargs)
514
515 def diff(self, **kwargs):
~/scipy/pandas/pandas/core/internals/managers.py in apply(self, f, axes, filter, do_integrity_check, consolidate, **kwargs)
393 copy=align_copy)
394
--> 395 applied = getattr(b, f)(**kwargs)
396 result_blocks = _extend_blocks(applied, result_blocks)
397
~/scipy/pandas/pandas/core/internals/blocks.py in putmask(self, mask, new, align, inplace, axis, transpose)
1593 mask = _safe_reshape(mask, new_values.shape)
1594
-> 1595 new_values[mask] = new
1596 new_values = self._try_coerce_result(new_values)
1597 return [self.make_block(values=new_values)]
~/scipy/pandas/pandas/core/arrays/integer.py in __setitem__(self, key, value)
399 mask = mask[0]
400
--> 401 self._data[key] = value
402 self._mask[key] = mask
403
ValueError: NumPy boolean array indexing assignment cannot assign 4 input values to the 2 output values where the mask is true
```
This fails because, under the hood, the assigned value is first aligned with the Series' index, turning it into a length-4 Series, which is then assigned into the backing array through a boolean mask that selects only 2 positions.
hints_text:
``.loc`` seems to work, so that can be a good workaround for now in actual code
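A short sketch of that workaround, reusing the DataFrame from the report (illustrative only; behaviour as described in this thread):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [0, 0, np.nan, np.nan],
                   'b': pd.array(range(4), dtype='Int64')})
s = df['b'].copy()

# Label-based assignment aligns the value with the selected rows first,
# avoiding the masked setitem path that raises in the report above.
s.loc[df['a'].isna()] = df.loc[df['a'].isna(), 'b']
```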
created_at: 2020-11-06T22:51:37Z
patch:
<patch>
diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1143,6 +1143,26 @@ def any_nullable_int_dtype(request):
return request.param
[email protected](params=tm.ALL_EA_INT_DTYPES + tm.FLOAT_EA_DTYPES)
+def any_numeric_dtype(request):
+ """
+ Parameterized fixture for any nullable integer dtype and
+ any float ea dtypes.
+
+ * 'UInt8'
+ * 'Int8'
+ * 'UInt16'
+ * 'Int16'
+ * 'UInt32'
+ * 'Int32'
+ * 'UInt64'
+ * 'Int64'
+ * 'Float32'
+ * 'Float64'
+ """
+ return request.param
+
+
@pytest.fixture(params=tm.SIGNED_EA_INT_DTYPES)
def any_signed_nullable_int_dtype(request):
"""
</patch>
FAIL_TO_PASS: []
PASS_TO_PASS: []
instance_id: googleapis__google-cloud-python-4339
text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
api_core: Add metadata option to wrap_method
To update GAPIC to use routing headers, `wrap_method` needs to accept more metadata than just the user agent. Example:
```python
metadata = [
(b'x-google-request-params',
b'name={}&book.read={}'.format(name, book.read))]
self._create_book(request, retry=retry, timeout=timeout, metadata=metadata)
```
</issue>
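One possible shape for the requested option is sketched below. The function name and signature are hypothetical, not the actual `google.api_core.gapic_v1.method` API; the sketch only illustrates merging per-call metadata (such as routing headers) with client-level metadata before invoking the transport:

```python
# Hypothetical sketch; the real wrap_method signature may differ.
def wrap_method_with_metadata(func, default_metadata=()):
    """Return a callable that forwards merged gRPC metadata to ``func``."""

    def wrapped(request, metadata=None, **kwargs):
        merged = list(default_metadata)        # e.g. the x-goog-api-client header
        if metadata is not None:
            merged.extend(metadata)            # per-call routing headers from the caller
        return func(request, metadata=merged, **kwargs)

    return wrapped


# Usage mirroring the example in the issue (names are placeholders):
# metadata = [('x-goog-request-params', 'name={}&book.read={}'.format(name, book.read))]
# create_book = wrap_method_with_metadata(transport.create_book,
#                                         default_metadata=[('x-goog-api-client', 'gl-python/3.x')])
# create_book(request, metadata=metadata)
```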
<code>
[start of README.rst]
1 Google Cloud Python Client
2 ==========================
3
4 Python idiomatic client for `Google Cloud Platform`_ services.
5
6 .. _Google Cloud Platform: https://cloud.google.com/
7
8 |pypi| |circleci| |appveyor| |coverage| |versions|
9
10 - `Homepage`_
11 - `API Documentation`_
12 - `Read The Docs Documentation`_
13
14 .. _Homepage: https://googlecloudplatform.github.io/google-cloud-python/
15 .. _API Documentation: https://googlecloudplatform.github.io/google-cloud-python/latest/
16 .. _Read The Docs Documentation: https://google-cloud-python.readthedocs.io/en/latest/
17
18 The following client libraries have **GA** support:
19
20 - `Google Cloud Datastore`_ (`Datastore README`_)
21 - `Google Cloud Storage`_ (`Storage README`_)
22 - `Google Cloud Translation`_ (`Translation README`_)
23 - `Stackdriver Logging`_ (`Logging README`_)
24
25 **GA** (general availability) indicates that the client library for a
26 particular service is stable, and that the code surface will not change in
27 backwards-incompatible ways unless either absolutely necessary (e.g. because
28 of critical security issues) or with an extensive deprecation period.
29 Issues and requests against GA libraries are addressed with the highest
30 priority.
31
32 The following client libraries have **beta** support:
33
34 - `Google BigQuery`_ (`BigQuery README`_)
35 - `Google Cloud Firestore`_ (`Firestore README`_)
36 - `Google Cloud Natural Language`_ (`Natural Language README`_)
37 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_)
38 - `Google Cloud Spanner`_ (`Spanner README`_)
39 - `Google Cloud Speech`_ (`Speech README`_)
40 - `Google Cloud Video Intelligence`_ (`Video Intelligence README`_)
41 - `Google Cloud Vision`_ (`Vision README`_)
42
43 **Beta** indicates that the client library for a particular service is
44 mostly stable and is being prepared for release. Issues and requests
45 against beta libraries are addressed with a higher priority.
46
47 This client library has **alpha** support for the following Google
48 Cloud Platform services:
49
50 - `Google Cloud Bigtable`_ (`Bigtable README`_)
51 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_)
52 - `Google Cloud DNS`_ (`DNS README`_)
53 - `Google Cloud Resource Manager`_ (`Resource Manager README`_)
54 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_)
55 - `Stackdriver Error Reporting`_ (`Error Reporting README`_)
56 - `Stackdriver Monitoring`_ (`Monitoring README`_)
57
58 **Alpha** indicates that the client library for a particular service is
59 still a work-in-progress and is more likely to get backwards-incompatible
60 updates. See `versioning`_ for more details.
61
62 .. _Google Cloud Datastore: https://pypi.org/project/google-cloud-datastore/
63 .. _Datastore README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/datastore
64 .. _Google Cloud Storage: https://pypi.org/project/google-cloud-storage/
65 .. _Storage README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/storage
66 .. _Google Cloud Pub/Sub: https://pypi.org/project/google-cloud-pubsub/
67 .. _Pub/Sub README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/pubsub
68 .. _Google BigQuery: https://pypi.org/project/google-cloud-bigquery/
69 .. _BigQuery README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery
70 .. _Google Cloud Resource Manager: https://pypi.org/project/google-cloud-resource-manager/
71 .. _Resource Manager README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/resource_manager
72 .. _Stackdriver Logging: https://pypi.org/project/google-cloud-logging/
73 .. _Logging README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/logging
74 .. _Stackdriver Monitoring: https://pypi.org/project/google-cloud-monitoring/
75 .. _Monitoring README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/monitoring
76 .. _Google Cloud Bigtable: https://pypi.org/project/google-cloud-bigtable/
77 .. _Bigtable README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigtable
78 .. _Google Cloud DNS: https://pypi.org/project/google-cloud-dns/
79 .. _DNS README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/dns
80 .. _Stackdriver Error Reporting: https://pypi.org/project/google-cloud-error-reporting/
81 .. _Error Reporting README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/error_reporting
82 .. _Google Cloud Natural Language: https://pypi.org/project/google-cloud-language/
83 .. _Natural Language README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/language
84 .. _Google Cloud Translation: https://pypi.org/project/google-cloud-translate/
85 .. _Translation README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/translate
86 .. _Google Cloud Speech: https://pypi.org/project/google-cloud-speech/
87 .. _Speech README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/speech
88 .. _Google Cloud Vision: https://pypi.org/project/google-cloud-vision/
89 .. _Vision README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/vision
90 .. _Google Cloud Bigtable - HappyBase: https://pypi.org/project/google-cloud-happybase/
91 .. _HappyBase README: https://github.com/GoogleCloudPlatform/google-cloud-python-happybase
92 .. _Google Cloud Runtime Configuration: https://cloud.google.com/deployment-manager/runtime-configurator/
93 .. _Runtime Config README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/runtimeconfig
94 .. _Google Cloud Spanner: https://pypi.python.org/pypi/google-cloud-spanner
95 .. _Spanner README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/spanner
96 .. _Google Cloud Video Intelligence: https://pypi.python.org/pypi/google-cloud-videointelligence
97 .. _Video Intelligence README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/videointelligence
98 .. _versioning: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning
99 .. _Google Cloud Firestore: https://pypi.org/project/google-cloud-firestore/
100 .. _Firestore README: https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/firestore
101
102 If you need support for other Google APIs, check out the
103 `Google APIs Python Client library`_.
104
105 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client
106
107 Quick Start
108 -----------
109
110 .. code-block:: console
111
112 $ pip install --upgrade google-cloud
113
114 For more information on setting up your Python development environment,
115 such as installing ``pip`` and ``virtualenv`` on your system, please refer
116 to `Python Development Environment Setup Guide`_ for Google Cloud Platform.
117
118 .. _Python Development Environment Setup Guide: https://cloud.google.com/python/setup
119
120 Example Applications
121 --------------------
122
123 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
124 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage
125
126 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python
127 .. _tutorial: https://cloud.google.com/python
128 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo
129
130 Authentication
131 --------------
132
133 With ``google-cloud-python`` we try to make authentication as painless as possible.
134 Check out the `Authentication section`_ in our documentation to learn more.
135 You may also find the `authentication document`_ shared by all the
136 ``google-cloud-*`` libraries to be helpful.
137
138 .. _Authentication section: https://google-cloud-python.readthedocs.io/en/latest/core/auth.html
139 .. _authentication document: https://github.com/GoogleCloudPlatform/google-cloud-common/tree/master/authentication
140
141 Contributing
142 ------------
143
144 Contributions to this library are always welcome and highly encouraged.
145
146 See the `CONTRIBUTING doc`_ for more information on how to get started.
147
148 .. _CONTRIBUTING doc: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst
149
150 Community
151 ---------
152
153 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python``
154 channel, click here to `get an invitation`_.
155
156
157 .. _Slack: https://googlecloud-community.slack.com
158 .. _get an invitation: https://gcp-slack.appspot.com/
159
160 License
161 -------
162
163 Apache 2.0 - See `the LICENSE`_ for more information.
164
165 .. _the LICENSE: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/LICENSE
166
167 .. |circleci| image:: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python.svg?style=shield
168 :target: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python
169 .. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/googlecloudplatform/google-cloud-python?branch=master&svg=true
170 :target: https://ci.appveyor.com/project/GoogleCloudPlatform/google-cloud-python
171 .. |coverage| image:: https://coveralls.io/repos/GoogleCloudPlatform/google-cloud-python/badge.svg?branch=master
172 :target: https://coveralls.io/r/GoogleCloudPlatform/google-cloud-python?branch=master
173 .. |pypi| image:: https://img.shields.io/pypi/v/google-cloud.svg
174 :target: https://pypi.org/project/google-cloud/
175 .. |versions| image:: https://img.shields.io/pypi/pyversions/google-cloud.svg
176 :target: https://pypi.org/project/google-cloud/
177
[end of README.rst]
[start of api_core/google/api_core/gapic_v1/method.py]
1 # Copyright 2017 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for wrapping low-level gRPC methods with common functionality.
16
17 This is used by gapic clients to provide common error mapping, retry, timeout,
18 pagination, and long-running operations to gRPC methods.
19 """
20
21 from google.api_core import general_helpers
22 from google.api_core import grpc_helpers
23 from google.api_core import timeout
24 from google.api_core.gapic_v1 import client_info
25
26 METRICS_METADATA_KEY = 'x-goog-api-client'
27 USE_DEFAULT_METADATA = object()
28 DEFAULT = object()
29 """Sentinel value indicating that a retry or timeout argument was unspecified,
30 so the default should be used."""
31
32
33 def _is_not_none_or_false(value):
34 return value is not None and value is not False
35
36
37 def _apply_decorators(func, decorators):
38 """Apply a list of decorators to a given function.
39
40 ``decorators`` may contain items that are ``None`` or ``False`` which will
41 be ignored.
42 """
43 decorators = filter(_is_not_none_or_false, reversed(decorators))
44
45 for decorator in decorators:
46 func = decorator(func)
47
48 return func
49
50
51 def _determine_timeout(default_timeout, specified_timeout, retry):
52 """Determines how timeout should be applied to a wrapped method.
53
54 Args:
55 default_timeout (Optional[Timeout]): The default timeout specified
56 at method creation time.
57 specified_timeout (Optional[Timeout]): The timeout specified at
58 invocation time. If :attr:`DEFAULT`, this will be set to
59 the ``default_timeout``.
60 retry (Optional[Retry]): The retry specified at invocation time.
61
62 Returns:
63 Optional[Timeout]: The timeout to apply to the method or ``None``.
64 """
65 if specified_timeout is DEFAULT:
66 specified_timeout = default_timeout
67
68 if specified_timeout is default_timeout:
69 # If timeout is the default and the default timeout is exponential and
70 # a non-default retry is specified, make sure the timeout's deadline
71 # matches the retry's. This handles the case where the user leaves
72 # the timeout default but specifies a lower deadline via the retry.
73 if (retry and retry is not DEFAULT
74 and isinstance(default_timeout, timeout.ExponentialTimeout)):
75 return default_timeout.with_deadline(retry._deadline)
76 else:
77 return default_timeout
78
79 # If timeout is specified as a number instead of a Timeout instance,
80 # convert it to a ConstantTimeout.
81 if isinstance(specified_timeout, (int, float)):
82 return timeout.ConstantTimeout(specified_timeout)
83 else:
84 return specified_timeout
85
86
87 class _GapicCallable(object):
88 """Callable that applies retry, timeout, and metadata logic.
89
90 Args:
91 target (Callable): The low-level RPC method.
92 retry (google.api_core.retry.Retry): The default retry for the
93 callable. If ``None``, this callable will not retry by default
94 timeout (google.api_core.timeout.Timeout): The default timeout
95 for the callable. If ``None``, this callable will not specify
96 a timeout argument to the low-level RPC method by default.
97 user_agent_metadata (Tuple[str, str]): The user agent metadata key and
98 value to provide to the RPC method. If ``None``, no additional
99 metadata will be passed to the RPC method.
100 """
101
102 def __init__(self, target, retry, timeout, user_agent_metadata=None):
103 self._target = target
104 self._retry = retry
105 self._timeout = timeout
106 self._user_agent_metadata = user_agent_metadata
107
108 def __call__(self, *args, **kwargs):
109 """Invoke the low-level RPC with retry, timeout, and metadata."""
110 # Note: Due to Python 2 lacking keyword-only arguments we use kwargs to
111 # extract the retry and timeout params.
112 timeout_ = _determine_timeout(
113 self._timeout,
114 kwargs.pop('timeout', self._timeout),
115 # Use only the invocation-specified retry for this, as we only
116 # want to adjust the timeout deadline if the *user* specified
117 # a different retry.
118 kwargs.get('retry', None))
119
120 retry = kwargs.pop('retry', self._retry)
121
122 if retry is DEFAULT:
123 retry = self._retry
124
125 # Apply all applicable decorators.
126 wrapped_func = _apply_decorators(self._target, [retry, timeout_])
127
128 # Add the user agent metadata to the call.
129 if self._user_agent_metadata is not None:
130 metadata = kwargs.get('metadata', [])
131 metadata.append(self._user_agent_metadata)
132 kwargs['metadata'] = metadata
133
134 return wrapped_func(*args, **kwargs)
135
136
137 def wrap_method(
138 func, default_retry=None, default_timeout=None,
139 client_info=client_info.DEFAULT_CLIENT_INFO):
140 """Wrap an RPC method with common behavior.
141
142 This applies common error wrapping, retry, and timeout behavior to a function.
143 The wrapped function will take optional ``retry`` and ``timeout``
144 arguments.
145
146 For example::
147
148 import google.api_core.gapic_v1.method
149 from google.api_core import retry
150 from google.api_core import timeout
151
152 # The original RPC method.
153 def get_topic(name, timeout=None):
154 request = publisher_v2.GetTopicRequest(name=name)
155 return publisher_stub.GetTopic(request, timeout=timeout)
156
157 default_retry = retry.Retry(deadline=60)
158 default_timeout = timeout.Timeout(deadline=60)
159 wrapped_get_topic = google.api_core.gapic_v1.method.wrap_method(
160 get_topic, default_retry)
161
162 # Execute get_topic with default retry and timeout:
163 response = wrapped_get_topic()
164
165 # Execute get_topic without doing any retrying but with the default
166 # timeout:
167 response = wrapped_get_topic(retry=None)
168
169 # Execute get_topic but only retry on 5xx errors:
170 my_retry = retry.Retry(retry.if_exception_type(
171 exceptions.InternalServerError))
172 response = wrapped_get_topic(retry=my_retry)
173
174 The way this works is by late-wrapping the given function with the retry
175 and timeout decorators. Essentially, when ``wrapped_get_topic()`` is
176 called:
177
178 * ``get_topic()`` is first wrapped with the ``timeout`` into
179 ``get_topic_with_timeout``.
180 * ``get_topic_with_timeout`` is wrapped with the ``retry`` into
181 ``get_topic_with_timeout_and_retry()``.
182 * The final ``get_topic_with_timeout_and_retry`` is called passing through
183 the ``args`` and ``kwargs``.
184
185 The callstack is therefore::
186
187 method.__call__() ->
188 Retry.__call__() ->
189 Timeout.__call__() ->
190 wrap_errors() ->
191 get_topic()
192
193 Note that if ``timeout`` or ``retry`` is ``None``, then they are not
194 applied to the function. For example,
195 ``wrapped_get_topic(timeout=None, retry=None)`` is more or less
196 equivalent to just calling ``get_topic`` but with error re-mapping.
197
198 Args:
199 func (Callable[Any]): The function to wrap. It should accept an
200 optional ``timeout`` argument. If ``metadata`` is not ``None``, it
201 should accept a ``metadata`` argument.
202 default_retry (Optional[google.api_core.Retry]): The default retry
203 strategy. If ``None``, the method will not retry by default.
204 default_timeout (Optional[google.api_core.Timeout]): The default
205 timeout strategy. Can also be specified as an int or float. If
206 ``None``, the method will not have timeout specified by default.
207 client_info
208 (Optional[google.api_core.gapic_v1.client_info.ClientInfo]):
209 Client information used to create a user-agent string that's
210 passed as gRPC metadata to the method. If unspecified, then
211 a sane default will be used. If ``None``, then no user agent
212 metadata will be provided to the RPC method.
213
214 Returns:
215 Callable: A new callable that takes optional ``retry`` and ``timeout``
216 arguments and applies the common error mapping, retry, timeout,
217 and metadata behavior to the low-level RPC method.
218 """
219 func = grpc_helpers.wrap_errors(func)
220
221 if client_info is not None:
222 user_agent_metadata = client_info.to_grpc_metadata()
223 else:
224 user_agent_metadata = None
225
226 return general_helpers.wraps(func)(
227 _GapicCallable(
228 func, default_retry, default_timeout,
229 user_agent_metadata=user_agent_metadata))
230
[end of api_core/google/api_core/gapic_v1/method.py]
[start of spanner/google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py]
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # EDITING INSTRUCTIONS
16 # This file was generated from the file
17 # https://github.com/google/googleapis/blob/master/google/spanner/admin/database/v1/spanner_database_admin.proto,
18 # and updates to that file get reflected here through a refresh process.
19 # For the short term, the refresh process will only be runnable by Google engineers.
20 #
21 # The only allowed edits are to method and file documentation. A 3-way
22 # merge preserves those additions if the generated source changes.
23 """Accesses the google.spanner.admin.database.v1 DatabaseAdmin API."""
24
25 import collections
26 import json
27 import os
28 import pkg_resources
29 import platform
30
31 from google.gapic.longrunning import operations_client
32 from google.gax import api_callable
33 from google.gax import config
34 from google.gax import path_template
35 import google.gax
36
37 from google.cloud.spanner_admin_database_v1.gapic import database_admin_client_config
38 from google.cloud.spanner_admin_database_v1.gapic import enums
39 from google.cloud.spanner_admin_database_v1.proto import spanner_database_admin_pb2
40 from google.iam.v1 import iam_policy_pb2
41 from google.iam.v1 import policy_pb2
42 from google.protobuf import empty_pb2
43
44 _PageDesc = google.gax.PageDescriptor
45
46
47 class DatabaseAdminClient(object):
48 """
49 Cloud Spanner Database Admin API
50
51 The Cloud Spanner Database Admin API can be used to create, drop, and
52 list databases. It also enables updating the schema of pre-existing
53 databases.
54 """
55
56 SERVICE_ADDRESS = 'spanner.googleapis.com'
57 """The default address of the service."""
58
59 DEFAULT_SERVICE_PORT = 443
60 """The default port of the service."""
61
62 _PAGE_DESCRIPTORS = {
63 'list_databases': _PageDesc('page_token', 'next_page_token',
64 'databases')
65 }
66
67 # The scopes needed to make gRPC calls to all of the methods defined in
68 # this service
69 _ALL_SCOPES = (
70 'https://www.googleapis.com/auth/cloud-platform',
71 'https://www.googleapis.com/auth/spanner.admin', )
72
73 _INSTANCE_PATH_TEMPLATE = path_template.PathTemplate(
74 'projects/{project}/instances/{instance}')
75 _DATABASE_PATH_TEMPLATE = path_template.PathTemplate(
76 'projects/{project}/instances/{instance}/databases/{database}')
77
78 @classmethod
79 def instance_path(cls, project, instance):
80 """Returns a fully-qualified instance resource name string."""
81 return cls._INSTANCE_PATH_TEMPLATE.render({
82 'project': project,
83 'instance': instance,
84 })
85
86 @classmethod
87 def database_path(cls, project, instance, database):
88 """Returns a fully-qualified database resource name string."""
89 return cls._DATABASE_PATH_TEMPLATE.render({
90 'project': project,
91 'instance': instance,
92 'database': database,
93 })
94
95 @classmethod
96 def match_project_from_instance_name(cls, instance_name):
97 """Parses the project from an instance resource.
98
99 Args:
100 instance_name (str): A fully-qualified path representing an instance
101 resource.
102
103 Returns:
104 A string representing the project.
105 """
106 return cls._INSTANCE_PATH_TEMPLATE.match(instance_name).get('project')
107
108 @classmethod
109 def match_instance_from_instance_name(cls, instance_name):
110 """Parses the instance from an instance resource.
111
112 Args:
113 instance_name (str): A fully-qualified path representing an instance
114 resource.
115
116 Returns:
117 A string representing the instance.
118 """
119 return cls._INSTANCE_PATH_TEMPLATE.match(instance_name).get('instance')
120
121 @classmethod
122 def match_project_from_database_name(cls, database_name):
123 """Parses the project from a database resource.
124
125 Args:
126 database_name (str): A fully-qualified path representing a database
127 resource.
128
129 Returns:
130 A string representing the project.
131 """
132 return cls._DATABASE_PATH_TEMPLATE.match(database_name).get('project')
133
134 @classmethod
135 def match_instance_from_database_name(cls, database_name):
136 """Parses the instance from a database resource.
137
138 Args:
139 database_name (str): A fully-qualified path representing a database
140 resource.
141
142 Returns:
143 A string representing the instance.
144 """
145 return cls._DATABASE_PATH_TEMPLATE.match(database_name).get('instance')
146
147 @classmethod
148 def match_database_from_database_name(cls, database_name):
149 """Parses the database from a database resource.
150
151 Args:
152 database_name (str): A fully-qualified path representing a database
153 resource.
154
155 Returns:
156 A string representing the database.
157 """
158 return cls._DATABASE_PATH_TEMPLATE.match(database_name).get('database')
159
160 def __init__(self,
161 channel=None,
162 credentials=None,
163 ssl_credentials=None,
164 scopes=None,
165 client_config=None,
166 lib_name=None,
167 lib_version='',
168 metrics_headers=()):
169 """Constructor.
170
171 Args:
172 channel (~grpc.Channel): A ``Channel`` instance through
173 which to make calls.
174 credentials (~google.auth.credentials.Credentials): The authorization
175 credentials to attach to requests. These credentials identify this
176 application to the service.
177 ssl_credentials (~grpc.ChannelCredentials): A
178 ``ChannelCredentials`` instance for use with an SSL-enabled
179 channel.
180 scopes (Sequence[str]): A list of OAuth2 scopes to attach to requests.
181 client_config (dict):
182 A dictionary for call options for each method. See
183 :func:`google.gax.construct_settings` for the structure of
184 this data. Falls back to the default config if not specified
185 or the specified config is missing data points.
186 lib_name (str): The API library software used for calling
187 the service. (Unless you are writing an API client itself,
188 leave this as default.)
189 lib_version (str): The API library software version used
190 for calling the service. (Unless you are writing an API client
191 itself, leave this as default.)
192 metrics_headers (dict): A dictionary of values for tracking
193 client library metrics. Ultimately serializes to a string
194 (e.g. 'foo/1.2.3 bar/3.14.1'). This argument should be
195 considered private.
196 """
197 # Unless the calling application specifically requested
198 # OAuth scopes, request everything.
199 if scopes is None:
200 scopes = self._ALL_SCOPES
201
202 # Initialize an empty client config, if none is set.
203 if client_config is None:
204 client_config = {}
205
206 # Initialize metrics_headers as an ordered dictionary
207 # (cuts down on cardinality of the resulting string slightly).
208 metrics_headers = collections.OrderedDict(metrics_headers)
209 metrics_headers['gl-python'] = platform.python_version()
210
211 # The library may or may not be set, depending on what is
212 # calling this client. Newer client libraries set the library name
213 # and version.
214 if lib_name:
215 metrics_headers[lib_name] = lib_version
216
217 # Finally, track the GAPIC package version.
218 metrics_headers['gapic'] = pkg_resources.get_distribution(
219 'google-cloud-spanner', ).version
220
221 # Load the configuration defaults.
222 defaults = api_callable.construct_settings(
223 'google.spanner.admin.database.v1.DatabaseAdmin',
224 database_admin_client_config.config,
225 client_config,
226 config.STATUS_CODE_NAMES,
227 metrics_headers=metrics_headers,
228 page_descriptors=self._PAGE_DESCRIPTORS, )
229 self.database_admin_stub = config.create_stub(
230 spanner_database_admin_pb2.DatabaseAdminStub,
231 channel=channel,
232 service_path=self.SERVICE_ADDRESS,
233 service_port=self.DEFAULT_SERVICE_PORT,
234 credentials=credentials,
235 scopes=scopes,
236 ssl_credentials=ssl_credentials)
237
238 self.operations_client = operations_client.OperationsClient(
239 service_path=self.SERVICE_ADDRESS,
240 channel=channel,
241 credentials=credentials,
242 ssl_credentials=ssl_credentials,
243 scopes=scopes,
244 client_config=client_config,
245 metrics_headers=metrics_headers, )
246
247 self._list_databases = api_callable.create_api_call(
248 self.database_admin_stub.ListDatabases,
249 settings=defaults['list_databases'])
250 self._create_database = api_callable.create_api_call(
251 self.database_admin_stub.CreateDatabase,
252 settings=defaults['create_database'])
253 self._get_database = api_callable.create_api_call(
254 self.database_admin_stub.GetDatabase,
255 settings=defaults['get_database'])
256 self._update_database_ddl = api_callable.create_api_call(
257 self.database_admin_stub.UpdateDatabaseDdl,
258 settings=defaults['update_database_ddl'])
259 self._drop_database = api_callable.create_api_call(
260 self.database_admin_stub.DropDatabase,
261 settings=defaults['drop_database'])
262 self._get_database_ddl = api_callable.create_api_call(
263 self.database_admin_stub.GetDatabaseDdl,
264 settings=defaults['get_database_ddl'])
265 self._set_iam_policy = api_callable.create_api_call(
266 self.database_admin_stub.SetIamPolicy,
267 settings=defaults['set_iam_policy'])
268 self._get_iam_policy = api_callable.create_api_call(
269 self.database_admin_stub.GetIamPolicy,
270 settings=defaults['get_iam_policy'])
271 self._test_iam_permissions = api_callable.create_api_call(
272 self.database_admin_stub.TestIamPermissions,
273 settings=defaults['test_iam_permissions'])
274
275 # Service calls
276 def list_databases(self, parent, page_size=None, options=None):
277 """
278 Lists Cloud Spanner databases.
279
280 Example:
281 >>> from google.cloud import spanner_admin_database_v1
282 >>> from google.gax import CallOptions, INITIAL_PAGE
283 >>>
284 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
285 >>>
286 >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]')
287 >>>
288 >>>
289 >>> # Iterate over all results
290 >>> for element in client.list_databases(parent):
291 ... # process element
292 ... pass
293 >>>
294 >>> # Or iterate over results one page at a time
295 >>> for page in client.list_databases(parent, options=CallOptions(page_token=INITIAL_PAGE)):
296 ... for element in page:
297 ... # process element
298 ... pass
299
300 Args:
301 parent (str): Required. The instance whose databases should be listed.
302 Values are of the form ``projects/<project>/instances/<instance>``.
303 page_size (int): The maximum number of resources contained in the
304 underlying API response. If page streaming is performed per-
305 resource, this parameter does not affect the return value. If page
306 streaming is performed per-page, this determines the maximum number
307 of resources in a page.
308 options (~google.gax.CallOptions): Overrides the default
309 settings for this call, e.g, timeout, retries etc.
310
311 Returns:
312 A :class:`~google.gax.PageIterator` instance. By default, this
313 is an iterable of :class:`~google.cloud.spanner_admin_database_v1.types.Database` instances.
314 This object can also be configured to iterate over the pages
315 of the response through the `options` parameter.
316
317 Raises:
318 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
319 :exc:`ValueError` if the parameters are invalid.
320 """
321 request = spanner_database_admin_pb2.ListDatabasesRequest(
322 parent=parent, page_size=page_size)
323 return self._list_databases(request, options)
324
325 def create_database(self,
326 parent,
327 create_statement,
328 extra_statements=None,
329 options=None):
330 """
331 Creates a new Cloud Spanner database and starts to prepare it for serving.
332 The returned ``long-running operation`` will
333 have a name of the format ``<database_name>/operations/<operation_id>`` and
334 can be used to track preparation of the database. The
335 ``metadata`` field type is
336 ``CreateDatabaseMetadata``. The
337 ``response`` field type is
338 ``Database``, if successful.
339
340 Example:
341 >>> from google.cloud import spanner_admin_database_v1
342 >>>
343 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
344 >>>
345 >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]')
346 >>> create_statement = ''
347 >>>
348 >>> response = client.create_database(parent, create_statement)
349 >>>
350 >>> def callback(operation_future):
351 ... # Handle result.
352 ... result = operation_future.result()
353 >>>
354 >>> response.add_done_callback(callback)
355 >>>
356 >>> # Handle metadata.
357 >>> metadata = response.metadata()
358
359 Args:
360 parent (str): Required. The name of the instance that will serve the new database.
361 Values are of the form ``projects/<project>/instances/<instance>``.
362 create_statement (str): Required. A ``CREATE DATABASE`` statement, which specifies the ID of the
363 new database. The database ID must conform to the regular expression
364 ``[a-z][a-z0-9_\-]*[a-z0-9]`` and be between 2 and 30 characters in length.
365 extra_statements (list[str]): An optional list of DDL statements to run inside the newly created
366 database. Statements can create tables, indexes, etc. These
367 statements execute atomically with the creation of the database:
368 if there is an error in any statement, the database is not created.
369 options (~google.gax.CallOptions): Overrides the default
370 settings for this call, e.g, timeout, retries etc.
371
372 Returns:
373 A :class:`~google.cloud.spanner_admin_database_v1.types._OperationFuture` instance.
374
375 Raises:
376 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
377 :exc:`ValueError` if the parameters are invalid.
378 """
379 request = spanner_database_admin_pb2.CreateDatabaseRequest(
380 parent=parent,
381 create_statement=create_statement,
382 extra_statements=extra_statements)
383 return google.gax._OperationFuture(
384 self._create_database(request, options), self.operations_client,
385 spanner_database_admin_pb2.Database,
386 spanner_database_admin_pb2.CreateDatabaseMetadata, options)
387
388 def get_database(self, name, options=None):
389 """
390 Gets the state of a Cloud Spanner database.
391
392 Example:
393 >>> from google.cloud import spanner_admin_database_v1
394 >>>
395 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
396 >>>
397 >>> name = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]')
398 >>>
399 >>> response = client.get_database(name)
400
401 Args:
402 name (str): Required. The name of the requested database. Values are of the form
403 ``projects/<project>/instances/<instance>/databases/<database>``.
404 options (~google.gax.CallOptions): Overrides the default
405 settings for this call, e.g, timeout, retries etc.
406
407 Returns:
408 A :class:`~google.cloud.spanner_admin_database_v1.types.Database` instance.
409
410 Raises:
411 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
412 :exc:`ValueError` if the parameters are invalid.
413 """
414 request = spanner_database_admin_pb2.GetDatabaseRequest(name=name)
415 return self._get_database(request, options)
416
417 def update_database_ddl(self,
418 database,
419 statements,
420 operation_id=None,
421 options=None):
422 """
423 Updates the schema of a Cloud Spanner database by
424 creating/altering/dropping tables, columns, indexes, etc. The returned
425 ``long-running operation`` will have a name of
426 the format ``<database_name>/operations/<operation_id>`` and can be used to
427 track execution of the schema change(s). The
428 ``metadata`` field type is
429 ``UpdateDatabaseDdlMetadata``. The operation has no response.
430
431 Example:
432 >>> from google.cloud import spanner_admin_database_v1
433 >>>
434 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
435 >>>
436 >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]')
437 >>> statements = []
438 >>>
439 >>> response = client.update_database_ddl(database, statements)
440 >>>
441 >>> def callback(operation_future):
442 ... # Handle result.
443 ... result = operation_future.result()
444 >>>
445 >>> response.add_done_callback(callback)
446 >>>
447 >>> # Handle metadata.
448 >>> metadata = response.metadata()
449
450 Args:
451 database (str): Required. The database to update.
452 statements (list[str]): DDL statements to be applied to the database.
453 operation_id (str): If empty, the new update request is assigned an
454 automatically-generated operation ID. Otherwise, ``operation_id``
455 is used to construct the name of the resulting
456 ``Operation``.
457
458 Specifying an explicit operation ID simplifies determining
459 whether the statements were executed in the event that the
460 ``UpdateDatabaseDdl`` call is replayed,
461 or the return value is otherwise lost: the ``database`` and
462 ``operation_id`` fields can be combined to form the
463 ``name`` of the resulting
464 ``longrunning.Operation``: ``<database>/operations/<operation_id>``.
465
466 ``operation_id`` should be unique within the database, and must be
467 a valid identifier: ``[a-z][a-z0-9_]*``. Note that
468 automatically-generated operation IDs always begin with an
469 underscore. If the named operation already exists,
470 ``UpdateDatabaseDdl`` returns
471 ``ALREADY_EXISTS``.
472 options (~google.gax.CallOptions): Overrides the default
473 settings for this call, e.g, timeout, retries etc.
474
475 Returns:
476 A :class:`~google.cloud.spanner_admin_database_v1.types._OperationFuture` instance.
477
478 Raises:
479 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
480 :exc:`ValueError` if the parameters are invalid.
481 """
482 request = spanner_database_admin_pb2.UpdateDatabaseDdlRequest(
483 database=database,
484 statements=statements,
485 operation_id=operation_id)
486 return google.gax._OperationFuture(
487 self._update_database_ddl(request, options),
488 self.operations_client, empty_pb2.Empty,
489 spanner_database_admin_pb2.UpdateDatabaseDdlMetadata, options)
490
491 def drop_database(self, database, options=None):
492 """
493 Drops (aka deletes) a Cloud Spanner database.
494
495 Example:
496 >>> from google.cloud import spanner_admin_database_v1
497 >>>
498 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
499 >>>
500 >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]')
501 >>>
502 >>> client.drop_database(database)
503
504 Args:
505 database (str): Required. The database to be dropped.
506 options (~google.gax.CallOptions): Overrides the default
507 settings for this call, e.g, timeout, retries etc.
508
509 Raises:
510 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
511 :exc:`ValueError` if the parameters are invalid.
512 """
513 request = spanner_database_admin_pb2.DropDatabaseRequest(
514 database=database)
515 self._drop_database(request, options)
516
517 def get_database_ddl(self, database, options=None):
518 """
519 Returns the schema of a Cloud Spanner database as a list of formatted
520 DDL statements. This method does not show pending schema updates, those may
521 be queried using the ``Operations`` API.
522
523 Example:
524 >>> from google.cloud import spanner_admin_database_v1
525 >>>
526 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
527 >>>
528 >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]')
529 >>>
530 >>> response = client.get_database_ddl(database)
531
532 Args:
533 database (str): Required. The database whose schema we wish to get.
534 options (~google.gax.CallOptions): Overrides the default
535 settings for this call, e.g, timeout, retries etc.
536
537 Returns:
538 A :class:`~google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse` instance.
539
540 Raises:
541 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
542 :exc:`ValueError` if the parameters are invalid.
543 """
544 request = spanner_database_admin_pb2.GetDatabaseDdlRequest(
545 database=database)
546 return self._get_database_ddl(request, options)
547
548 def set_iam_policy(self, resource, policy, options=None):
549 """
550 Sets the access control policy on a database resource. Replaces any
551 existing policy.
552
553 Authorization requires ``spanner.databases.setIamPolicy`` permission on
554 ``resource``.
555
556 Example:
557 >>> from google.cloud import spanner_admin_database_v1
558 >>>
559 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
560 >>>
561 >>> resource = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]')
562 >>> policy = {}
563 >>>
564 >>> response = client.set_iam_policy(resource, policy)
565
566 Args:
567 resource (str): REQUIRED: The resource for which the policy is being specified.
568 ``resource`` is usually specified as a path. For example, a Project
569 resource is specified as ``projects/{project}``.
570 policy (Union[dict, ~google.cloud.spanner_admin_database_v1.types.Policy]): REQUIRED: The complete policy to be applied to the ``resource``. The size of
571 the policy is limited to a few 10s of KB. An empty policy is a
572 valid policy but certain Cloud Platform services (such as Projects)
573 might reject them.
574 If a dict is provided, it must be of the same form as the protobuf
575 message :class:`~google.cloud.spanner_admin_database_v1.types.Policy`
576 options (~google.gax.CallOptions): Overrides the default
577 settings for this call, e.g, timeout, retries etc.
578
579 Returns:
580 A :class:`~google.cloud.spanner_admin_database_v1.types.Policy` instance.
581
582 Raises:
583 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
584 :exc:`ValueError` if the parameters are invalid.
585 """
586 request = iam_policy_pb2.SetIamPolicyRequest(
587 resource=resource, policy=policy)
588 return self._set_iam_policy(request, options)
589
590 def get_iam_policy(self, resource, options=None):
591 """
592 Gets the access control policy for a database resource. Returns an empty
593 policy if a database exists but does not have a policy set.
594
595 Authorization requires ``spanner.databases.getIamPolicy`` permission on
596 ``resource``.
597
598 Example:
599 >>> from google.cloud import spanner_admin_database_v1
600 >>>
601 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
602 >>>
603 >>> resource = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]')
604 >>>
605 >>> response = client.get_iam_policy(resource)
606
607 Args:
608 resource (str): REQUIRED: The resource for which the policy is being requested.
609 ``resource`` is usually specified as a path. For example, a Project
610 resource is specified as ``projects/{project}``.
611 options (~google.gax.CallOptions): Overrides the default
612 settings for this call, e.g, timeout, retries etc.
613
614 Returns:
615 A :class:`~google.cloud.spanner_admin_database_v1.types.Policy` instance.
616
617 Raises:
618 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
619 :exc:`ValueError` if the parameters are invalid.
620 """
621 request = iam_policy_pb2.GetIamPolicyRequest(resource=resource)
622 return self._get_iam_policy(request, options)
623
624 def test_iam_permissions(self, resource, permissions, options=None):
625 """
626 Returns permissions that the caller has on the specified database resource.
627
628 Attempting this RPC on a non-existent Cloud Spanner database will result in
629 a NOT_FOUND error if the user has ``spanner.databases.list`` permission on
630 the containing Cloud Spanner instance. Otherwise returns an empty set of
631 permissions.
632
633 Example:
634 >>> from google.cloud import spanner_admin_database_v1
635 >>>
636 >>> client = spanner_admin_database_v1.DatabaseAdminClient()
637 >>>
638 >>> resource = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]')
639 >>> permissions = []
640 >>>
641 >>> response = client.test_iam_permissions(resource, permissions)
642
643 Args:
644 resource (str): REQUIRED: The resource for which the policy detail is being requested.
645 ``resource`` is usually specified as a path. For example, a Project
646 resource is specified as ``projects/{project}``.
647 permissions (list[str]): The set of permissions to check for the ``resource``. Permissions with
648 wildcards (such as '*' or 'storage.*') are not allowed. For more
649 information see
650 `IAM Overview <https://cloud.google.com/iam/docs/overview#permissions>`_.
651 options (~google.gax.CallOptions): Overrides the default
652 settings for this call, e.g, timeout, retries etc.
653
654 Returns:
655 A :class:`~google.cloud.spanner_admin_database_v1.types.TestIamPermissionsResponse` instance.
656
657 Raises:
658 :exc:`google.gax.errors.GaxError` if the RPC is aborted.
659 :exc:`ValueError` if the parameters are invalid.
660 """
661 request = iam_policy_pb2.TestIamPermissionsRequest(
662 resource=resource, permissions=permissions)
663 return self._test_iam_permissions(request, options)
664
[end of spanner/google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py]
[start of vision/google/cloud/vision_helpers/decorators.py]
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17
18 def add_single_feature_methods(cls):
19 """Custom decorator intended for :class:`~vision.helpers.VisionHelpers`.
20
21 This decorator adds a `{feature}` method for every feature
22 defined on the Feature enum.
23 """
24 # Sanity check: This only makes sense if we are building the GAPIC
25 # subclass and have enums already attached.
26 if not hasattr(cls, 'enums'):
27 return cls
28
29 # Iterate over the Feature.Type enum and get a list of
30 # features which will receive single-feature detection methods.
31 features = [k for k in cls.enums.Feature.Type.__dict__.keys()
32 if k.replace('_', '').isalpha() and k.upper() == k]
33
34 # Add each single-feature method to the class.
35 for feature in features:
36 # Sanity check: Do not make a method for the falsy feature.
37 if feature == 'TYPE_UNSPECIFIED':
38 continue
39
40 # Assign the appropriate metadata to the function.
41 detect = _create_single_feature_method(feature, cls.enums.Feature.Type)
42
43 # Assign a qualified name to the function, and perform module
44 # replacement on the docstring.
45 detect.__qualname__ = '{cls}.{name}'.format(
46 cls=cls.__name__,
47 name=detect.__name__,
48 )
49 detect.__doc__ = detect.__doc__.format(
50 module=cls.__module__,
51 )
52
53 # Place the function on the class being created.
54 setattr(cls, detect.__name__, detect)
55
56 # Done; return the class.
57 return cls
58
59
60 def _create_single_feature_method(feature, enum):
61 """Return a function that will detect a single feature.
62
63 Args:
64 feature (str): A specific feature defined as an attribute on
65 :class:`~enums.Feature.Type`.
66 enum (class): The :class:`~enums.Feature.Type` class.
67
68 Returns:
69 function: A helper function to detect just that feature.
70 """
71 # Define the function properties.
72 fx_name = feature.lower()
73 if 'detection' in fx_name:
74 fx_doc = 'Perform {0}.'.format(fx_name.replace('_', ' '))
75 else:
76 fx_doc = 'Return {desc} information.'.format(
77 desc=fx_name.replace('_', ' '),
78 )
79
80 # Provide a complete docstring with argument and return value
81 # information.
82 fx_doc += """
83
84 Args:
85 image (:class:`~.{module}.types.Image`): The image to analyze.
86 options (:class:`google.gax.CallOptions`): Overrides the
87 default settings for this call, e.g, timeout, retries, etc.
88 kwargs (dict): Additional properties to be set on the
89 :class:`~.{module}.types.AnnotateImageRequest`.
90
91 Returns:
92 :class:`~.{module}.types.AnnotateImageResponse`: The API response.
93 """
94
95 # Get the actual feature value to send.
96 feature_value = {'type': enum.__dict__[feature]}
97
98 # Define the function to be returned.
99 def inner(self, image, options=None, **kwargs):
100 """Return a single feature annotation for the given image.
101
102 Intended for use with functools.partial, to create the particular
103 single-feature methods.
104 """
105 request = dict(
106 image=image,
107 features=[feature_value],
108 **kwargs
109 )
110 return self.annotate_image(request, options=options)
111
112 # Set the appropriate function metadata.
113 inner.__name__ = fx_name
114 inner.__doc__ = fx_doc
115
116 # Return the final function.
117 return inner
118
[end of vision/google/cloud/vision_helpers/decorators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
googleapis/google-cloud-python
|
e12372db34b6202d8c4f6e50d0e8b8c047f30d9d
|
api_core: Add metadata option to wrap_method
To update GAPIC to use routing headers, `wrap_method` needs to accept more metadata than just the user agent. Example:
```python
metadata = [
(b'x-google-request-params',
b'name={}&book.read={}'.format(name, book.read))]
self._create_book(request, retry=retry, timeout=timeout, metadata=metadata)
```
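
In other words, the wrapped callable would need to merge metadata configured once at wrap time (for example the user-agent entry derived from `client_info`) with per-call metadata like the routing header above. A minimal sketch of that merge, using hypothetical helper names rather than the final API:
```python
def merge_metadata(wrap_time_metadata, call_time_metadata):
    # Hypothetical helper: per-call entries first, then the entries that
    # were fixed when the method was wrapped (e.g. the user agent).
    merged = list(call_time_metadata or [])
    merged.extend(wrap_time_metadata or [])
    return merged

print(merge_metadata(
    [("x-goog-api-client", "gl-python/3.6 gapic/0.1.0")],
    [("x-google-request-params", "name=projects/p/books/b&book.read=True")],
))
```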
|
@eoogbe not to be obnoxious, but I have the exact same question about this change as I had with the (same? similar?) feature proposed in https://github.com/googleapis/gax-python/pull/203
Why does this need to occur at *wrap* time as opposed to *invocation* time? Doing this at wrap time adds complexity to this layer but adding metadata at invocation time is a feature that already exists. It seems the rationale here is "routing headers" - won't those depend on the RPC request message which isn't known until invocation time?
@jonparrott I got a little confused at the api change. Can I specify metadata once the method is wrapped?
Yep! The wrapper passes through all keyword args to the wrapped method so you can just pass it in like you would when invoking any old grpc method. I can throw up a code snippet tomorrow if needed.
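
For illustration, a minimal sketch of passing metadata at invocation time through the existing wrapper (`get_topic` here is a stand-in for a real gRPC stub method, not something the library provides):
```python
from google.api_core.gapic_v1 import method

def get_topic(name, timeout=None, metadata=None):
    # Stand-in for a low-level stub call; a real stub would forward
    # `metadata` to the gRPC channel as request headers.
    print("metadata:", metadata)

wrapped_get_topic = method.wrap_method(get_topic, default_timeout=60)

# Per-call metadata is passed straight through via **kwargs; the wrapper
# appends its own user-agent entry before calling the target.
wrapped_get_topic(
    "projects/p/topics/t",
    metadata=[("x-google-request-params", "name=projects/p/topics/t")],
)
```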
|
2017-11-03T18:05:16Z
|
<patch>
diff --git a/api_core/google/api_core/gapic_v1/method.py b/api_core/google/api_core/gapic_v1/method.py
--- a/api_core/google/api_core/gapic_v1/method.py
+++ b/api_core/google/api_core/gapic_v1/method.py
@@ -94,16 +94,17 @@ class _GapicCallable(object):
timeout (google.api_core.timeout.Timeout): The default timeout
for the callable. If ``None``, this callable will not specify
a timeout argument to the low-level RPC method by default.
- user_agent_metadata (Tuple[str, str]): The user agent metadata key and
- value to provide to the RPC method. If ``None``, no additional
- metadata will be passed to the RPC method.
+ metadata (Sequence[Tuple[str, str]]): Additional metadata that is
+ provided to the RPC method on every invocation. This is merged with
+ any metadata specified during invocation. If ``None``, no
+ additional metadata will be passed to the RPC method.
"""
- def __init__(self, target, retry, timeout, user_agent_metadata=None):
+ def __init__(self, target, retry, timeout, metadata=None):
self._target = target
self._retry = retry
self._timeout = timeout
- self._user_agent_metadata = user_agent_metadata
+ self._metadata = metadata
def __call__(self, *args, **kwargs):
"""Invoke the low-level RPC with retry, timeout, and metadata."""
@@ -126,9 +127,9 @@ def __call__(self, *args, **kwargs):
wrapped_func = _apply_decorators(self._target, [retry, timeout_])
# Add the user agent metadata to the call.
- if self._user_agent_metadata is not None:
- metadata = kwargs.get('metadata', [])
- metadata.append(self._user_agent_metadata)
+ if self._metadata is not None:
+ metadata = list(kwargs.get('metadata', []))
+ metadata.extend(self._metadata)
kwargs['metadata'] = metadata
return wrapped_func(*args, **kwargs)
@@ -219,11 +220,11 @@ def get_topic(name, timeout=None):
func = grpc_helpers.wrap_errors(func)
if client_info is not None:
- user_agent_metadata = client_info.to_grpc_metadata()
+ user_agent_metadata = [client_info.to_grpc_metadata()]
else:
user_agent_metadata = None
return general_helpers.wraps(func)(
_GapicCallable(
func, default_retry, default_timeout,
- user_agent_metadata=user_agent_metadata))
+ metadata=user_agent_metadata))
</patch>
|
[]
|
[]
| |||
pantsbuild__pants-13402
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`--layout=packed` should always be used for the `repository.pex`
**Describe the bug**
When building a `pex_binary` with `requirements_constraints` set, the `repository.pex` built for the binary does not use `--layout=packed`, because the request that triggers the constraints build is not `internal_only`.
But it _should_ use `--layout=packed`, since it is only the source of the artifacts for the pex rather than the exposed artifact. All `repository.pex` builds should use it, since they are never exposed verbatim and are always consumed by other builds.
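
For context, the packed layout is requested on the Pex CLI with `--layout=packed`. A minimal sketch of what "always packed for the repository PEX" could mean (hypothetical helper and condition, not the project's actual fix; `is_repository_pex` is an assumed marker for requests whose output only feeds other PEX builds):
```python
def layout_args(internal_only, is_repository_pex):
    # Hypothetical helper: a repository.pex is never shipped verbatim, so it
    # always gets the packed layout, independent of `internal_only`.
    return ["--layout=packed"] if (internal_only or is_repository_pex) else []

print(layout_args(internal_only=False, is_repository_pex=True))  # ['--layout=packed']
```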
**Pants version**
`2.7.1`
</issue>
<code>
[start of README.md]
1 # Pants Build System
2
3 Pants is a scalable build system for _monorepos_: codebases containing
4 multiple projects, often using multiple programming languages and frameworks,
5 in a single unified code repository.
6
7 Some noteworthy features include:
8
9 * Explicit dependency modeling.
10 * Fine-grained invalidation.
11 * Shared result caching.
12 * Concurrent execution.
13 * Remote execution.
14 * Unified interface for multiple tools and languages.
15 * Extensibility and customizability via a plugin API.
16
17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/).
18
19 We release to [PyPI](https://pypi.org/pypi)
20 [](https://pypi.org/pypi/pantsbuild.pants)
21 [](https://pypi.org/pypi/pantsbuild.pants)
22
23 # Requirements
24
25 To run Pants, you need:
26
27 * Linux or macOS.
28 * Python 3.7+ discoverable on your `PATH`.
29 * A C compiler, system headers and Python headers (to compile native Python modules).
30 * Internet access (so that Pants can fully bootstrap itself).
31
[end of README.md]
[start of src/python/pants/backend/python/util_rules/pex.py]
1 # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import dataclasses
7 import json
8 import logging
9 import os
10 import shlex
11 from dataclasses import dataclass
12 from pathlib import PurePath
13 from textwrap import dedent
14 from typing import Iterable, Iterator, List, Mapping, Tuple
15
16 import packaging.specifiers
17 import packaging.version
18 from pkg_resources import Requirement
19
20 from pants.backend.python.target_types import MainSpecification
21 from pants.backend.python.target_types import PexPlatformsField as PythonPlatformsField
22 from pants.backend.python.target_types import PythonRequirementsField
23 from pants.backend.python.util_rules import pex_cli
24 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints
25 from pants.backend.python.util_rules.lockfile_metadata import (
26 InvalidLockfileReason,
27 LockfileMetadata,
28 )
29 from pants.backend.python.util_rules.pex_cli import PexCliProcess, PexPEX
30 from pants.backend.python.util_rules.pex_environment import (
31 CompletePexEnvironment,
32 PexEnvironment,
33 PexRuntimeEnvironment,
34 PythonExecutable,
35 )
36 from pants.engine.collection import Collection, DeduplicatedCollection
37 from pants.engine.engine_aware import EngineAwareParameter
38 from pants.engine.fs import (
39 EMPTY_DIGEST,
40 AddPrefix,
41 CreateDigest,
42 Digest,
43 DigestContents,
44 FileContent,
45 GlobMatchErrorBehavior,
46 MergeDigests,
47 PathGlobs,
48 )
49 from pants.engine.platform import Platform
50 from pants.engine.process import (
51 BashBinary,
52 MultiPlatformProcess,
53 Process,
54 ProcessCacheScope,
55 ProcessResult,
56 )
57 from pants.engine.rules import Get, collect_rules, rule
58 from pants.python.python_repos import PythonRepos
59 from pants.python.python_setup import InvalidLockfileBehavior, PythonSetup
60 from pants.util.docutil import doc_url
61 from pants.util.frozendict import FrozenDict
62 from pants.util.logging import LogLevel
63 from pants.util.meta import frozen_after_init
64 from pants.util.ordered_set import FrozenOrderedSet
65 from pants.util.strutil import pluralize
66
67
68 @dataclass(frozen=True)
69 class Lockfile:
70 file_path: str
71 file_path_description_of_origin: str
72 lockfile_hex_digest: str | None
73
74
75 @dataclass(frozen=True)
76 class LockfileContent:
77 file_content: FileContent
78 lockfile_hex_digest: str | None
79
80
81 @dataclass(frozen=True)
82 class _ToolLockfileMixin:
83 options_scope_name: str
84 uses_source_plugins: bool
85 uses_project_interpreter_constraints: bool
86
87
88 @dataclass(frozen=True)
89 class ToolDefaultLockfile(LockfileContent, _ToolLockfileMixin):
90 pass
91
92
93 @dataclass(frozen=True)
94 class ToolCustomLockfile(Lockfile, _ToolLockfileMixin):
95 pass
96
97
98 @frozen_after_init
99 @dataclass(unsafe_hash=True)
100 class PexRequirements:
101 req_strings: FrozenOrderedSet[str]
102 repository_pex: Pex | None
103
104 def __init__(
105 self, req_strings: Iterable[str] = (), *, repository_pex: Pex | None = None
106 ) -> None:
107 """
108 :param req_strings: The requirement strings to resolve.
109 :param repository_pex: An optional PEX to resolve requirements from via the Pex CLI
110 `--pex-repository` option.
111 """
112 self.req_strings = FrozenOrderedSet(sorted(req_strings))
113 self.repository_pex = repository_pex
114
115 @classmethod
116 def create_from_requirement_fields(
117 cls,
118 fields: Iterable[PythonRequirementsField],
119 *,
120 additional_requirements: Iterable[str] = (),
121 ) -> PexRequirements:
122 field_requirements = {str(python_req) for field in fields for python_req in field.value}
123 return PexRequirements({*field_requirements, *additional_requirements})
124
125 def __bool__(self) -> bool:
126 return bool(self.req_strings)
127
128
129 class PexPlatforms(DeduplicatedCollection[str]):
130 sort_input = True
131
132 @classmethod
133 def create_from_platforms_field(cls, field: PythonPlatformsField) -> PexPlatforms:
134 return cls(field.value or ())
135
136 def generate_pex_arg_list(self) -> List[str]:
137 args = []
138 for platform in self:
139 args.extend(["--platform", platform])
140 return args
141
142
143 @frozen_after_init
144 @dataclass(unsafe_hash=True)
145 class PexRequest(EngineAwareParameter):
146 output_filename: str
147 internal_only: bool
148 python: PythonExecutable | None
149 requirements: PexRequirements | Lockfile | LockfileContent
150 interpreter_constraints: InterpreterConstraints
151 platforms: PexPlatforms
152 sources: Digest | None
153 additional_inputs: Digest | None
154 main: MainSpecification | None
155 additional_args: Tuple[str, ...]
156 pex_path: Tuple[Pex, ...]
157 apply_requirement_constraints: bool
158 description: str | None = dataclasses.field(compare=False)
159
160 def __init__(
161 self,
162 *,
163 output_filename: str,
164 internal_only: bool,
165 python: PythonExecutable | None = None,
166 requirements: PexRequirements | Lockfile | LockfileContent = PexRequirements(),
167 interpreter_constraints=InterpreterConstraints(),
168 platforms=PexPlatforms(),
169 sources: Digest | None = None,
170 additional_inputs: Digest | None = None,
171 main: MainSpecification | None = None,
172 additional_args: Iterable[str] = (),
173 pex_path: Iterable[Pex] = (),
174 apply_requirement_constraints: bool = False,
175 description: str | None = None,
176 ) -> None:
177 """A request to create a PEX from its inputs.
178
179 :param output_filename: The name of the built Pex file, which typically should end in
180 `.pex`.
181 :param internal_only: Whether we ever materialize the Pex and distribute it directly
182 to end users, such as with the `binary` goal. Typically, instead, the user never
183 directly uses the Pex, e.g. with `lint` and `test`. If True, we will use a Pex setting
184 that results in faster build time but compatibility with fewer interpreters at runtime.
185 :param python: A particular PythonExecutable to use, which must match any relevant
186 interpreter_constraints.
187 :param requirements: The requirements that the PEX should contain.
188 :param interpreter_constraints: Any constraints on which Python versions may be used.
189 :param platforms: Which platforms should be supported. Setting this value will cause
190 interpreter constraints to not be used because platforms already constrain the valid
191 Python versions, e.g. by including `cp36m` in the platform string.
192 :param sources: Any source files that should be included in the Pex.
193 :param additional_inputs: Any inputs that are not source files and should not be included
194 directly in the Pex, but should be present in the environment when building the Pex.
195 :param main: The main for the built Pex, equivalent to Pex's `-e` or '-c' flag. If
196 left off, the Pex will open up as a REPL.
197 :param additional_args: Any additional Pex flags.
198 :param pex_path: Pex files to add to the PEX_PATH.
199 :param apply_requirement_constraints: Whether to apply any configured
200 requirement_constraints while building this PEX.
201 :param description: A human-readable description to render in the dynamic UI when building
202 the Pex.
203 """
204 self.output_filename = output_filename
205 self.internal_only = internal_only
206 self.python = python
207 self.requirements = requirements
208 self.interpreter_constraints = interpreter_constraints
209 self.platforms = platforms
210 self.sources = sources
211 self.additional_inputs = additional_inputs
212 self.main = main
213 self.additional_args = tuple(additional_args)
214 self.pex_path = tuple(pex_path)
215 self.apply_requirement_constraints = apply_requirement_constraints
216 self.description = description
217 self.__post_init__()
218
219 def __post_init__(self):
220 if self.internal_only and self.platforms:
221 raise ValueError(
222 "Internal only PEXes can only constrain interpreters with interpreter_constraints."
223 f"Given platform constraints {self.platforms} for internal only pex request: "
224 f"{self}."
225 )
226 if self.python and self.platforms:
227 raise ValueError(
228 "Only one of platforms or a specific interpreter may be set. Got "
229 f"both {self.platforms} and {self.python}."
230 )
231 if self.python and self.interpreter_constraints:
232 raise ValueError(
233 "Only one of interpreter_constraints or a specific interpreter may be set. Got "
234 f"both {self.interpreter_constraints} and {self.python}."
235 )
236
237 def debug_hint(self) -> str:
238 return self.output_filename
239
240
241 @dataclass(frozen=True)
242 class Pex:
243 """Wrapper for a digest containing a pex file created with some filename."""
244
245 digest: Digest
246 name: str
247 python: PythonExecutable | None
248
249
250 logger = logging.getLogger(__name__)
251
252
253 @rule(desc="Find Python interpreter for constraints", level=LogLevel.DEBUG)
254 async def find_interpreter(
255 interpreter_constraints: InterpreterConstraints, pex_runtime_env: PexRuntimeEnvironment
256 ) -> PythonExecutable:
257 formatted_constraints = " OR ".join(str(constraint) for constraint in interpreter_constraints)
258 result = await Get(
259 ProcessResult,
260 PexCliProcess(
261 description=f"Find interpreter for constraints: {formatted_constraints}",
262 # Here, we run the Pex CLI with no requirements, which just selects an interpreter.
263 # Normally, this would start an isolated repl. By passing `--`, we force the repl to
264 # instead act as an interpreter (the selected one) and tell us about itself. The upshot
265 # is we run the Pex interpreter selection logic unperturbed but without resolving any
266 # distributions.
267 argv=(
268 *interpreter_constraints.generate_pex_arg_list(),
269 "--",
270 "-c",
271 # N.B.: The following code snippet must be compatible with Python 2.7 and
272 # Python 3.5+.
273 #
274 # When hashing, we pick 8192 for efficiency of reads and fingerprint updates
275 # (writes) since it's a common OS buffer size and an even multiple of the
276 # hash block size.
277 dedent(
278 """\
279 import hashlib, os, sys
280
281 python = os.path.realpath(sys.executable)
282 print(python)
283
284 hasher = hashlib.sha256()
285 with open(python, "rb") as fp:
286 for chunk in iter(lambda: fp.read(8192), b""):
287 hasher.update(chunk)
288 print(hasher.hexdigest())
289 """
290 ),
291 ),
292 level=LogLevel.DEBUG,
293 # NB: We want interpreter discovery to re-run fairly frequently
294 # (PER_RESTART_SUCCESSFUL), but not on every run of Pants (NEVER, which is effectively
295 # per-Session). See #10769 for a solution that is less of a tradeoff.
296 cache_scope=ProcessCacheScope.PER_RESTART_SUCCESSFUL,
297 ),
298 )
299 path, fingerprint = result.stdout.decode().strip().splitlines()
300
301 if pex_runtime_env.verbosity > 0:
302 log_output = result.stderr.decode()
303 if log_output:
304 logger.info("%s", log_output)
305
306 return PythonExecutable(path=path, fingerprint=fingerprint)
307
308
309 @dataclass(frozen=True)
310 class BuildPexResult:
311 result: ProcessResult
312 pex_filename: str
313 digest: Digest
314 python: PythonExecutable | None
315
316 def create_pex(self) -> Pex:
317 return Pex(digest=self.digest, name=self.pex_filename, python=self.python)
318
319
320 @rule(level=LogLevel.DEBUG)
321 async def build_pex(
322 request: PexRequest,
323 python_setup: PythonSetup,
324 python_repos: PythonRepos,
325 platform: Platform,
326 pex_runtime_env: PexRuntimeEnvironment,
327 ) -> BuildPexResult:
328 """Returns a PEX with the given settings."""
329 argv = ["--output-file", request.output_filename, *request.additional_args]
330
331 repository_pex = (
332 request.requirements.repository_pex
333 if isinstance(request.requirements, PexRequirements)
334 else None
335 )
336 if repository_pex:
337 argv.extend(["--pex-repository", repository_pex.name])
338 else:
339 # NB: In setting `--no-pypi`, we rely on the default value of `--python-repos-indexes`
340 # including PyPI, which will override `--no-pypi` and result in using PyPI in the default
341 # case. Why set `--no-pypi`, then? We need to do this so that
342 # `--python-repos-repos=['custom_url']` will only point to that index and not include PyPI.
343 argv.extend(
344 [
345 "--no-pypi",
346 *(f"--index={index}" for index in python_repos.indexes),
347 *(f"--repo={repo}" for repo in python_repos.repos),
348 "--resolver-version",
349 "pip-2020-resolver",
350 ]
351 )
352
353 is_lockfile = isinstance(request.requirements, (Lockfile, LockfileContent))
354 if is_lockfile:
355 argv.append("--no-transitive")
356
357 python: PythonExecutable | None = None
358
359 # NB: If `--platform` is specified, this signals that the PEX should not be built locally.
360 # `--interpreter-constraint` only makes sense in the context of building locally. These two
361 # flags are mutually exclusive. See https://github.com/pantsbuild/pex/issues/957.
362 if request.platforms:
363 # TODO(#9560): consider validating that these platforms are valid with the interpreter
364 # constraints.
365 argv.extend(request.platforms.generate_pex_arg_list())
366 elif request.python:
367 python = request.python
368 elif request.internal_only:
369 # NB: If it's an internal_only PEX, we do our own lookup of the interpreter based on the
370 # interpreter constraints, and then will run the PEX with that specific interpreter. We
371 # will have already validated that there were no platforms.
372 python = await Get(
373 PythonExecutable, InterpreterConstraints, request.interpreter_constraints
374 )
375 else:
376 # `--interpreter-constraint` options are mutually exclusive with the `--python` option,
377 # so we only specify them if we have not already located a concrete Python.
378 argv.extend(request.interpreter_constraints.generate_pex_arg_list())
379
380 if python:
381 argv.extend(["--python", python.path])
382
383 argv.append("--no-emit-warnings")
384
385 if python_setup.resolver_jobs:
386 argv.extend(["--jobs", str(python_setup.resolver_jobs)])
387
388 if python_setup.manylinux:
389 argv.extend(["--manylinux", python_setup.manylinux])
390 else:
391 argv.append("--no-manylinux")
392
393 if request.main is not None:
394 argv.extend(request.main.iter_pex_args())
395
396 # TODO(John Sirois): Right now any request requirements will shadow corresponding pex path
397 # requirements, which could lead to problems. Support shading python binaries.
398 # See: https://github.com/pantsbuild/pants/issues/9206
399 if request.pex_path:
400 argv.extend(["--pex-path", ":".join(pex.name for pex in request.pex_path)])
401
402 source_dir_name = "source_files"
403 argv.append(f"--sources-directory={source_dir_name}")
404 sources_digest_as_subdir = await Get(
405 Digest, AddPrefix(request.sources or EMPTY_DIGEST, source_dir_name)
406 )
407
408 additional_inputs_digest = request.additional_inputs or EMPTY_DIGEST
409 repository_pex_digest = repository_pex.digest if repository_pex else EMPTY_DIGEST
410
411 constraint_file_digest = EMPTY_DIGEST
412 if (
413 not is_lockfile and request.apply_requirement_constraints
414 ) and python_setup.requirement_constraints is not None:
415 argv.extend(["--constraints", python_setup.requirement_constraints])
416 constraint_file_digest = await Get(
417 Digest,
418 PathGlobs(
419 [python_setup.requirement_constraints],
420 glob_match_error_behavior=GlobMatchErrorBehavior.error,
421 description_of_origin="the option `[python-setup].requirement_constraints`",
422 ),
423 )
424
425 requirements_file_digest = EMPTY_DIGEST
426
427 # TODO(#12314): Capture the resolve name for multiple user lockfiles.
428 resolve_name = (
429 request.requirements.options_scope_name
430 if isinstance(request.requirements, (ToolDefaultLockfile, ToolCustomLockfile))
431 else None
432 )
433
434 if isinstance(request.requirements, Lockfile):
435 argv.extend(["--requirement", request.requirements.file_path])
436
437 globs = PathGlobs(
438 [request.requirements.file_path],
439 glob_match_error_behavior=GlobMatchErrorBehavior.error,
440 description_of_origin=request.requirements.file_path_description_of_origin,
441 )
442 if python_setup.invalid_lockfile_behavior in {
443 InvalidLockfileBehavior.warn,
444 InvalidLockfileBehavior.error,
445 }:
446 requirements_file_digest_contents = await Get(DigestContents, PathGlobs, globs)
447 metadata = LockfileMetadata.from_lockfile(
448 requirements_file_digest_contents[0].content,
449 request.requirements.file_path,
450 resolve_name,
451 )
452 _validate_metadata(metadata, request, request.requirements, python_setup)
453 requirements_file_digest = await Get(Digest, PathGlobs, globs)
454
455 elif isinstance(request.requirements, LockfileContent):
456 file_content = request.requirements.file_content
457 argv.extend(["--requirement", file_content.path])
458
459 if python_setup.invalid_lockfile_behavior in {
460 InvalidLockfileBehavior.warn,
461 InvalidLockfileBehavior.error,
462 }:
463 metadata = LockfileMetadata.from_lockfile(
464 file_content.content, resolve_name=resolve_name
465 )
466 _validate_metadata(metadata, request, request.requirements, python_setup)
467 requirements_file_digest = await Get(Digest, CreateDigest([file_content]))
468 else:
469 argv.extend(request.requirements.req_strings)
470
471 merged_digest = await Get(
472 Digest,
473 MergeDigests(
474 (
475 sources_digest_as_subdir,
476 additional_inputs_digest,
477 constraint_file_digest,
478 requirements_file_digest,
479 repository_pex_digest,
480 *(pex.digest for pex in request.pex_path),
481 )
482 ),
483 )
484
485 output_files: Iterable[str] | None = None
486 output_directories: Iterable[str] | None = None
487 if request.internal_only:
488 # This is a much friendlier layout for the CAS than the default zipapp.
489 argv.extend(["--layout", "packed"])
490 output_directories = [request.output_filename]
491 else:
492 output_files = [request.output_filename]
493
494 process = await Get(
495 Process,
496 PexCliProcess(
497 python=python,
498 argv=argv,
499 additional_input_digest=merged_digest,
500 description=_build_pex_description(request),
501 output_files=output_files,
502 output_directories=output_directories,
503 ),
504 )
505
506 # NB: Building a Pex is platform dependent, so in order to get a PEX that we can use locally
507 # without cross-building, we specify that our PEX command should be run on the current local
508 # platform.
509 result = await Get(ProcessResult, MultiPlatformProcess({platform: process}))
510
511 if pex_runtime_env.verbosity > 0:
512 log_output = result.stderr.decode()
513 if log_output:
514 logger.info("%s", log_output)
515
516 digest = (
517 await Get(
518 Digest, MergeDigests((result.output_digest, *(pex.digest for pex in request.pex_path)))
519 )
520 if request.pex_path
521 else result.output_digest
522 )
523
524 return BuildPexResult(
525 result=result, pex_filename=request.output_filename, digest=digest, python=python
526 )
527
528
529 def _validate_metadata(
530 metadata: LockfileMetadata,
531 request: PexRequest,
532 requirements: (Lockfile | LockfileContent),
533 python_setup: PythonSetup,
534 ) -> None:
535
536 validation = metadata.is_valid_for(
537 requirements.lockfile_hex_digest,
538 request.interpreter_constraints,
539 python_setup.interpreter_universe,
540 )
541
542 if validation:
543 return
544
545 def tool_message_parts(
546 requirements: (ToolCustomLockfile | ToolDefaultLockfile),
547 ) -> Iterator[str]:
548
549 tool_name = requirements.options_scope_name
550 uses_source_plugins = requirements.uses_source_plugins
551 uses_project_interpreter_constraints = requirements.uses_project_interpreter_constraints
552
553 yield "You are using "
554
555 if isinstance(requirements, ToolDefaultLockfile):
556 yield "the `<default>` lockfile provided by Pants "
557 elif isinstance(requirements, ToolCustomLockfile):
558 yield f"the lockfile at {requirements.file_path} "
559
560 yield (
561 f"to install the tool `{tool_name}`, but it is not compatible with your "
562 "configuration: "
563 "\n\n"
564 )
565
566 if InvalidLockfileReason.INVALIDATION_DIGEST_MISMATCH in validation.failure_reasons:
567 yield (
568 "- You have set different requirements than those used to generate the lockfile. "
569 f"You can fix this by not setting `[{tool_name}].version`, "
570 )
571
572 if uses_source_plugins:
573 yield f"`[{tool_name}].source_plugins`, "
574
575 yield (
576 f"and `[{tool_name}].extra_requirements`, or by using a new "
577 "custom lockfile."
578 "\n"
579 )
580
581 if InvalidLockfileReason.INTERPRETER_CONSTRAINTS_MISMATCH in validation.failure_reasons:
582 yield (
583 f"- You have set interpreter constraints (`{request.interpreter_constraints}`) that "
584 "are not compatible with those used to generate the lockfile "
585 f"(`{metadata.valid_for_interpreter_constraints}`). "
586 )
587 if not uses_project_interpreter_constraints:
588 yield (
589 f"You can fix this by not setting `[{tool_name}].interpreter_constraints`, "
590 "or by using a new custom lockfile. "
591 )
592 else:
593 yield (
594 f"`{tool_name}` determines its interpreter constraints based on your code's own "
595 "constraints. To fix this error, you can either change your code's constraints "
596 f"(see {doc_url('python-interpreter-compatibility')}) or by generating a new "
597 "custom lockfile. "
598 )
599 yield "\n"
600
601 yield "\n"
602
603 if not isinstance(requirements, ToolCustomLockfile):
604 yield (
605 "To generate a custom lockfile based on your current configuration, set "
606 f"`[{tool_name}].lockfile` to where you want to create the lockfile, then run "
607 f"`./pants generate-lockfiles --resolve={tool_name}`. "
608 )
609 else:
610 yield (
611 "To regenerate your lockfile based on your current configuration, run "
612 f"`./pants generate-lockfiles --resolve={tool_name}`. "
613 )
614
615 message: str
616 if isinstance(requirements, (ToolCustomLockfile, ToolDefaultLockfile)):
617 message = "".join(tool_message_parts(requirements)).strip()
618 else:
619 # TODO: Replace with an actual value once user lockfiles are supported
620 assert False
621
622 if python_setup.invalid_lockfile_behavior == InvalidLockfileBehavior.error:
623 raise ValueError(message)
624 else:
625 logger.warning("%s", message)
626
627
628 def _build_pex_description(request: PexRequest) -> str:
629 if request.description:
630 return request.description
631
632 if isinstance(request.requirements, Lockfile):
633 desc_suffix = f"from {request.requirements.file_path}"
634 elif isinstance(request.requirements, LockfileContent):
635 desc_suffix = f"from {request.requirements.file_content.path}"
636 else:
637 if not request.requirements.req_strings:
638 return f"Building {request.output_filename}"
639 elif request.requirements.repository_pex:
640 repo_pex = request.requirements.repository_pex.name
641 return (
642 f"Extracting {pluralize(len(request.requirements.req_strings), 'requirement')} "
643 f"to build {request.output_filename} from {repo_pex}: "
644 f"{', '.join(request.requirements.req_strings)}"
645 )
646 else:
647 desc_suffix = (
648 f"with {pluralize(len(request.requirements.req_strings), 'requirement')}: "
649 f"{', '.join(request.requirements.req_strings)}"
650 )
651 return f"Building {request.output_filename} {desc_suffix}"
652
653
654 @rule
655 async def create_pex(request: PexRequest) -> Pex:
656 result = await Get(BuildPexResult, PexRequest, request)
657 return result.create_pex()
658
659
660 @dataclass(frozen=True)
661 class Script:
662 path: PurePath
663
664 @property
665 def argv0(self) -> str:
666 return f"./{self.path}" if self.path.parent == PurePath() else str(self.path)
667
668
669 @dataclass(frozen=True)
670 class VenvScript:
671 script: Script
672 content: FileContent
673
674
675 @dataclass(frozen=True)
676 class VenvScriptWriter:
677 complete_pex_env: CompletePexEnvironment
678 pex: Pex
679 venv_dir: PurePath
680
681 @classmethod
682 def create(
683 cls, pex_environment: PexEnvironment, pex: Pex, venv_rel_dir: PurePath
684 ) -> VenvScriptWriter:
685 # N.B.: We don't know the working directory that will be used in any given
686 # invocation of the venv scripts; so we deal with working_directory inside the scripts
687 # themselves by absolutifying all relevant paths at runtime.
688 complete_pex_env = pex_environment.in_sandbox(working_directory=None)
689 venv_dir = complete_pex_env.pex_root / venv_rel_dir
690 return cls(complete_pex_env=complete_pex_env, pex=pex, venv_dir=venv_dir)
691
692 def _create_venv_script(
693 self,
694 bash: BashBinary,
695 *,
696 script_path: PurePath,
697 venv_executable: PurePath,
698 ) -> VenvScript:
699 env_vars = (
700 f"{name}={shlex.quote(value)}"
701 for name, value in self.complete_pex_env.environment_dict(
702 python_configured=True
703 ).items()
704 )
705
706 target_venv_executable = shlex.quote(str(venv_executable))
707 venv_dir = shlex.quote(str(self.venv_dir))
708 execute_pex_args = " ".join(
709 f"$(ensure_absolute {shlex.quote(arg)})"
710 for arg in self.complete_pex_env.create_argv(self.pex.name, python=self.pex.python)
711 )
712
713 script = dedent(
714 f"""\
715 #!{bash.path}
716 set -euo pipefail
717
718 # N.B.: We convert all sandbox root relative paths to absolute paths so this script
719 # works when run with a cwd set elsewhere.
720
721 # N.B.: This relies on BASH_SOURCE which has been available since bash-3.0, released in
722 # 2004. In turn, our use of BASH_SOURCE relies on the fact that this script is executed
723 # by the engine via its absolute path.
724 ABS_SANDBOX_ROOT="${{BASH_SOURCE%/*}}"
725
726 function ensure_absolute() {{
727 local value0="$1"
728 shift
729 if [ "${{value0:0:1}}" == "/" ]; then
730 echo "${{value0}}" "$@"
731 else
732 echo "${{ABS_SANDBOX_ROOT}}/${{value0}}" "$@"
733 fi
734 }}
735
736 export {" ".join(env_vars)}
737 export PEX_ROOT="$(ensure_absolute ${{PEX_ROOT}})"
738
739 execute_pex_args="{execute_pex_args}"
740 target_venv_executable="$(ensure_absolute {target_venv_executable})"
741 venv_dir="$(ensure_absolute {venv_dir})"
742
743 # Let PEX_TOOLS invocations pass through to the original PEX file since venvs don't come
744 # with tools support.
745 if [ -n "${{PEX_TOOLS:-}}" ]; then
746 exec ${{execute_pex_args}} "$@"
747 fi
748
749 # If the seeded venv has been removed from the PEX_ROOT, we re-seed from the original
750 # `--venv` mode PEX file.
751 if [ ! -e "${{target_venv_executable}}" ]; then
752 rm -rf "${{venv_dir}}" || true
753 PEX_INTERPRETER=1 ${{execute_pex_args}} -c ''
754 fi
755
756 exec "${{target_venv_executable}}" "$@"
757 """
758 )
759 return VenvScript(
760 script=Script(script_path),
761 content=FileContent(path=str(script_path), content=script.encode(), is_executable=True),
762 )
763
764 def exe(self, bash: BashBinary) -> VenvScript:
765 """Writes a safe shim for the venv's executable `pex` script."""
766 script_path = PurePath(f"{self.pex.name}_pex_shim.sh")
767 return self._create_venv_script(
768 bash, script_path=script_path, venv_executable=self.venv_dir / "pex"
769 )
770
771 def bin(self, bash: BashBinary, name: str) -> VenvScript:
772 """Writes a safe shim for an executable or script in the venv's `bin` directory."""
773 script_path = PurePath(f"{self.pex.name}_bin_{name}_shim.sh")
774 return self._create_venv_script(
775 bash,
776 script_path=script_path,
777 venv_executable=self.venv_dir / "bin" / name,
778 )
779
780 def python(self, bash: BashBinary) -> VenvScript:
781 """Writes a safe shim for the venv's python binary."""
782 return self.bin(bash, "python")
783
784
785 @dataclass(frozen=True)
786 class VenvPex:
787 digest: Digest
788 pex_filename: str
789 pex: Script
790 python: Script
791 bin: FrozenDict[str, Script]
792 venv_rel_dir: str
793
794
795 @frozen_after_init
796 @dataclass(unsafe_hash=True)
797 class VenvPexRequest:
798 pex_request: PexRequest
799 bin_names: Tuple[str, ...] = ()
800
801 def __init__(self, pex_request: PexRequest, bin_names: Iterable[str] = ()) -> None:
802         """A request for a PEX that runs in a venv and optionally exposes select venv `bin` scripts.
803
804 :param pex_request: The details of the desired PEX.
805 :param bin_names: The names of venv `bin` scripts to expose for execution.
806 """
807 self.pex_request = pex_request
808 self.bin_names = tuple(bin_names)
809
810
811 @rule
812 def wrap_venv_prex_request(pex_request: PexRequest) -> VenvPexRequest:
813 # Allow creating a VenvPex from a plain PexRequest when no extra bin scripts need to be exposed.
814 return VenvPexRequest(pex_request)
815
816
817 @rule
818 async def create_venv_pex(
819 request: VenvPexRequest, bash: BashBinary, pex_environment: PexEnvironment
820 ) -> VenvPex:
821 # VenvPex is motivated by improving performance of Python tools by eliminating traditional PEX
822 # file startup overhead.
823 #
824 # To achieve the minimal overhead (on the order of 1ms) we discard:
825 # 1. Using Pex default mode:
826 # Although this does reduce initial tool execution overhead, it still leaves a minimum
827 # O(100ms) of overhead per subsequent tool invocation. Fundamentally, Pex still needs to
828 # execute its `sys.path` isolation bootstrap code in this case.
829 # 2. Using the Pex `venv` tool:
830 # The idea here would be to create a tool venv as a Process output and then use the tool
831 # venv as an input digest for all tool invocations. This was tried and netted ~500ms of
832 # overhead over raw venv use.
833 #
834 # Instead we use Pex's `--venv` mode. In this mode you can run the Pex file and it will create a
835 # venv on the fly in the PEX_ROOT as needed. Since the PEX_ROOT is a named_cache, we avoid the
836 # digest materialization overhead present in 2 above. Since the venv is naturally isolated we
837 # avoid the `sys.path` isolation overhead of Pex itself present in 1 above.
838 #
839 # This does leave O(50ms) of overhead though for the PEX bootstrap code to detect an already
840 # created venv in the PEX_ROOT and re-exec into it. To eliminate this overhead we execute the
841 # `pex` venv script in the PEX_ROOT directly. This is not robust on its own though, since the
842 # named caches store might be pruned at any time. To guard against that case we introduce a shim
843 # bash script that checks to see if the `pex` venv script exists in the PEX_ROOT and re-creates
844 # the PEX_ROOT venv if not. Using the shim script to run Python tools gets us down to the ~1ms
845 # of overhead we currently enjoy.
846
847 pex_request = request.pex_request
848 seeded_venv_request = dataclasses.replace(
849 pex_request, additional_args=pex_request.additional_args + ("--venv", "--seed", "verbose")
850 )
851 venv_pex_result = await Get(BuildPexResult, PexRequest, seeded_venv_request)
852 # Pex verbose --seed mode outputs the absolute path of the PEX executable as well as the
853 # absolute path of the PEX_ROOT. In the --venv case this is the `pex` script in the venv root
854 # directory.
855 seed_info = json.loads(venv_pex_result.result.stdout.decode())
856 abs_pex_root = PurePath(seed_info["pex_root"])
857 abs_pex_path = PurePath(seed_info["pex"])
858 venv_rel_dir = abs_pex_path.relative_to(abs_pex_root).parent
859
860 venv_script_writer = VenvScriptWriter.create(
861 pex_environment=pex_environment, pex=venv_pex_result.create_pex(), venv_rel_dir=venv_rel_dir
862 )
863 pex = venv_script_writer.exe(bash)
864 python = venv_script_writer.python(bash)
865 scripts = {bin_name: venv_script_writer.bin(bash, bin_name) for bin_name in request.bin_names}
866 scripts_digest = await Get(
867 Digest,
868 CreateDigest(
869 (
870 pex.content,
871 python.content,
872 *(venv_script.content for venv_script in scripts.values()),
873 )
874 ),
875 )
876 input_digest = await Get(Digest, MergeDigests((venv_script_writer.pex.digest, scripts_digest)))
877
878 return VenvPex(
879 digest=input_digest,
880 pex_filename=venv_pex_result.pex_filename,
881 pex=pex.script,
882 python=python.script,
883 bin=FrozenDict((bin_name, venv_script.script) for bin_name, venv_script in scripts.items()),
884 venv_rel_dir=venv_rel_dir.as_posix(),
885 )
886
887
888 @frozen_after_init
889 @dataclass(unsafe_hash=True)
890 class PexProcess:
891 pex: Pex
892 argv: Tuple[str, ...]
893 description: str = dataclasses.field(compare=False)
894 level: LogLevel
895 input_digest: Digest | None
896 working_directory: str | None
897 extra_env: FrozenDict[str, str] | None
898 output_files: tuple[str, ...] | None
899 output_directories: tuple[str, ...] | None
900 timeout_seconds: int | None
901 execution_slot_variable: str | None
902 cache_scope: ProcessCacheScope
903
904 def __init__(
905 self,
906 pex: Pex,
907 *,
908 description: str,
909 argv: Iterable[str] = (),
910 level: LogLevel = LogLevel.INFO,
911 input_digest: Digest | None = None,
912 working_directory: str | None = None,
913 extra_env: Mapping[str, str] | None = None,
914 output_files: Iterable[str] | None = None,
915 output_directories: Iterable[str] | None = None,
916 timeout_seconds: int | None = None,
917 execution_slot_variable: str | None = None,
918 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
919 ) -> None:
920 self.pex = pex
921 self.argv = tuple(argv)
922 self.description = description
923 self.level = level
924 self.input_digest = input_digest
925 self.working_directory = working_directory
926 self.extra_env = FrozenDict(extra_env) if extra_env else None
927 self.output_files = tuple(output_files) if output_files else None
928 self.output_directories = tuple(output_directories) if output_directories else None
929 self.timeout_seconds = timeout_seconds
930 self.execution_slot_variable = execution_slot_variable
931 self.cache_scope = cache_scope
932
933
934 @rule
935 async def setup_pex_process(request: PexProcess, pex_environment: PexEnvironment) -> Process:
936 pex = request.pex
937 complete_pex_env = pex_environment.in_sandbox(working_directory=request.working_directory)
938 argv = complete_pex_env.create_argv(pex.name, *request.argv, python=pex.python)
939 env = {
940 **complete_pex_env.environment_dict(python_configured=pex.python is not None),
941 **(request.extra_env or {}),
942 }
943 input_digest = (
944 await Get(Digest, MergeDigests((pex.digest, request.input_digest)))
945 if request.input_digest
946 else pex.digest
947 )
948 return Process(
949 argv,
950 description=request.description,
951 level=request.level,
952 input_digest=input_digest,
953 working_directory=request.working_directory,
954 env=env,
955 output_files=request.output_files,
956 output_directories=request.output_directories,
957 append_only_caches=complete_pex_env.append_only_caches,
958 timeout_seconds=request.timeout_seconds,
959 execution_slot_variable=request.execution_slot_variable,
960 cache_scope=request.cache_scope,
961 )
962
963
964 @frozen_after_init
965 @dataclass(unsafe_hash=True)
966 class VenvPexProcess:
967 venv_pex: VenvPex
968 argv: Tuple[str, ...]
969 description: str = dataclasses.field(compare=False)
970 level: LogLevel
971 input_digest: Digest | None
972 working_directory: str | None
973 extra_env: FrozenDict[str, str] | None
974 output_files: tuple[str, ...] | None
975 output_directories: tuple[str, ...] | None
976 timeout_seconds: int | None
977 execution_slot_variable: str | None
978 cache_scope: ProcessCacheScope
979
980 def __init__(
981 self,
982 venv_pex: VenvPex,
983 *,
984 description: str,
985 argv: Iterable[str] = (),
986 level: LogLevel = LogLevel.INFO,
987 input_digest: Digest | None = None,
988 working_directory: str | None = None,
989 extra_env: Mapping[str, str] | None = None,
990 output_files: Iterable[str] | None = None,
991 output_directories: Iterable[str] | None = None,
992 timeout_seconds: int | None = None,
993 execution_slot_variable: str | None = None,
994 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
995 ) -> None:
996 self.venv_pex = venv_pex
997 self.argv = tuple(argv)
998 self.description = description
999 self.level = level
1000 self.input_digest = input_digest
1001 self.working_directory = working_directory
1002 self.extra_env = FrozenDict(extra_env) if extra_env else None
1003 self.output_files = tuple(output_files) if output_files else None
1004 self.output_directories = tuple(output_directories) if output_directories else None
1005 self.timeout_seconds = timeout_seconds
1006 self.execution_slot_variable = execution_slot_variable
1007 self.cache_scope = cache_scope
1008
1009
1010 @rule
1011 async def setup_venv_pex_process(
1012 request: VenvPexProcess, pex_environment: PexEnvironment
1013 ) -> Process:
1014 venv_pex = request.venv_pex
1015 pex_bin = (
1016 os.path.relpath(venv_pex.pex.argv0, request.working_directory)
1017 if request.working_directory
1018 else venv_pex.pex.argv0
1019 )
1020 argv = (pex_bin, *request.argv)
1021 input_digest = (
1022 await Get(Digest, MergeDigests((venv_pex.digest, request.input_digest)))
1023 if request.input_digest
1024 else venv_pex.digest
1025 )
1026 return Process(
1027 argv=argv,
1028 description=request.description,
1029 level=request.level,
1030 input_digest=input_digest,
1031 working_directory=request.working_directory,
1032 env=request.extra_env,
1033 output_files=request.output_files,
1034 output_directories=request.output_directories,
1035 append_only_caches=pex_environment.in_sandbox(
1036 working_directory=request.working_directory
1037 ).append_only_caches,
1038 timeout_seconds=request.timeout_seconds,
1039 execution_slot_variable=request.execution_slot_variable,
1040 cache_scope=request.cache_scope,
1041 )
1042
1043
1044 @dataclass(frozen=True)
1045 class PexDistributionInfo:
1046 """Information about an individual distribution in a PEX file, as reported by `PEX_TOOLS=1
1047 repository info -v`."""
1048
1049 project_name: str
1050 version: packaging.version.Version
1051 requires_python: packaging.specifiers.SpecifierSet | None
1052 requires_dists: tuple[Requirement, ...]
1053
1054
1055 class PexResolveInfo(Collection[PexDistributionInfo]):
1056 """Information about all distributions resolved in a PEX file, as reported by `PEX_TOOLS=1
1057 repository info -v`."""
1058
1059
1060 def parse_repository_info(repository_info: str) -> PexResolveInfo:
1061 def iter_dist_info() -> Iterator[PexDistributionInfo]:
1062 for line in repository_info.splitlines():
1063 info = json.loads(line)
1064 requires_python = info["requires_python"]
1065 yield PexDistributionInfo(
1066 project_name=info["project_name"],
1067 version=packaging.version.Version(info["version"]),
1068 requires_python=(
1069 packaging.specifiers.SpecifierSet(requires_python)
1070 if requires_python is not None
1071 else None
1072 ),
1073 requires_dists=tuple(
1074 Requirement.parse(req) for req in sorted(info["requires_dists"])
1075 ),
1076 )
1077
1078 return PexResolveInfo(sorted(iter_dist_info(), key=lambda dist: dist.project_name))
1079
1080
1081 @rule
1082 async def determine_venv_pex_resolve_info(venv_pex: VenvPex) -> PexResolveInfo:
1083 process_result = await Get(
1084 ProcessResult,
1085 VenvPexProcess(
1086 venv_pex,
1087 argv=["repository", "info", "-v"],
1088 extra_env={"PEX_TOOLS": "1"},
1089 input_digest=venv_pex.digest,
1090 description=f"Determine distributions found in {venv_pex.pex_filename}",
1091 level=LogLevel.DEBUG,
1092 ),
1093 )
1094 return parse_repository_info(process_result.stdout.decode())
1095
1096
1097 @rule
1098 async def determine_pex_resolve_info(pex_pex: PexPEX, pex: Pex) -> PexResolveInfo:
1099 process_result = await Get(
1100 ProcessResult,
1101 PexProcess(
1102 pex=Pex(digest=pex_pex.digest, name=pex_pex.exe, python=pex.python),
1103 argv=[pex.name, "repository", "info", "-v"],
1104 input_digest=pex.digest,
1105 extra_env={"PEX_MODULE": "pex.tools"},
1106 description=f"Determine distributions found in {pex.name}",
1107 level=LogLevel.DEBUG,
1108 ),
1109 )
1110 return parse_repository_info(process_result.stdout.decode())
1111
1112
1113 def rules():
1114 return [*collect_rules(), *pex_cli.rules()]
1115
[end of src/python/pants/backend/python/util_rules/pex.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
pantsbuild/pants
|
723ffe79b74a24dc4aa8aae61eb4d7c17d6b3da4
|
`--layout=packed` should always be used for the `repository.pex`
**Describe the bug**
When building a `pex_binary` while `requirement_constraints` is set, the `repository.pex` that is built for the binary will not use `--layout=packed`, because the request that triggers the constraints build is not `internal_only`.
But it _should_ use `--layout=packed`, since it is only the source of the artifacts for the pex rather than the exposed artifact. All `repository.pex` builds should use it (they are never exposed verbatim, and are always consumed by other builds).
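A minimal sketch of the decision this implies, assuming the layout handling shown in `build_pex` above (the `is_repository_pex` parameter is a hypothetical stand-in for however the build identifies a repository PEX, purely for illustration):

```python
def choose_layout(internal_only: bool, is_repository_pex: bool, output_filename: str):
    """Hypothetical helper: intermediate PEXes (internal-only or repository.pex
    builds) get the CAS-friendly packed layout; user-facing binaries keep the
    default zipapp layout."""
    if internal_only or is_repository_pex:
        # extra argv, output_files, output_directories
        return ["--layout", "packed"], None, [output_filename]
    return [], [output_filename], None


# Example: a repository.pex built for a non-internal pex_binary should still be packed.
print(choose_layout(internal_only=False, is_repository_pex=True, output_filename="repository.pex"))
```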
**Pants version**
`2.7.1`
|
2021-10-28T20:32:33Z
|
<patch>
diff --git a/src/python/pants/backend/python/util_rules/pex.py b/src/python/pants/backend/python/util_rules/pex.py
--- a/src/python/pants/backend/python/util_rules/pex.py
+++ b/src/python/pants/backend/python/util_rules/pex.py
@@ -99,10 +99,19 @@ class ToolCustomLockfile(Lockfile, _ToolLockfileMixin):
@dataclass(unsafe_hash=True)
class PexRequirements:
req_strings: FrozenOrderedSet[str]
+ # TODO: The constraints.txt resolve for `resolve_all_constraints` will be removed as part of
+ # #12314, but in the meantime, it "acts like" a lockfile, but isn't actually typed as a Lockfile
+ # because the constraints are modified in memory first. This flag marks a `PexRequirements`
+ # resolve as being a request for the entire constraints file.
+ is_all_constraints_resolve: bool
repository_pex: Pex | None
def __init__(
- self, req_strings: Iterable[str] = (), *, repository_pex: Pex | None = None
+ self,
+ req_strings: Iterable[str] = (),
+ *,
+ is_all_constraints_resolve: bool = False,
+ repository_pex: Pex | None = None,
) -> None:
"""
:param req_strings: The requirement strings to resolve.
@@ -110,6 +119,7 @@ def __init__(
`--pex-repository` option.
"""
self.req_strings = FrozenOrderedSet(sorted(req_strings))
+ self.is_all_constraints_resolve = is_all_constraints_resolve
self.repository_pex = repository_pex
@classmethod
@@ -432,6 +442,7 @@ async def build_pex(
)
if isinstance(request.requirements, Lockfile):
+ is_monolithic_resolve = True
argv.extend(["--requirement", request.requirements.file_path])
globs = PathGlobs(
@@ -453,6 +464,7 @@ async def build_pex(
requirements_file_digest = await Get(Digest, PathGlobs, globs)
elif isinstance(request.requirements, LockfileContent):
+ is_monolithic_resolve = True
file_content = request.requirements.file_content
argv.extend(["--requirement", file_content.path])
@@ -466,6 +478,7 @@ async def build_pex(
_validate_metadata(metadata, request, request.requirements, python_setup)
requirements_file_digest = await Get(Digest, CreateDigest([file_content]))
else:
+ is_monolithic_resolve = request.requirements.is_all_constraints_resolve
argv.extend(request.requirements.req_strings)
merged_digest = await Get(
@@ -484,7 +497,7 @@ async def build_pex(
output_files: Iterable[str] | None = None
output_directories: Iterable[str] | None = None
- if request.internal_only:
+ if request.internal_only or is_monolithic_resolve:
# This is a much friendlier layout for the CAS than the default zipapp.
argv.extend(["--layout", "packed"])
output_directories = [request.output_filename]
diff --git a/src/python/pants/backend/python/util_rules/pex_from_targets.py b/src/python/pants/backend/python/util_rules/pex_from_targets.py
--- a/src/python/pants/backend/python/util_rules/pex_from_targets.py
+++ b/src/python/pants/backend/python/util_rules/pex_from_targets.py
@@ -354,7 +354,11 @@ async def _setup_constraints_repository_pex(
description=f"Resolving {constraints_path}",
output_filename="repository.pex",
internal_only=request.internal_only,
- requirements=PexRequirements(all_constraints),
+ requirements=PexRequirements(
+ all_constraints,
+ # TODO: See PexRequirements docs.
+ is_all_constraints_resolve=True,
+ ),
interpreter_constraints=request.interpreter_constraints,
platforms=request.platforms,
additional_args=request.additional_lockfile_args,
</patch>
|
[]
|
[]
| ||||
Lightning-AI__lightning-3188
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RMSLE metric appears to be incorrect
## 🐛 Bug
The usage of mse [in the rmsle function](https://github.com/PyTorchLightning/pytorch-lightning/blob/22b9642117394d3c50587ae137dbf94c6dd5173c/pytorch_lightning/metrics/functional/regression.py#L138) looks wrong to me. It looks like this function currently computes _mean squared log error_ instead of _root mean squared log error_.
### Expected behavior
I would expect that rmsle looks like this:
```python
rmsle = rmse(torch.log(pred + 1), torch.log(target + 1), reduction=reduction)
```
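
For the doctest values used in the current implementation, the two quantities are easy to compare (a quick sanity check, assuming the natural log and the `+ 1` shift used in the code):

```python
import torch

pred = torch.tensor([0.0, 1.0, 2.0, 3.0])
target = torch.tensor([0.0, 1.0, 2.0, 2.0])

squared_log_error = (torch.log(pred + 1) - torch.log(target + 1)) ** 2
print(squared_log_error.mean())         # ~0.0207 -> what rmsle currently returns (this is MSLE)
print(squared_log_error.mean().sqrt())  # ~0.1438 -> what RMSLE should return
```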
</issue>
<code>
[start of README.md]
1 <div align="center">
2
3 
4
5 # PyTorch Lightning
6
7 **The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.**
8
9 <p align="center">
10 <a href="#pytorch-lightning-masterclass">Masterclass</a> •
11 <a href="#key-features">Key Features</a> •
12 <a href="#how-to-use">How To Use</a> •
13 <a href="#docs">Docs</a> •
14 <a href="#resources">Resources</a> •
15 <a href="#community">Community</a> •
16 <a href="#faq">FAQ</a> •
17 <a href="#licence">Licence</a>
18 </p>
19
20
21 [](https://badge.fury.io/py/pytorch-lightning)
22 [](https://pepy.tech/project/pytorch-lightning)
23 [](https://hub.docker.com/r/pytorchlightning/pytorch_lightning)
24 [](https://codecov.io/gh/PyTorchLightning/pytorch-lightning)
25
26 [](https://pytorch-lightning.readthedocs.io/en/stable/)
27 [](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)
28 [](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
29 [](https://shields.io/)
30
31 <!--
32 [](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
33 -->
34 </div>
35
36 ###### *Codecov is > 90%+ but build delays may show less
37
38 ## PyTorch Lightning is just organized PyTorch
39 
40
41 Lightning disentangles PyTorch code to decouple the science from the engineering
42 by organizing it into 4 categories:
43
44 1. Research code (the LightningModule).
45 2. Engineering code (you delete, and is handled by the Trainer).
46 3. Non-essential research code (logging, etc... this goes in Callbacks).
47 4. Data (use PyTorch Dataloaders or organize them into a LightningDataModule).
48
49 Once you do this, you can train on multiple-GPUs, TPUs, CPUs and even in 16-bit precision without changing your code!
50
51 Get started with our [3 steps guide](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html)
52
53 ---
54 ## Trending contributors
55
56 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/0)
57 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/1)
58 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/2)
59 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/3)
60 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/4)
61 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/5)
62 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/6)
63 [](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/7)
64
65 ---
66
67 ## Continuous Integration
68 <center>
69
70 | System / PyTorch ver. | 1.3 (min. req.)* | 1.4 | 1.5 | 1.6 (latest) |
71 | :---: | :---: | :---: | :---: | :---: |
72 | Conda py3.7 [linux] | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22PyTorch+%26+Conda%22+branch%3Amaster) |
73 | Linux py3.7 [GPUs**] | - | - | - | [](http://35.192.60.23/PyTorchLightning/pytorch-lightning) |
74 | Linux py3.7 [TPUs***] | - | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22TPU+tests%22+branch%3Amaster) |
75 | Linux py3.6 / py3.7 / py3.8 | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) |
76 | OSX py3.6 / py3.7 | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) |
77 | Windows py3.6 / py3.7 / py3.8 | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22)
78
79 - _\* `torch>=1.4` is the minimal pytorch version for Python 3.8_
80 - _\** tests run on two NVIDIA K80_
81 - _\*** tests run on Google GKE TPUv2/3_
82
83 </center>
84
85 ---
86
87 ## [PyTorch Lightning Masterclass](https://www.youtube.com/watch?v=DbESHcCoWbM&list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)
88 ### [New lessons weekly!](https://www.youtube.com/watch?v=DbESHcCoWbM&list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)
89
90 <div style="display: flex">
91 <div>
92 <p>From PyTorch to PyTorch Lightning</p>
93 <a href="https://www.youtube.com/watch?v=DbESHcCoWbM&list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2">
94 <img alt="From PyTorch to PyTorch Lightning" src="https://github.com/PyTorchLightning/pytorch-lightning/blob/master/docs/source/_images/general/PTL101_youtube_thumbnail.jpg" width=250">
95 </a>
96 </div>
97 <div style="margin-top: 5px">
98 <p>Converting a VAE to PyTorch Lightning</p>
99 <a href="https://www.youtube.com/watch?v=QHww1JH7IDU">
100 <img alt="From PyTorch to PyTorch Lightning" src="https://github.com/PyTorchLightning/pytorch-lightning/blob/master/docs/source/_images/general/tutorial_cover.jpg" width=250">
101 </a>
102 </div>
103 </div>
104
105 ---
106
107 ## Key Features
108
109 * Scale your models to run on any hardware (CPU, GPUs, TPUs) without changing your model
110 * Making code more readable by decoupling the research code from the engineering
111 * Easier to reproduce
112 * Less error prone by automating most of the training loop and tricky engineering
113 * Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
114 * Lightning has out-of-the-box integration with the popular logging/visualizing frameworks ([Tensorboard](https://pytorch.org/docs/stable/tensorboard.html), [MLFlow](https://mlflow.org/), [Neptune.ai](https://neptune.ai/), [Comet.ml](https://www.comet.ml/site/), [Wandb](https://www.wandb.com/)).
115 * [Tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). We test every combination of PyTorch and Python supported versions, every OS, multi GPUs and even TPUs.
116 * Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).
117
118 ### Lightning automates 40+ parts of DL/ML research
119 - GPU training
120 - Distributed GPU (cluster) training
121 - TPU training
122 - EarlyStopping
123 - Logging/Visualizing
124 - Checkpointing
125 - Experiment management
126 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases)
127
128 ---
129
130 ## How To Use
131
132 ##### Install
133 Simple installation from PyPI
134 ```bash
135 pip install pytorch-lightning
136 ```
137
138 From Conda
139 ```bash
140 conda install pytorch-lightning -c conda-forge
141 ```
142
143 Install bleeding-edge (no guarantees)
144 ```bash
145 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade
146 ```
147
148 ##### Here's a minimal example without a test loop.
149
150 ```python
151 import os
152 import torch
153 import torch.nn.functional as F
154 from torchvision.datasets import MNIST
155 from torch.utils.data import DataLoader, random_split
156 from torchvision import transforms
157 import pytorch_lightning as pl
158 ```
159
160 ```python
161 # this is just a plain nn.Module with some structure
162 class LitClassifier(pl.LightningModule):
163
164 def __init__(self):
165 super().__init__()
166 self.l1 = torch.nn.Linear(28 * 28, 10)
167
168 def forward(self, x):
169 return torch.relu(self.l1(x.view(x.size(0), -1)))
170
171 def training_step(self, batch, batch_idx):
172 x, y = batch
173 y_hat = self(x)
174 loss = F.cross_entropy(y_hat, y)
175 result = pl.TrainResult(loss)
176 result.log('train_loss', loss, on_epoch=True)
177 return result
178
179 def validation_step(self, batch, batch_idx):
180 x, y = batch
181 y_hat = self(x)
182 loss = F.cross_entropy(y_hat, y)
183 result = pl.EvalResult(checkpoint_on=loss)
184 result.log('val_loss', loss)
185 return result
186
187 def configure_optimizers(self):
188 return torch.optim.Adam(self.parameters(), lr=0.02)
189
190 # train!
191 dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
192 train, val = random_split(dataset, [55000, 5000])
193
194 model = LitClassifier()
195 trainer = pl.Trainer()
196 trainer.fit(model, DataLoader(train), DataLoader(val))
197 ```
198
199 #### And without changing a single line of code, you could run on GPUs
200 ```python
201 # 8 GPUs
202 trainer = Trainer(max_epochs=1, gpus=8)
203
204 # 256 GPUs
205 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32)
206 ```
207
208 Or TPUs
209 ```python
210 # Distributes TPU core training
211 trainer = Trainer(tpu_cores=8)
212
213 # Single TPU core training
214 trainer = Trainer(tpu_cores=[1])
215 ```
216
217 ---
218
219 ### Docs
220 - [master](https://pytorch-lightning.readthedocs.io/en/latest)
221 - [stable](https://pytorch-lightning.readthedocs.io/en/stable)
222 - [0.9.0](https://pytorch-lightning.readthedocs.io/en/0.9.0/)
223 - [0.8.5](https://pytorch-lightning.readthedocs.io/en/0.8.5/)
224 - [0.8.4](https://pytorch-lightning.readthedocs.io/en/0.8.4/)
225 - [0.8.3](https://pytorch-lightning.readthedocs.io/en/0.8.3/)
226 - [0.8.1](https://pytorch-lightning.readthedocs.io/en/0.8.1/)
227
228 ---
229
230 ## Resources
231
232 ### Examples
233 ###### Hello world
234 [MNIST hello world](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=gEulmrbxwaYL)
235 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3)
236
237 ###### Contrastive Learning
238 [BYOL](https://pytorch-lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#byol)
239 [CPC v2](https://pytorch-lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#cpc-v2)
240 [Moco v2](https://pytorch-lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#moco-v2)
241 [SIMCLR](https://pytorch-lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#simclr)
242
243 ###### NLP
244 [BERT](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=7uQVI-xv9Ddj)
245 [GPT-2](https://pytorch-lightning-bolts.readthedocs.io/en/latest/convolutional.html#gpt-2)
246
247
248 ###### Reinforcement Learning
249 [DQN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=NWvMLBDySQI5)
250 [Dueling-DQN](https://pytorch-lightning-bolts.readthedocs.io/en/latest/reinforce_learn.html#dueling-dqn)
251 [Reinforce](https://pytorch-lightning-bolts.readthedocs.io/en/latest/reinforce_learn.html#reinforce)
252
253 ###### Vision
254 [GAN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=P0bSmCw57aV5)
255
256 ###### Classic ML
257 [Logistic Regression](https://pytorch-lightning-bolts.readthedocs.io/en/latest/classic_ml.html#logistic-regression)
258 [Linear Regression](https://pytorch-lightning-bolts.readthedocs.io/en/latest/classic_ml.html#linear-regression)
259
260 ### Tutorials
261 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started.
262 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials).
263
264 ---
265
266 ## Community
267
268 The lightning community is maintained by
269 - [15 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, Ph.D. students from top AI labs.
270 - 200+ community contributors.
271
272 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support.
273
274 ### Asking for help
275 If you have any questions please:
276 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/).
277 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question).
278 3. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A).
279 4. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning.
280
281 ### Funding
282 Building open-source software with only a few part-time people is hard! We've secured funding to make sure we can
283 hire a full-time staff, attend conferences, and move faster through implementing features you request.
284
285 Our goal is to build an incredible research platform and a big supportive community. Many open-source projects
286 have gone on to fund operations through things like support and special help for big corporations!
287
288 If you are one of these corporations, please feel free to reach out to [email protected]!
289
290 ---
291
292 ## FAQ
293
294 **Starting a new project?**
295
296 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed)
297
298 **Why lightning?**
299
300 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you.
301
302 Lightning has 3 goals in mind:
303
304 1. Maximal flexibility while abstracting out the common boilerplate across research projects.
305 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format.
306 3. Democratizing PyTorch power-user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning.
307
308
309 **Who is Lightning for?**
310
311 - Professional researchers
312 - Ph.D. students
313 - Corporate production teams
314
315 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :)
316
317 **What does lightning control for me?**
318
319 Everything in Blue!
320 This is how lightning separates the science (red) from engineering (blue).
321
322 
323
324 **How much effort is it to convert?**
325
326 If your code is not a huge mess you should be able to organize it into a LightningModule in less than 1 hour.
327 If your code IS a mess, then you needed to clean up anyhow ;)
328
329 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09).
330 [Or watch this video](https://www.youtube.com/watch?v=QHww1JH7IDU).
331
332 **How flexible is it?**
333
334 As you see, you're just organizing your PyTorch code - there's no abstraction.
335
336 And for the stuff that the Trainer abstracts out, you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backward pass.
337
338 For example, here you could do your own backward pass without worrying about GPUs, TPUs or 16-bit since we already handle it.
339
340 ```python
341 class LitModel(LightningModule):
342
343 def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
344 optimizer.zero_grad()
345 ```
346
347 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer.
348
349 **What types of research works?**
350
351 Anything! Remember, that this is just organized PyTorch code.
352 The Training step defines the core complexity found in the training loop.
353
354 ##### Could be as complex as a seq2seq
355
356 ```python
357 # define what happens for training here
358 def training_step(self, batch, batch_idx):
359 x, y = batch
360
361 # define your own forward and loss calculation
362 hidden_states = self.encoder(x)
363
364 # even as complex as a seq-2-seq + attn model
365 # (this is just a toy, non-working example to illustrate)
366 start_token = '<SOS>'
367 last_hidden = torch.zeros(...)
368 loss = 0
369 for step in range(max_seq_len):
370 attn_context = self.attention_nn(hidden_states, start_token)
371 pred = self.decoder(start_token, attn_context, last_hidden)
372 last_hidden = pred
373 pred = self.predict_nn(pred)
374 loss += self.loss(last_hidden, y[step])
375
376 #toy example as well
377 loss = loss / max_seq_len
378 return {'loss': loss}
379 ```
380
381 ##### Or as basic as CNN image classification
382
383 ```python
384 # define what happens for validation here
385 def validation_step(self, batch, batch_idx):
386 x, y = batch
387
388 # or as basic as a CNN classification
389 out = self(x)
390 loss = my_loss(out, y)
391 return {'loss': loss}
392 ```
393
394 **Does Lightning Slow my PyTorch?**
395
396 No! Lightning is meant for research/production cases that require high-performance.
397
398 We have tests to ensure we get the EXACT same results in under 600 ms difference per epoch. In reality, lightning adds about a 300 ms overhead per epoch.
399 [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks).
400
401 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts.
402
403 **How does Lightning compare with Ignite and fast.ai?**
404
405 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a).
406
407 **Is this another library I have to learn?**
408
409 Nope! We use pure Pytorch everywhere and don't add unnecessary abstractions!
410
411 **Are there plans to support Python 2?**
412
413 Nope.
414
415 **Are there plans to support virtualenv?**
416
417 Nope. Please use anaconda or miniconda.
418 ```bash
419 conda activate my_env
420 pip install pytorch-lightning
421 ```
422
423 ---
424
425 ## Licence
426
427 Please observe the Apache 2.0 license that is listed in this repository. In addition
428 the Lightning framework is Patent Pending.
429
430 ## BibTeX
431 If you want to cite the framework feel free to use this (but only if you loved it 😊):
432
433 ```bibtex
434 @article{falcon2019pytorch,
435 title={PyTorch Lightning},
436 author={Falcon, WA},
437 journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning Cited by},
438 volume={3},
439 year={2019}
440 }
441 ```
442
[end of README.md]
[start of pytorch_lightning/metrics/__init__.py]
1 from pytorch_lightning.metrics.classification import (
2 Accuracy,
3 AveragePrecision,
4 ConfusionMatrix,
5 F1,
6 FBeta,
7 Recall,
8 ROC,
9 AUROC,
10 DiceCoefficient,
11 MulticlassPrecisionRecall,
12 MulticlassROC,
13 Precision,
14 PrecisionRecall,
15 IoU,
16 )
17 from pytorch_lightning.metrics.converters import numpy_metric, tensor_metric
18 from pytorch_lightning.metrics.metric import Metric, TensorMetric, NumpyMetric
19 from pytorch_lightning.metrics.nlp import BLEUScore
20 from pytorch_lightning.metrics.regression import (
21 MAE,
22 MSE,
23 PSNR,
24 RMSE,
25 RMSLE,
26 SSIM
27 )
28 from pytorch_lightning.metrics.sklearns import (
29 AUC,
30 PrecisionRecallCurve,
31 SklearnMetric,
32 )
33
34 __classification_metrics = [
35 "AUC",
36 "AUROC",
37 "Accuracy",
38 "AveragePrecision",
39 "ConfusionMatrix",
40 "DiceCoefficient",
41 "F1",
42 "FBeta",
43 "MulticlassPrecisionRecall",
44 "MulticlassROC",
45 "Precision",
46 "PrecisionRecall",
47 "PrecisionRecallCurve",
48 "ROC",
49 "Recall",
50 "IoU",
51 ]
52 __regression_metrics = [
53 "MAE",
54 "MSE",
55 "PSNR",
56 "RMSE",
57 "RMSLE",
58 "SSIM"
59 ]
60 __sequence_metrics = ["BLEUScore"]
61 __all__ = __regression_metrics + __classification_metrics + ["SklearnMetric"] + __sequence_metrics
62
[end of pytorch_lightning/metrics/__init__.py]
[start of pytorch_lightning/metrics/functional/__init__.py]
1 from pytorch_lightning.metrics.functional.classification import (
2 accuracy,
3 auc,
4 auroc,
5 average_precision,
6 confusion_matrix,
7 dice_score,
8 f1_score,
9 fbeta_score,
10 multiclass_precision_recall_curve,
11 multiclass_roc,
12 precision,
13 precision_recall,
14 precision_recall_curve,
15 recall,
16 roc,
17 stat_scores,
18 stat_scores_multiple_classes,
19 to_categorical,
20 to_onehot,
21 iou,
22 )
23 from pytorch_lightning.metrics.functional.nlp import bleu_score
24 from pytorch_lightning.metrics.functional.regression import (
25 mae,
26 mse,
27 psnr,
28 rmse,
29 rmsle,
30 ssim
31 )
32
[end of pytorch_lightning/metrics/functional/__init__.py]
[start of pytorch_lightning/metrics/functional/regression.py]
1 from typing import Sequence
2
3 import torch
4 from torch.nn import functional as F
5
6 from pytorch_lightning.metrics.functional.reduction import reduce
7
8
9 def mse(
10 pred: torch.Tensor,
11 target: torch.Tensor,
12 reduction: str = 'elementwise_mean'
13 ) -> torch.Tensor:
14 """
15 Computes mean squared error
16
17 Args:
18 pred: estimated labels
19 target: ground truth labels
20 reduction: a method to reduce metric score over labels (default: takes the mean)
21 Available reduction methods:
22
23 - elementwise_mean: takes the mean
24 - none: pass array
25 - sum: add elements
26
27 Return:
28 Tensor with MSE
29
30 Example:
31
32 >>> x = torch.tensor([0., 1, 2, 3])
33 >>> y = torch.tensor([0., 1, 2, 2])
34 >>> mse(x, y)
35 tensor(0.2500)
36
37 """
38 mse = F.mse_loss(pred, target, reduction='none')
39 mse = reduce(mse, reduction=reduction)
40 return mse
41
42
43 def rmse(
44 pred: torch.Tensor,
45 target: torch.Tensor,
46 reduction: str = 'elementwise_mean'
47 ) -> torch.Tensor:
48 """
49 Computes root mean squared error
50
51 Args:
52 pred: estimated labels
53 target: ground truth labels
54 reduction: a method to reduce metric score over labels (default: takes the mean)
55 Available reduction methods:
56
57 - elementwise_mean: takes the mean
58 - none: pass array
59 - sum: add elements
60
61 Return:
62 Tensor with RMSE
63
64
65 >>> x = torch.tensor([0., 1, 2, 3])
66 >>> y = torch.tensor([0., 1, 2, 2])
67 >>> rmse(x, y)
68 tensor(0.5000)
69
70 """
71 rmse = torch.sqrt(mse(pred, target, reduction=reduction))
72 return rmse
73
74
75 def mae(
76 pred: torch.Tensor,
77 target: torch.Tensor,
78 reduction: str = 'elementwise_mean'
79 ) -> torch.Tensor:
80 """
81 Computes mean absolute error
82
83 Args:
84 pred: estimated labels
85 target: ground truth labels
86 reduction: a method to reduce metric score over labels (default: takes the mean)
87 Available reduction methods:
88
89 - elementwise_mean: takes the mean
90 - none: pass array
91 - sum: add elements
92
93 Return:
94 Tensor with MAE
95
96 Example:
97
98 >>> x = torch.tensor([0., 1, 2, 3])
99 >>> y = torch.tensor([0., 1, 2, 2])
100 >>> mae(x, y)
101 tensor(0.2500)
102
103 """
104 mae = F.l1_loss(pred, target, reduction='none')
105 mae = reduce(mae, reduction=reduction)
106 return mae
107
108
109 def rmsle(
110 pred: torch.Tensor,
111 target: torch.Tensor,
112 reduction: str = 'elementwise_mean'
113 ) -> torch.Tensor:
114 """
115 Computes root mean squared log error
116
117 Args:
118 pred: estimated labels
119 target: ground truth labels
120 reduction: a method to reduce metric score over labels (default: takes the mean)
121 Available reduction methods:
122
123 - elementwise_mean: takes the mean
124 - none: pass array
125 - sum: add elements
126
127 Return:
128 Tensor with RMSLE
129
130 Example:
131
132 >>> x = torch.tensor([0., 1, 2, 3])
133 >>> y = torch.tensor([0., 1, 2, 2])
134 >>> rmsle(x, y)
135 tensor(0.0207)
136
137 """
138 rmsle = mse(torch.log(pred + 1), torch.log(target + 1), reduction=reduction)
139 return rmsle
140
141
142 def psnr(
143 pred: torch.Tensor,
144 target: torch.Tensor,
145 data_range: float = None,
146 base: float = 10.0,
147 reduction: str = 'elementwise_mean'
148 ) -> torch.Tensor:
149 """
150 Computes the peak signal-to-noise ratio
151
152 Args:
153 pred: estimated signal
154         target: ground truth signal
155 data_range: the range of the data. If None, it is determined from the data (max - min)
156 base: a base of a logarithm to use (default: 10)
157 reduction: a method to reduce metric score over labels (default: takes the mean)
158 Available reduction methods:
159
160 - elementwise_mean: takes the mean
161 - none: pass array
162             - sum: add elements
163
164 Return:
165 Tensor with PSNR score
166
167 Example:
168
169 >>> from pytorch_lightning.metrics.regression import PSNR
170 >>> pred = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
171 >>> target = torch.tensor([[3.0, 2.0], [1.0, 0.0]])
172 >>> metric = PSNR()
173 >>> metric(pred, target)
174 tensor(2.5527)
175
176 """
177
178 if data_range is None:
179 data_range = max(target.max() - target.min(), pred.max() - pred.min())
180 else:
181 data_range = torch.tensor(float(data_range))
182
183 mse_score = mse(pred.view(-1), target.view(-1), reduction=reduction)
184 psnr_base_e = 2 * torch.log(data_range) - torch.log(mse_score)
185 psnr = psnr_base_e * (10 / torch.log(torch.tensor(base)))
186 return psnr
187
188
189 def _gaussian_kernel(channel, kernel_size, sigma, device):
190 def gaussian(kernel_size, sigma, device):
191 gauss = torch.arange(
192 start=(1 - kernel_size) / 2, end=(1 + kernel_size) / 2, step=1, dtype=torch.float32, device=device
193 )
194 gauss = torch.exp(-gauss.pow(2) / (2 * pow(sigma, 2)))
195 return (gauss / gauss.sum()).unsqueeze(dim=0) # (1, kernel_size)
196
197 gaussian_kernel_x = gaussian(kernel_size[0], sigma[0], device)
198 gaussian_kernel_y = gaussian(kernel_size[1], sigma[1], device)
199 kernel = torch.matmul(gaussian_kernel_x.t(), gaussian_kernel_y) # (kernel_size, 1) * (1, kernel_size)
200
201 return kernel.expand(channel, 1, kernel_size[0], kernel_size[1])
202
203
204 def ssim(
205 pred: torch.Tensor,
206 target: torch.Tensor,
207 kernel_size: Sequence[int] = (11, 11),
208 sigma: Sequence[float] = (1.5, 1.5),
209 reduction: str = "elementwise_mean",
210 data_range: float = None,
211 k1: float = 0.01,
212 k2: float = 0.03
213 ) -> torch.Tensor:
214 """
215     Computes Structural Similarity Index Measure
216
217 Args:
218 pred: estimated image
219 target: ground truth image
220 kernel_size: size of the gaussian kernel (default: (11, 11))
221 sigma: Standard deviation of the gaussian kernel (default: (1.5, 1.5))
222 reduction: a method to reduce metric score over labels (default: takes the mean)
223 Available reduction methods:
224
225 - elementwise_mean: takes the mean
226             - none: pass array
227 - sum: add elements
228
229 data_range: Range of the image. If ``None``, it is determined from the image (max - min)
230 k1: Parameter of SSIM. Default: 0.01
231 k2: Parameter of SSIM. Default: 0.03
232
233 Return:
234 Tensor with SSIM score
235
236 Example:
237
238 >>> pred = torch.rand([16, 1, 16, 16])
239 >>> target = pred * 0.75
240 >>> ssim(pred, target)
241 tensor(0.9219)
242
243 """
244
245 if pred.dtype != target.dtype:
246 raise TypeError(
247 "Expected `pred` and `target` to have the same data type."
248 f" Got pred: {pred.dtype} and target: {target.dtype}."
249 )
250
251 if pred.shape != target.shape:
252 raise ValueError(
253 "Expected `pred` and `target` to have the same shape."
254 f" Got pred: {pred.shape} and target: {target.shape}."
255 )
256
257 if len(pred.shape) != 4 or len(target.shape) != 4:
258 raise ValueError(
259 "Expected `pred` and `target` to have BxCxHxW shape."
260 f" Got pred: {pred.shape} and target: {target.shape}."
261 )
262
263 if len(kernel_size) != 2 or len(sigma) != 2:
264 raise ValueError(
265 "Expected `kernel_size` and `sigma` to have the length of two."
266 f" Got kernel_size: {len(kernel_size)} and sigma: {len(sigma)}."
267 )
268
269 if any(x % 2 == 0 or x <= 0 for x in kernel_size):
270 raise ValueError(f"Expected `kernel_size` to have odd positive number. Got {kernel_size}.")
271
272 if any(y <= 0 for y in sigma):
273 raise ValueError(f"Expected `sigma` to have positive number. Got {sigma}.")
274
275 if data_range is None:
276 data_range = max(pred.max() - pred.min(), target.max() - target.min())
277
278 C1 = pow(k1 * data_range, 2)
279 C2 = pow(k2 * data_range, 2)
280 device = pred.device
281
282 channel = pred.size(1)
283 kernel = _gaussian_kernel(channel, kernel_size, sigma, device)
284
285 # Concatenate
286 # pred for mu_pred
287 # target for mu_target
288 # pred * pred for sigma_pred
289 # target * target for sigma_target
290 # pred * target for sigma_pred_target
291 input_list = torch.cat([pred, target, pred * pred, target * target, pred * target]) # (5 * B, C, H, W)
292 outputs = F.conv2d(input_list, kernel, groups=channel)
293 output_list = [outputs[x * pred.size(0): (x + 1) * pred.size(0)] for x in range(len(outputs))]
294
295 mu_pred_sq = output_list[0].pow(2)
296 mu_target_sq = output_list[1].pow(2)
297 mu_pred_target = output_list[0] * output_list[1]
298
299 sigma_pred_sq = output_list[2] - mu_pred_sq
300 sigma_target_sq = output_list[3] - mu_target_sq
301 sigma_pred_target = output_list[4] - mu_pred_target
302
303 UPPER = 2 * sigma_pred_target + C2
304 LOWER = sigma_pred_sq + sigma_target_sq + C2
305
306 ssim_idx = ((2 * mu_pred_target + C1) * UPPER) / ((mu_pred_sq + mu_target_sq + C1) * LOWER)
307
308 return reduce(ssim_idx, reduction)
309
[end of pytorch_lightning/metrics/functional/regression.py]
[start of pytorch_lightning/metrics/regression.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Sequence
16
17 import torch
18
19 from pytorch_lightning.metrics.functional.regression import (
20 mae,
21 mse,
22 psnr,
23 rmse,
24 rmsle,
25 ssim
26 )
27 from pytorch_lightning.metrics.metric import Metric
28
29
30 class MSE(Metric):
31 """
32 Computes the mean squared loss.
33
34 Example:
35
36 >>> pred = torch.tensor([0., 1, 2, 3])
37 >>> target = torch.tensor([0., 1, 2, 2])
38 >>> metric = MSE()
39 >>> metric(pred, target)
40 tensor(0.2500)
41
42 """
43
44 def __init__(
45 self,
46 reduction: str = 'elementwise_mean',
47 ):
48 """
49 Args:
50 reduction: a method to reduce metric score over labels (default: takes the mean)
51 Available reduction methods:
52 - elementwise_mean: takes the mean
53 - none: pass array
54 - sum: add elements
55 """
56 super().__init__(name='mse')
57 self.reduction = reduction
58
59 def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
60 """
61 Actual metric computation
62
63 Args:
64 pred: predicted labels
65 target: ground truth labels
66
67 Return:
68 A Tensor with the mse loss.
69 """
70 return mse(pred, target, self.reduction)
71
72
73 class RMSE(Metric):
74 """
75 Computes the root mean squared loss.
76
77 Example:
78
79 >>> pred = torch.tensor([0., 1, 2, 3])
80 >>> target = torch.tensor([0., 1, 2, 2])
81 >>> metric = RMSE()
82 >>> metric(pred, target)
83 tensor(0.5000)
84
85 """
86
87 def __init__(
88 self,
89 reduction: str = 'elementwise_mean',
90 ):
91 """
92 Args:
93 reduction: a method to reduce metric score over labels (default: takes the mean)
94 Available reduction methods:
95 - elementwise_mean: takes the mean
96 - none: pass array
97 - sum: add elements
98 """
99 super().__init__(name='rmse')
100 self.reduction = reduction
101
102 def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
103 """
104 Actual metric computation
105
106 Args:
107 pred: predicted labels
108 target: ground truth labels
109
110 Return:
111 A Tensor with the rmse loss.
112 """
113 return rmse(pred, target, self.reduction)
114
115
116 class MAE(Metric):
117 """
118 Computes the mean absolute loss or L1-loss.
119
120 Example:
121
122 >>> pred = torch.tensor([0., 1, 2, 3])
123 >>> target = torch.tensor([0., 1, 2, 2])
124 >>> metric = MAE()
125 >>> metric(pred, target)
126 tensor(0.2500)
127
128 """
129
130 def __init__(
131 self,
132 reduction: str = 'elementwise_mean',
133 ):
134 """
135 Args:
136 reduction: a method to reduce metric score over labels (default: takes the mean)
137 Available reduction methods:
138 - elementwise_mean: takes the mean
139 - none: pass array
140 - sum: add elements
141 """
142 super().__init__(name='mae')
143 self.reduction = reduction
144
145 def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
146 """
147 Actual metric computation
148
149 Args:
150 pred: predicted labels
151 target: ground truth labels
152
153 Return:
154 A Tensor with the mae loss.
155 """
156 return mae(pred, target, self.reduction)
157
158
159 class RMSLE(Metric):
160 """
161 Computes the root mean squared log loss.
162
163 Example:
164
165 >>> pred = torch.tensor([0., 1, 2, 3])
166 >>> target = torch.tensor([0., 1, 2, 2])
167 >>> metric = RMSLE()
168 >>> metric(pred, target)
169 tensor(0.0207)
170
171 """
172
173 def __init__(
174 self,
175 reduction: str = 'elementwise_mean',
176 ):
177 """
178 Args:
179 reduction: a method to reduce metric score over labels (default: takes the mean)
180 Available reduction methods:
181 - elementwise_mean: takes the mean
182 - none: pass array
183 - sum: add elements
184 """
185 super().__init__(name='rmsle')
186 self.reduction = reduction
187
188 def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
189 """
190 Actual metric computation
191
192 Args:
193 pred: predicted labels
194 target: ground truth labels
195
196 Return:
197 A Tensor with the rmsle loss.
198 """
199 return rmsle(pred, target, self.reduction)
200
201
202 class PSNR(Metric):
203 """
204 Computes the peak signal-to-noise ratio
205
206 Example:
207
208 >>> pred = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
209 >>> target = torch.tensor([[3.0, 2.0], [1.0, 0.0]])
210 >>> metric = PSNR()
211 >>> metric(pred, target)
212 tensor(2.5527)
213
214 """
215
216 def __init__(
217 self,
218 data_range: float = None,
219 base: int = 10,
220 reduction: str = 'elementwise_mean'
221 ):
222 """
223 Args:
224 data_range: the range of the data. If None, it is determined from the data (max - min)
225 base: a base of a logarithm to use (default: 10)
226 reduction: a method to reduce metric score over labels (default: takes the mean)
227 Available reduction methods:
228 - elementwise_mean: takes the mean
229 - none: pass array
230 - sum: add elements
231 """
232 super().__init__(name='psnr')
233 self.data_range = data_range
234 self.base = float(base)
235 self.reduction = reduction
236
237 def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
238 """
239 Actual metric computation
240
241 Args:
242 pred: predicted labels
243 target: ground truth labels
244
245 Return:
246 A Tensor with psnr score.
247 """
248 return psnr(pred, target, self.data_range, self.base, self.reduction)
249
250
251 class SSIM(Metric):
252 """
253     Computes Structural Similarity Index Measure
254
255 Example:
256
257 >>> pred = torch.rand([16, 1, 16, 16])
258 >>> target = pred * 0.75
259 >>> metric = SSIM()
260 >>> metric(pred, target)
261 tensor(0.9219)
262
263 """
264
265 def __init__(
266 self,
267 kernel_size: Sequence[int] = (11, 11),
268 sigma: Sequence[float] = (1.5, 1.5),
269 reduction: str = "elementwise_mean",
270 data_range: float = None,
271 k1: float = 0.01,
272 k2: float = 0.03
273 ):
274 """
275 Args:
276 kernel_size: Size of the gaussian kernel (default: (11, 11))
277 sigma: Standard deviation of the gaussian kernel (default: (1.5, 1.5))
278 reduction: a method to reduce metric score over labels (default: takes the mean)
279 Available reduction methods:
280 - elementwise_mean: takes the mean
281                 - none: pass array
282 - sum: add elements
283
284 data_range: Range of the image. If ``None``, it is determined from the image (max - min)
285 k1: Parameter of SSIM. Default: 0.01
286 k2: Parameter of SSIM. Default: 0.03
287 """
288 super().__init__(name="ssim")
289 self.kernel_size = kernel_size
290 self.sigma = sigma
291 self.reduction = reduction
292 self.data_range = data_range
293 self.k1 = k1
294 self.k2 = k2
295
296 def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
297 """
298 Actual metric computation
299
300 Args:
301 pred: Estimated image
302 target: Ground truth image
303
304 Return:
305 A Tensor with SSIM score.
306 """
307 return ssim(pred, target, self.kernel_size, self.sigma, self.reduction, self.data_range, self.k1, self.k2)
308
[end of pytorch_lightning/metrics/regression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
Lightning-AI/lightning
|
cb0c60bf7a0cc2d97915ed6585224ea842d3f45f
|
RMSLE metric appears to be incorrect
## 🐛 Bug
The usage of mse [in the rmsle function](https://github.com/PyTorchLightning/pytorch-lightning/blob/22b9642117394d3c50587ae137dbf94c6dd5173c/pytorch_lightning/metrics/functional/regression.py#L138) looks wrong to me. It looks like this function currently computes _mean squared log error_ instead of _root mean squared log error_.
### Expected behavior
I would expect that rmsle looks like this:
```python
rmsle = rmse(torch.log(pred + 1), torch.log(target + 1), reduction=reduction)
```
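As a quick sanity check with the doctest inputs from `regression.py`, the value the function currently returns is the mean squared log error; the expected RMSLE is its square root:
```python
import torch

x = torch.tensor([0., 1, 2, 3])
y = torch.tensor([0., 1, 2, 2])

msle = torch.mean((torch.log(x + 1) - torch.log(y + 1)) ** 2)  # what rmsle currently returns, ~0.0207
rmsle = torch.sqrt(msle)                                       # what it should return, ~0.1438
print(msle, rmsle)  # tensor(0.0207) tensor(0.1438)
```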
|
Hi! thanks for your contribution!, great first issue!
yep it's a bug. mind send a PR :)
|
2020-08-26T07:34:31Z
|
<patch>
diff --git a/pytorch_lightning/metrics/functional/regression.py b/pytorch_lightning/metrics/functional/regression.py
--- a/pytorch_lightning/metrics/functional/regression.py
+++ b/pytorch_lightning/metrics/functional/regression.py
@@ -132,10 +132,10 @@ def rmsle(
>>> x = torch.tensor([0., 1, 2, 3])
>>> y = torch.tensor([0., 1, 2, 2])
>>> rmsle(x, y)
- tensor(0.0207)
+ tensor(0.1438)
"""
- rmsle = mse(torch.log(pred + 1), torch.log(target + 1), reduction=reduction)
+ rmsle = rmse(torch.log(pred + 1), torch.log(target + 1), reduction=reduction)
return rmsle
diff --git a/pytorch_lightning/metrics/regression.py b/pytorch_lightning/metrics/regression.py
--- a/pytorch_lightning/metrics/regression.py
+++ b/pytorch_lightning/metrics/regression.py
@@ -166,7 +166,7 @@ class RMSLE(Metric):
>>> target = torch.tensor([0., 1, 2, 2])
>>> metric = RMSLE()
>>> metric(pred, target)
- tensor(0.0207)
+ tensor(0.1438)
"""
</patch>
|
[]
|
[]
|